[PATCH 1/5] dma-buf: fix dma_resv_test_signaled test_all handling

Daniel Vetter daniel at ffwll.ch
Fri Jun 11 14:47:58 UTC 2021


On Fri, Jun 11, 2021 at 02:02:57PM +0200, Christian König wrote:
> As the name implies if testing all fences is requested we
> should indeed test all fences and not skip the exclusive
> one because we see shared ones.
> 
> Signed-off-by: Christian König <christian.koenig at amd.com>

Hm, I thought we had the rule that when both fences exist, then
collectively the shared ones must signal no earlier than the exclusive
one.

That's at least the contract we've implemented in dma_resv.h. But I've
also found a bunch of drivers that are a lot more yolo about this.

I think there's a solid case here to just always take all the fences if we
ask for all the shared ones, but if we go that way then I'd say
- a clear kerneldoc patch to really hammer this in (currently we're not
  good at all in this regard)
- going through the drivers a bit to check for this (I have some of that
  done already in my earlier series, need to respin it and send it out)

But I'm kinda not seeing why this needs to be in this patch series here.
-Daniel

> ---
>  drivers/dma-buf/dma-resv.c | 33 ++++++++++++---------------------
>  1 file changed, 12 insertions(+), 21 deletions(-)
> 
> diff --git a/drivers/dma-buf/dma-resv.c b/drivers/dma-buf/dma-resv.c
> index f26c71747d43..c66bfdde9454 100644
> --- a/drivers/dma-buf/dma-resv.c
> +++ b/drivers/dma-buf/dma-resv.c
> @@ -615,25 +615,21 @@ static inline int dma_resv_test_signaled_single(struct dma_fence *passed_fence)
>   */
>  bool dma_resv_test_signaled(struct dma_resv *obj, bool test_all)
>  {
> -	unsigned int seq, shared_count;
> +	struct dma_fence *fence;
> +	unsigned int seq;
>  	int ret;
>  
>  	rcu_read_lock();
>  retry:
>  	ret = true;
> -	shared_count = 0;
>  	seq = read_seqcount_begin(&obj->seq);
>  
>  	if (test_all) {
>  		struct dma_resv_list *fobj = dma_resv_shared_list(obj);
> -		unsigned int i;
> -
> -		if (fobj)
> -			shared_count = fobj->shared_count;
> +		unsigned int i, shared_count;
>  
> +		shared_count = fobj ? fobj->shared_count : 0;
>  		for (i = 0; i < shared_count; ++i) {
> -			struct dma_fence *fence;
> -
>  			fence = rcu_dereference(fobj->shared[i]);
>  			ret = dma_resv_test_signaled_single(fence);
>  			if (ret < 0)
> @@ -641,24 +637,19 @@ bool dma_resv_test_signaled(struct dma_resv *obj, bool test_all)
>  			else if (!ret)
>  				break;
>  		}
> -
> -		if (read_seqcount_retry(&obj->seq, seq))
> -			goto retry;
>  	}
>  
> -	if (!shared_count) {
> -		struct dma_fence *fence_excl = dma_resv_excl_fence(obj);
> -
> -		if (fence_excl) {
> -			ret = dma_resv_test_signaled_single(fence_excl);
> -			if (ret < 0)
> -				goto retry;
> +	fence = dma_resv_excl_fence(obj);
> +	if (fence) {
> +		ret = dma_resv_test_signaled_single(fence);
> +		if (ret < 0)
> +			goto retry;
>  
> -			if (read_seqcount_retry(&obj->seq, seq))
> -				goto retry;
> -		}
>  	}
>  
> +	if (read_seqcount_retry(&obj->seq, seq))
> +		goto retry;
> +
>  	rcu_read_unlock();
>  	return ret;
>  }
> -- 
> 2.25.1
> 

-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch

