[PATCH 1/3] dma-buf/sw_sync: Avoid recursive lock during fence signal.

Christian König christian.koenig at amd.com
Wed Jul 15 08:57:38 UTC 2020


Am 14.07.20 um 22:06 schrieb Chris Wilson:
> From: Bas Nieuwenhuizen <bas at basnieuwenhuizen.nl>
>
> Calltree:
>    timeline_fence_release
>    drm_sched_entity_wakeup
>    dma_fence_signal_locked
>    sync_timeline_signal
>    sw_sync_ioctl
>
> Releasing the reference to the fence in the fence signal callback
> seems reasonable to me, so this patch avoids the locking issue in
> sw_sync.
>
> d3862e44daa7 ("dma-buf/sw-sync: Fix locking around sync_timeline lists")
> fixed the recursive locking issue but caused a use-after-free. Later
> d3c6dd1fb30d ("dma-buf/sw_sync: Synchronize signal vs syncpt free")
> fixed the use-after-free but reintroduced the recursive locking issue.
>
> In this attempt we still avoid the use-after-free because the release
> function still always takes the lock, and outside of the locked region
> in the signal function we only operate on properly refcounted references.
>
> We furthermore avoid the recursive lock by making sure that either:
>
> 1) We have a properly refcounted reference, preventing the signal from
>     triggering the release function inside the locked region.
> 2) The refcount was already zero, and hence nobody will be able to trigger
>     the release function from the signal function.
>
> v2: Move dma_fence_signal() into second loop in preparation to moving
> the callback out of the timeline obj->lock.
>
> Fixes: d3c6dd1fb30d ("dma-buf/sw_sync: Synchronize signal vs syncpt free")
> Cc: Sumit Semwal <sumit.semwal at linaro.org>
> Cc: Chris Wilson <chris at chris-wilson.co.uk>
> Cc: Gustavo Padovan <gustavo at padovan.org>
> Cc: Christian König <christian.koenig at amd.com>
> Cc: <stable at vger.kernel.org>
> Signed-off-by: Bas Nieuwenhuizen <bas at basnieuwenhuizen.nl>
> Signed-off-by: Chris Wilson <chris at chris-wilson.co.uk>
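
For readers less familiar with the trick: the change boils down to taking a
reference under obj->lock, parking the signalled points on a private list, and
only calling dma_fence_signal() after the lock has been dropped. A minimal
userspace sketch of that shape, with made-up names (struct item,
complete_all(), item_get_unless_zero()) standing in for the real sw_sync
structures and dma_fence helpers:

/*
 * Minimal analogue of the pattern above; the real code uses obj->lock,
 * dma_fence_get_rcu() and dma_fence_signal().
 */
#include <pthread.h>
#include <stdatomic.h>
#include <stdlib.h>

struct item {
	atomic_int refcount;             /* 0: a releaser already owns teardown */
	struct item *next;               /* link on the pending list */
	void (*complete)(struct item *); /* may take list_lock via a release */
};

static pthread_mutex_t list_lock = PTHREAD_MUTEX_INITIALIZER;
static struct item *pending;

/* Analogue of dma_fence_get_rcu(): take a reference only if it is not zero. */
static int item_get_unless_zero(struct item *it)
{
	int old = atomic_load(&it->refcount);

	while (old != 0)
		if (atomic_compare_exchange_weak(&it->refcount, &old, old + 1))
			return 1;
	return 0;
}

static void item_put(struct item *it)
{
	if (atomic_fetch_sub(&it->refcount, 1) == 1)
		free(it);                /* "release function", list_lock not held */
}

static void complete_all(void)
{
	struct item **pp = &pending, *it, *done = NULL;

	pthread_mutex_lock(&list_lock);
	while ((it = *pp)) {
		/* Already zero: the release path unlinks it, just step over. */
		if (!item_get_unless_zero(it)) {
			pp = &it->next;
			continue;
		}
		*pp = it->next;          /* unlink from pending ... */
		it->next = done;         /* ... and park on a private list */
		done = it;
	}
	pthread_mutex_unlock(&list_lock);

	/* Callbacks run without list_lock held, so no recursive locking. */
	while ((it = done)) {
		done = it->next;
		it->complete(it);
		item_put(it);
	}
}

static void say_done(struct item *it)
{
	(void)it;                        /* real code would wake waiters here */
}

int main(void)
{
	struct item *it = calloc(1, sizeof(*it));

	atomic_init(&it->refcount, 1);   /* creator's reference */
	it->complete = say_done;

	pthread_mutex_lock(&list_lock);
	it->next = pending;
	pending = it;
	pthread_mutex_unlock(&list_lock);

	complete_all();                  /* completes and drops its own reference */
	item_put(it);                    /* drop the creator's reference, frees it */
	return 0;
}

As in the patch, an entry whose refcount already dropped to zero is just
stepped over; unlinking and freeing it is left to the release path, which
itself serializes on the list lock.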

Looks reasonable to me, but I'm not an expert on this container.

So the patch is Acked-by: Christian König <christian.koenig at amd.com>

Regards,
Christian.

> ---
>   drivers/dma-buf/sw_sync.c | 32 ++++++++++++++++++++++----------
>   1 file changed, 22 insertions(+), 10 deletions(-)
>
> diff --git a/drivers/dma-buf/sw_sync.c b/drivers/dma-buf/sw_sync.c
> index 348b3a9170fa..807c82148062 100644
> --- a/drivers/dma-buf/sw_sync.c
> +++ b/drivers/dma-buf/sw_sync.c
> @@ -192,6 +192,7 @@ static const struct dma_fence_ops timeline_fence_ops = {
>   static void sync_timeline_signal(struct sync_timeline *obj, unsigned int inc)
>   {
>   	struct sync_pt *pt, *next;
> +	LIST_HEAD(signal);
>   
>   	trace_sync_timeline(obj);
>   
> @@ -203,21 +204,32 @@ static void sync_timeline_signal(struct sync_timeline *obj, unsigned int inc)
>   		if (!timeline_fence_signaled(&pt->base))
>   			break;
>   
> -		list_del_init(&pt->link);
> -		rb_erase(&pt->node, &obj->pt_tree);
> -
>   		/*
> -		 * A signal callback may release the last reference to this
> -		 * fence, causing it to be freed. That operation has to be
> -		 * last to avoid a use after free inside this loop, and must
> -		 * be after we remove the fence from the timeline in order to
> -		 * prevent deadlocking on timeline->lock inside
> -		 * timeline_fence_release().
> +		 * We need to take a reference to avoid a release during
> +		 * signalling (which can cause a recursive lock of obj->lock).
> +		 * If refcount was already zero, another thread is already
> +		 * taking care of destroying the fence.
>   		 */
> -		dma_fence_signal_locked(&pt->base);
> +		if (!dma_fence_get_rcu(&pt->base))
> +			continue;
> +
> +		list_move_tail(&pt->link, &signal);
> +		rb_erase(&pt->node, &obj->pt_tree);
>   	}
>   
>   	spin_unlock_irq(&obj->lock);
> +
> +	list_for_each_entry_safe(pt, next, &signal, link) {
> +		/*
> +		 * This needs to be cleared before release, otherwise the
> +		 * timeline_fence_release function gets confused about also
> +		 * removing the fence from the pt_tree.
> +		 */
> +		list_del_init(&pt->link);
> +
> +		dma_fence_signal(&pt->base);
> +		dma_fence_put(&pt->base);
> +	}
>   }
>   
>   /**
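
As a side note for exercising the calltree at the top of the commit message:
the whole path can be driven from userspace through the sw_sync debugfs node.
A rough sketch, assuming CONFIG_SW_SYNC is enabled and debugfs is mounted at
/sys/kernel/debug; sw_sync has no uapi header, so the struct and ioctl numbers
below are hand-copied and should be double-checked against sw_sync.c:

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/types.h>

struct sw_sync_create_fence_data {
	__u32 value;          /* timeline seqno the new fence signals at */
	char  name[32];
	__s32 fence;          /* out: fd of the new fence */
};

#define SW_SYNC_IOC_MAGIC        'W'
#define SW_SYNC_IOC_CREATE_FENCE _IOWR(SW_SYNC_IOC_MAGIC, 0, struct sw_sync_create_fence_data)
#define SW_SYNC_IOC_INC          _IOW(SW_SYNC_IOC_MAGIC, 1, __u32)

int main(void)
{
	struct sw_sync_create_fence_data data = { .value = 1 };
	__u32 inc = 1;
	int timeline = open("/sys/kernel/debug/sync/sw_sync", O_RDWR);

	if (timeline < 0) {
		perror("open sw_sync");
		return 1;
	}

	strncpy(data.name, "test-pt", sizeof(data.name) - 1);
	if (ioctl(timeline, SW_SYNC_IOC_CREATE_FENCE, &data) < 0) {
		perror("SW_SYNC_IOC_CREATE_FENCE");
		close(timeline);
		return 1;
	}

	/* Advancing the timeline signals the fence created above; this is the
	 * sw_sync_ioctl -> sync_timeline_signal path the patch touches. */
	if (ioctl(timeline, SW_SYNC_IOC_INC, &inc) < 0)
		perror("SW_SYNC_IOC_INC");

	close(data.fence);
	close(timeline);
	return 0;
}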


