[PATCH 11/23] dma-buf: drop the DAG approach for the dma_resv object v2

Daniel Vetter daniel at ffwll.ch
Wed Mar 23 13:40:16 UTC 2022


On Mon, Mar 21, 2022 at 02:58:44PM +0100, Christian König wrote:
> So far we had the approach of using a directed acyclic
> graph (DAG) of fences with the dma_resv object.
> 
> This turned out to have many downsides, especially that every
> single driver and user of this interface needs to be aware of
> this restriction when adding fences. If the rules for the DAG
> are not followed we end up with potentially hard-to-debug
> memory corruption, information leaks or even elephant big
> security holes because we allow userspace to access freed
> memory.
> 
> Since we already took a step back from that by always
> looking at all fences, we now go a step further and stop
> dropping the shared fences when a new exclusive one is
> added.
> 
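Just to spell out what "always looking at all fences" means on the
consumer side, it is roughly the below (sketch only, not part of this
patch, names made up, caller holds dma_resv_lock()):

#include <linux/dma-fence.h>
#include <linux/dma-resv.h>

/* Wait for every fence, shared and exclusive, since the exclusive slot
 * no longer implies anything about the shared ones.
 */
static int example_wait_all_fences(struct dma_resv *resv, bool intr)
{
	struct dma_resv_iter cursor;
	struct dma_fence *fence;
	long ret;

	dma_resv_for_each_fence(&cursor, resv, true, fence) {
		ret = dma_fence_wait(fence, intr);
		if (ret)
			return ret;
	}
	return 0;
}
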
> v2: Drop some now superfluous documentation
> 
> Signed-off-by: Christian König <christian.koenig at amd.com>
> ---
>  drivers/dma-buf/dma-resv.c | 16 +---------------
>  include/linux/dma-buf.h    |  7 -------
>  include/linux/dma-resv.h   | 22 +++++-----------------
>  3 files changed, 6 insertions(+), 39 deletions(-)
> 
> diff --git a/drivers/dma-buf/dma-resv.c b/drivers/dma-buf/dma-resv.c
> index 1c9af97fe904..4b12141579e2 100644
> --- a/drivers/dma-buf/dma-resv.c
> +++ b/drivers/dma-buf/dma-resv.c
> @@ -358,35 +358,21 @@ EXPORT_SYMBOL(dma_resv_replace_fences);
>   * @fence: the exclusive fence to add
>   *
>   * Add a fence to the exclusive slot. @obj must be locked with dma_resv_lock().
> - * Note that this function replaces all fences attached to @obj, see also
> - * &dma_resv.fence_excl for a discussion of the semantics.
> + * See also &dma_resv.fence_excl for a discussion of the semantics.
>   */
>  void dma_resv_add_excl_fence(struct dma_resv *obj, struct dma_fence *fence)
>  {
>  	struct dma_fence *old_fence = dma_resv_excl_fence(obj);
> -	struct dma_resv_list *old;
> -	u32 i = 0;
>  
>  	dma_resv_assert_held(obj);
>  
> -	old = dma_resv_shared_list(obj);
> -	if (old)
> -		i = old->shared_count;
> -
>  	dma_fence_get(fence);
>  
>  	write_seqcount_begin(&obj->seq);
>  	/* write_seqcount_begin provides the necessary memory barrier */
>  	RCU_INIT_POINTER(obj->fence_excl, fence);
> -	if (old)
> -		old->shared_count = 0;
>  	write_seqcount_end(&obj->seq);
>  
> -	/* inplace update, no shared fences */
> -	while (i--)
> -		dma_fence_put(rcu_dereference_protected(old->shared[i],
> -						dma_resv_held(obj)));
> -
>  	dma_fence_put(old_fence);
>  }
>  EXPORT_SYMBOL(dma_resv_add_excl_fence);
> diff --git a/include/linux/dma-buf.h b/include/linux/dma-buf.h
> index 7ab50076e7a6..74083e62e19d 100644
> --- a/include/linux/dma-buf.h
> +++ b/include/linux/dma-buf.h
> @@ -420,13 +420,6 @@ struct dma_buf {
>  	 * - Dynamic importers should set fences for any access that they can't
>  	 *   disable immediately from their &dma_buf_attach_ops.move_notify
>  	 *   callback.
> -	 *
> -	 * IMPORTANT:
> -	 *
> -	 * All drivers must obey the struct dma_resv rules, specifically the
> -	 * rules for updating fences, see &dma_resv.fence_excl and
> -	 * &dma_resv.fence. If these dependency rules are broken access tracking
> -	 * can be lost resulting in use after free issues.

Uh, that's a bit much. I do think we should keep this and update it to
point at whatever new dma_resv fence slot rules you're adding. Maybe just
keep the first part, like:

	 * All drivers must obey the struct dma_resv rules, specifically the
	 * rules for updating and obeying fences.
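
And the "updating" side of those rules stays roughly the below for the
shared slots (sketch only, not from this patch, names made up):

static int example_add_shared_fence(struct dma_resv *obj,
				    struct dma_fence *fence)
{
	int ret;

	ret = dma_resv_lock(obj, NULL);
	if (ret)
		return ret;

	/* A slot has to be reserved before a shared fence can be added. */
	ret = dma_resv_reserve_shared(obj, 1);
	if (!ret)
		dma_resv_add_shared_fence(obj, fence);

	dma_resv_unlock(obj);
	return ret;
}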

With that

Reviewed-by: Daniel Vetter <daniel.vetter at ffwll.ch>

>  	 */
>  	struct dma_resv *resv;
>  
> diff --git a/include/linux/dma-resv.h b/include/linux/dma-resv.h
> index 20e13f36710a..ecb697d4d861 100644
> --- a/include/linux/dma-resv.h
> +++ b/include/linux/dma-resv.h
> @@ -93,23 +93,11 @@ struct dma_resv {
>  	 *
>  	 * The exclusive fence, if there is one currently.
>  	 *
> -	 * There are two ways to update this fence:
> -	 *
> -	 * - First by calling dma_resv_add_excl_fence(), which replaces all
> -	 *   fences attached to the reservation object. To guarantee that no
> -	 *   fences are lost, this new fence must signal only after all previous
> -	 *   fences, both shared and exclusive, have signalled. In some cases it
> -	 *   is convenient to achieve that by attaching a struct dma_fence_array
> -	 *   with all the new and old fences.
> -	 *
> -	 * - Alternatively the fence can be set directly, which leaves the
> -	 *   shared fences unchanged. To guarantee that no fences are lost, this
> -	 *   new fence must signal only after the previous exclusive fence has
> -	 *   signalled. Since the shared fences are staying intact, it is not
> -	 *   necessary to maintain any ordering against those. If semantically
> -	 *   only a new access is added without actually treating the previous
> -	 *   one as a dependency the exclusive fences can be strung together
> -	 *   using struct dma_fence_chain.
> +	 * To guarantee that no fences are lost, this new fence must signal
> +	 * only after the previous exclusive fence has signalled. If
> +	 * semantically only a new access is added without actually treating the
> +	 * previous one as a dependency the exclusive fences can be strung
> +	 * together using struct dma_fence_chain.
>  	 *
>  	 * Note that actual semantics of what an exclusive or shared fence mean
>  	 * is defined by the user, for reservation objects shared across drivers
> -- 
> 2.25.1
> 
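For reference, the dma_fence_chain option the updated &dma_resv.fence_excl
kerneldoc mentions would look roughly like this from a driver (sketch
only, names made up, error handling trimmed, caller holds the resv lock):

#include <linux/dma-fence-chain.h>
#include <linux/dma-resv.h>

static int example_append_excl_fence(struct dma_resv *obj,
				     struct dma_fence *fence, u64 seqno)
{
	struct dma_fence_chain *chain;

	dma_resv_assert_held(obj);

	chain = dma_fence_chain_alloc();
	if (!chain)
		return -ENOMEM;

	/* dma_fence_chain_init() takes over the references passed in, so
	 * grab one for the previous exclusive fence and one for the new
	 * fence the caller keeps using.
	 */
	dma_fence_chain_init(chain,
			     dma_fence_get(dma_resv_excl_fence(obj)),
			     dma_fence_get(fence), seqno);

	dma_resv_add_excl_fence(obj, &chain->base);

	/* The reservation object holds its own reference now. */
	dma_fence_put(&chain->base);
	return 0;
}

That keeps the previous exclusive fence alive behind the chain node
instead of requiring the new fence to treat it as a dependency.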

-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch

