[PATCH 20/24] dma-buf: add DMA_RESV_USAGE_KERNEL

Christian König ckoenig.leichtzumerken at gmail.com
Thu Mar 3 13:49:22 UTC 2022


On 02.03.22 at 19:11, Jason Ekstrand wrote:
> On Wed, Dec 22, 2021 at 4:05 PM Daniel Vetter <daniel at ffwll.ch> wrote:
>
>     On Tue, Dec 07, 2021 at 01:34:07PM +0100, Christian König wrote:
>     > Add a usage for kernel submissions. Waiting for those
>     > is mandatory for dynamic DMA-bufs.
>     >
>     > Signed-off-by: Christian König <christian.koenig at amd.com>
>
>     Again just skipping to the doc bikeshedding, maybe with more cc others
>     help with some code review too.
>
>     >  EXPORT_SYMBOL(ib_umem_dmabuf_map_pages);
>     > diff --git a/include/linux/dma-resv.h b/include/linux/dma-resv.h
>     > index 4f3a6abf43c4..29d799991496 100644
>     > --- a/include/linux/dma-resv.h
>     > +++ b/include/linux/dma-resv.h
>     > @@ -54,8 +54,30 @@ struct dma_resv_list;
>     >   *
>     >   * This enum describes the different use cases for a dma_resv object and
>     >   * controls which fences are returned when queried.
>     > + *
>     > + * An important fact is that there is the order KERNEL<WRITE<READ and
>     > + * when the dma_resv object is asked for fences for one use case the fences
>     > + * for the lower use case are returned as well.
>     > + *
>     > + * For example when asking for WRITE fences then the KERNEL fences are returned
>     > + * as well. Similar when asked for READ fences then both WRITE and KERNEL
>     > + * fences are returned as well.
>     >   */
>     >  enum dma_resv_usage {
>     > +     /**
>     > +      * @DMA_RESV_USAGE_KERNEL: For in kernel memory management only.
>     > +      *
>     > +      * This should only be used for things like copying or clearing memory
>     > +      * with a DMA hardware engine for the purpose of kernel memory
>     > +      * management.
>     > +      *
>     > +         * Drivers *always* need to wait for those fences before accessing the
>
>
> super-nit: Your whitespace is wrong here.

Fixed, thanks.

>     s/need to/must/ to stay with usual RFC wording. It's a hard
>     requirement or there's a security bug somewhere.
>
>
> Yeah, probably.  I like *must* but that's because that's what we use 
> in the VK spec.  Do whatever's usual for kernel docs.

I agree, must sounds better and is already fixed.

>
> Not sure where to put this comment but I feel like the way things are 
> framed is a bit the wrong way around. Specifically, I don't think we 
> should be talking about what fences you must wait on so much as what 
> fences you can safely skip.  In the previous model, the exclusive 
> fence had to be waited on at all times and the shared fences could be 
> skipped unless you were doing something that would result in a new 
> exclusive fence.

Well, that's exactly what we unfortunately didn't do: as Daniel explained,
some drivers just ignored the exclusive fence sometimes.

> In this new world of "it's just a bucket of fences", we need to be 
> very sure the waiting is happening on the right things.  It sounds (I 
> could be wrong) like USAGE_KERNEL is the new exclusive fence.  If so, 
> we need to make it virtually impossible to ignore.

Yes, exactly that's the goal here.

>
> Sorry if that's a bit of a ramble.  I think what I'm saying is this:  
> In whatever helpers or iterators we have, be that get_singleton or 
> iter_begin or whatever, we need to be sure we specify things in terms 
> of exclusion and not inclusion.  "Give me everything except implicit 
> sync read fences" rather than "give me implicit sync write fences".

Mhm, exactly that's what I tried to avoid. The basic idea here is that
the driver and memory management components tell the framework what use
case they have, and the framework returns the appropriate fences for that.

So when the use case is, for example, mmap()ing the buffer on the CPU
without any further sync, you only get the kernel fences.

When the use case is adding a CS which is an implicit read, you get all
kernel fences plus all writers (see the function dma_resv_usage_rw).

When the use case is adding a CS which is an implicit write, you get all
kernel fences, the other writers as well as the readers.

And finally, when you are the memory management and want to move a buffer
around, you get everything.
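
In code that looks roughly like this, with obj being the dma_resv object
in question (a sketch only, assuming the dma_resv_wait_timeout() signature
and the dma_resv_usage_rw() helper proposed in this series; the BOOKKEEP
level only exists with the follow-up patch):

    /* mmap() on the CPU without further sync: kernel fences only */
    dma_resv_wait_timeout(obj, DMA_RESV_USAGE_KERNEL, true,
                          MAX_SCHEDULE_TIMEOUT);

    /* CS doing an implicit read: kernel fences plus all writers */
    dma_resv_wait_timeout(obj, dma_resv_usage_rw(false), true,
                          MAX_SCHEDULE_TIMEOUT);

    /* CS doing an implicit write: kernel fences, writers and readers */
    dma_resv_wait_timeout(obj, dma_resv_usage_rw(true), true,
                          MAX_SCHEDULE_TIMEOUT);

    /* memory management moving the buffer: everything */
    dma_resv_wait_timeout(obj, DMA_RESV_USAGE_READ, false,
                          MAX_SCHEDULE_TIMEOUT);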

>   If having a single, well-ordered enum is sufficient for that, 
> great.  If we think we'll ever end up with something other than a 
> strict ordering, we may need to re-think a bit.

I actually started with a matrix which indicates when to sync with what,
but at least for now the well-ordered enum gets the job done just as well
and is far less complex.
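
To illustrate the difference (a hypothetical simplification, not the
actual iterator code from this series): with the ordered enum the
filtering collapses into a single comparison, where a matrix would need
a per-pair lookup table:

    /* include a fence when its usage is at or below the queried use case */
    static bool dma_resv_fence_matches(enum dma_resv_usage fence_usage,
                                       enum dma_resv_usage queried_usage)
    {
            /* relies on KERNEL < WRITE < READ in the enum declaration */
            return fence_usage <= queried_usage;
    }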

> Concerning well-ordering... I'm a bit surprised to only see three 
> values here.  I expected 4:
>
>  - kernel exclusive, used for memory moves and the like
>  - kernel shared, used for "I'm using this right now, don't yank it 
> out from under me" which may not have any implicit sync implications 
> whatsoever
>  - implicit sync write
>  - implicit sync read

See the follow-up patch which adds DMA_RESV_USAGE_BOOKKEEP. That's the
fourth one you are missing.

> If we had those four, I don't think the strict ordering works 
> anymore.  From the POV of implicit sync, they would look at the 
> implicit sync read/write fences and maybe not even kernel exclusive.  
> From the POV of some doing a BO move, they'd look at all of them.  
> From the POV of holding on to memory while Vulkan is using it, you 
> want to set a kernel shared fence but it doesn't need to interact with 
> implicit sync at all.  Am I missing something obvious here?

Yeah, sounds like you didn't look at patch 21 :)

My thinking is more or less exactly the same. The only difference is that
I've put the BOOKKEEP usage after the implicit read and write usages.
This way you can keep the strict ordering, since the implicit submissions
won't ask for the BOOKKEEP usage.

The order is then KERNEL<WRITE<READ<BOOKKEEP. See the final 
documentation here as well:

  * An important fact is that there is the order KERNEL<WRITE<READ<BOOKKEEP
  * and when the dma_resv object is asked for fences for one use case the
  * fences for the lower use case are returned as well.
  *
  * For example when asking for WRITE fences then the KERNEL fences are
  * returned as well. Similar when asked for READ fences then both WRITE
  * and KERNEL fences are returned as well.
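
Abbreviated, the enum then reads like this (kerneldoc trimmed down to
one-line summaries, see patch 21 for the full text):

    enum dma_resv_usage {
            DMA_RESV_USAGE_KERNEL,   /* kernel memory management, never skip */
            DMA_RESV_USAGE_WRITE,    /* implicit sync writers */
            DMA_RESV_USAGE_READ,     /* implicit sync readers */
            DMA_RESV_USAGE_BOOKKEEP  /* explicit sync, only the memory
                                      * management waits for those */
    };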

Regards,
Christian.


>
> --Jason
>
>
>     > +      * resource protected by the dma_resv object. The only exception for
>     > +      * that is when the resource is known to be locked down in place by
>     > +      * pinning it previously.
>
>     Is this true? This sounds more confusing than helpful, because afaik
>     in general our pin interfaces do not block for any kernel fences.
>     dma_buf_pin doesn't do that for sure. And I don't think ttm does that
>     either.
>
>     I think the only safe thing here is to state that it's safe if a) the
>     resource is pinned down and b) the caller has previously waited for
>     the kernel fences.
>
>     I also think we should put that wait for kernel fences into
>     dma_buf_pin(), but that's maybe a later patch.
>     -Daniel
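
For illustration only, Daniel's suggestion would amount to something
roughly like the following inside dma_buf_pin(). This is a sketch on top
of this series, not part of the patch:

    int dma_buf_pin(struct dma_buf_attachment *attach)
    {
            struct dma_buf *dmabuf = attach->dmabuf;
            int ret;

            dma_resv_assert_held(dmabuf->resv);

            ret = dmabuf->ops->pin(attach);
            if (ret)
                    return ret;

            /* make sure pending kernel moves/clears have finished */
            dma_resv_wait_timeout(dmabuf->resv, DMA_RESV_USAGE_KERNEL,
                                  false, MAX_SCHEDULE_TIMEOUT);
            return 0;
    }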
>
>
>
>     > +      */
>     > +     DMA_RESV_USAGE_KERNEL,
>     > +
>     >       /**
>     >        * @DMA_RESV_USAGE_WRITE: Implicit write synchronization.
>     >        *
>     > --
>     > 2.25.1
>     >
>
>     -- 
>     Daniel Vetter
>     Software Engineer, Intel Corporation
>     http://blog.ffwll.ch
>