[RFC] replacing dma_resv API

Daniel Vetter daniel at ffwll.ch
Wed Aug 21 20:05:34 UTC 2019


On Wed, Aug 21, 2019 at 06:13:27PM +0200, Daniel Vetter wrote:
> On Wed, Aug 21, 2019 at 02:31:37PM +0200, Christian König wrote:
> > Hi everyone,
> > 
> > In previous discussion it surfaced that different drivers use the shared
> > and explicit fences in the dma_resv object with different meanings.
> > 
> > This is problematic when we share buffers between those drivers, and
> > the differing requirements for implicit and explicit synchronization
> > have led to quite a number of workarounds related to this.
> > 
> > So I started an effort to get all drivers back to a common understanding
> > of what the fences in the dma_resv object mean and be able to use the
> > object for different kinds of workloads independent of the classic DRM
> > command submission interface.
> > 
> > The result is this patch set which modifies the dma_resv API to get away
> > from a single explicit fence and multiple shared fences, towards a
> > notation where we have explicit categories for writers, readers and
> > others.
> > 
> > To do this I came up with a new container called dma_resv_fences which
> > can store both a single fence as well as multiple fences in a
> > dma_fence_array.
> > 
> > This turned out to actually be quite a bit simpler, since we
> > don't need any complicated dance between RCU and sequence count
> > protected updates any more.
> > 
> > Instead we can just grab a reference to the dma_fence_array under RCU
> > and so keep the current state of synchronization alive until we are done
> > with it.
> > 
> > This results in both a small performance improvement since we don't need
> > so many barriers any more, as well as fewer lines of code in the actual
> > implementation.
> 
I think you traded the lack of barriers/retry loops for correctness here,
see my reply later on. But I haven't grokked the full thing in detail, so
I might easily have missed something.
> 
> But high level first, and I don't get this at all. Current state:
> 
> Ill defined semantics, no docs. You have to look at the implementations.
> 
> New state after you patch series:
> 
> Ill defined semantics (but hey different!), no docs. You still have to
> look at the implementations to understand what's going on.
> 
> I think what has actually changed (aside from the entire implementation)
> is just these three things:
> - we now allow multiple exclusive fences

This isn't really new: you could already attach a dma_fence_array to the
exclusive slot.
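
I.e. something like the following works today, wrapping multiple fences
in a dma_fence_array before attaching it. Sketch only, using the in-tree
helpers as of v5.3-rc; caller must hold the dma_resv lock, error handling
elided, not compile-tested:

```c
/* Sketch: express multiple "exclusive" fences with the existing API by
 * wrapping them in a dma_fence_array.  Caller holds dma_resv_lock(). */
static int attach_excl_fences(struct dma_resv *obj,
			      struct dma_fence **fences, int count,
			      u64 context)
{
	struct dma_fence_array *array;

	/* consumes the fence references in @fences on success */
	array = dma_fence_array_create(count, fences, context,
				       1, false);
	if (!array)
		return -ENOMEM;

	/* add_excl takes its own reference, so drop ours afterwards */
	dma_resv_add_excl_fence(obj, &array->base);
	dma_fence_put(&array->base);
	return 0;
}
```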

> - exclusive was renamed to writer fences, shared to reader fences

Bit more context on why I think this is a pure bikeshed: we've had (what
at least felt like) a multi-year bikeshed on what to call these, with the
two options being writers/readers and exclusive/shared. Somehow (it's not
documented, hooray) we ended up going with exclusive/shared. Switching
over to the other bikeshed again, still without documenting what exactly
you should be putting there (since amdgpu still doesn't always fill out
the writer, because that's not how amdgpu works), feels really silly.

> - there's a new "other" group, for ... otherwordly fences?

I guess this is to better handle the amdkfd magic fence, or the vm fences?
Still no idea since not used.

One other thing I've found while trying to figure out your motivation
here (since I'm not getting what you're aiming at) is that setting the
exclusive fence through the old interface now sets both the exclusive and
shared fences.

I guess if that's all (I'm assuming I'm blind) we can just add a "give me
all the fences" interface, and use that for the drivers that want that.

> Afaiui we have the following two issues with the current fence semantics:
> - amdgpu came up with a totally different notion of implicit sync, using
>   the owner to figure out when to sync. I have no idea at all how that
>   meshes with multiple writers, but I guess there's a connection.
> - amdkfd does a very fancy eviction/preempt fence. Is that what the other
>   bucket is for?
> 
> I guess I could read the amdgpu/ttm code in very fine detail and figure
> this out, but I really don't see how that's moving stuff forward.
> 
> Also, I think it'd be really good to decouple semantic changes from
> implementation changes, because untangling them if we have to revert one
> or the other is going to be nigh impossible. And dma_* is not really an
> area where we can proudly claim that reverts don't happen.

I think we should go even further with this, and start earlier.

step 1: Document the current semantics.

Once we have that, we can look at the amdkfd and amdgpu vm stuff and
whatever else there is, and figure out what's missing. Maybe even throw
the exact thing you're doing in amdkfd/gpu into the above documentation,
in an effort to cover what's done. I can add some entertaining things
from i915's side too :-)

And I mean actual real docs that explain stuff, not oneliner kerneldocs
for functions and that's it. Without that I think we'll just move in
circles and go nowhere at all.
-Daniel
-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch

