[RFC] Add DMA_RESV_USAGE flags

Jason Ekstrand jason at jlekstrand.net
Mon May 17 20:15:11 UTC 2021


On Mon, May 17, 2021 at 2:38 PM Christian König
<ckoenig.leichtzumerken at gmail.com> wrote:
>
> Am 17.05.21 um 17:04 schrieb Daniel Vetter:
> > On Mon, May 17, 2021 at 04:11:18PM +0200, Christian König wrote:
> >> We had a long-standing problem in amdgpu where buffers exported to
> >> user drivers by DMA-buf serialize all command submissions using them.
> >>
> >> In other words we can't compose the buffer with different engines and
> >> then send it to another driver for display or further processing.
> >>
> >> This was added to work around the fact that i915 didn't want to wait
> >> for shared fences in the dma_resv objects before displaying a buffer.
> >>
> >> Since this problem is now causing issues with Vulkan we need to find a
> >> better solution for that.
> >>
> >> The patch set here tries to do this by adding a usage flag to the
> >> shared fences noting when and how they should participate in implicit
> >> synchronization.
> > So the way this is fixed in every other vulkan driver is that vulkan
> > userspace sets flags in the CS ioctl when it wants to synchronize with
> > implicit sync. This gets you mostly there. Last time I checked amdgpu
> > isn't doing this, and yes that's broken.
>
> And exactly that is a really bad approach as far as I can see. The
> Vulkan stack on top simply doesn't know when to set this flag during CS.

Yes and no...  First off, I should preface this by saying that there
are no truly good solutions here that I'm aware of.  We've got one
thing in ANV that mostly works, and RADV does something else that mostly
works.  Neither is great.  I think with
https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/4037 we get
most of the way to a half-decent solution.

What ANV does today is to set EXEC_OBJECT_ASYNC on all BOs almost all
the time to disable implicit sync.  There are two exceptions to this:

 1. When we go to kick a buffer off to the window system, we do a
(possibly dummy) submit which takes in whatever explicit VkSemaphore
objects the client provides and uses a mesa-internal vkQueueSubmit
extension to mark the window-system buffer as being written by that
submit.
 2a. In vkAcquireNextImage, when we hand the next back buffer to the
client, the client also provides a VkSemaphore.  We stuff the image BO
into said semaphore and, whenever the client uses that semaphore as a
wait dependency, we throw the BO into that submit's object list and
flag it as being read.
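
For concreteness, the per-BO decision looks roughly like this in terms
of the i915 execbuf uAPI.  This is just a sketch, not ANV's actual
code; the helper name and the softpin (PINNED/48B) flags are purely
illustrative:

#include <drm/i915_drm.h>   /* drm_i915_gem_exec_object2, EXEC_OBJECT_* */
#include <stdbool.h>
#include <stdint.h>
#include <string.h>

/* Normal case: EXEC_OBJECT_ASYNC opts the BO out of implicit sync.
 * Window-system case (the possibly-dummy present submit): drop ASYNC
 * and set WRITE so the kernel attaches this submit's fence to the
 * buffer as a writer and the consumer can implicitly wait on it.
 */
static void
fill_exec_object(struct drm_i915_gem_exec_object2 *obj,
                 uint32_t gem_handle, uint64_t gpu_addr,
                 bool is_wsi_image)
{
   memset(obj, 0, sizeof(*obj));
   obj->handle = gem_handle;
   obj->offset = gpu_addr;
   obj->flags  = EXEC_OBJECT_PINNED | EXEC_OBJECT_SUPPORTS_48B_ADDRESS;

   if (is_wsi_image)
      obj->flags |= EXEC_OBJECT_WRITE;   /* implicit sync, as a writer */
   else
      obj->flags |= EXEC_OBJECT_ASYNC;   /* no implicit sync for this BO */
}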

This works pretty well.  The biggest problem is that there is some
potential for over-sync thanks to 2a because, as soon as the client
throws in a render which depends on the acquire semaphore, that BO is
marked as used and it can end up over-syncing on itself.  This is
entirely solved by the patch Daniel mentions below.  With that patch
and MR 4037 linked above, 2a gets replaced with:

2b. In vkAcquireNextImage, we fetch a sync_file out of the dma-buf and
stuff it into the semaphore.

This totally solves the over-sync problem because the sync_file
represents the exact point on the CPU timeline where X handed the
image back to us, and any fences added after that point don't count.
Since the client can't do a vkQueueSubmit using that image until
vkAcquireNextImage returns, there's no way the sync_file will contain
any of our fences unless they were from the previous frame rendered to
that image.
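
In terms of the uAPI proposed in that series (linked below), the
acquire path boils down to something like this.  Again, only a sketch
with a hypothetical helper; the exact struct and ioctl names may still
shift between revisions:

#include <linux/dma-buf.h>   /* assumes the proposed EXPORT_SYNC_FILE uAPI */
#include <sys/ioctl.h>
#include <errno.h>

/* Snapshot the dma-buf's current fences into a sync_file.  We pass
 * DMA_BUF_SYNC_WRITE because we're about to render to (write) the
 * image, so we need to wait for everything, including the
 * compositor's reads.  The returned fd then backs the acquire
 * VkSemaphore.
 */
static int
acquire_fence_from_dmabuf(int dmabuf_fd)
{
   struct dma_buf_export_sync_file args = {
      .flags = DMA_BUF_SYNC_WRITE,
      .fd = -1,
   };

   if (ioctl(dmabuf_fd, DMA_BUF_IOCTL_EXPORT_SYNC_FILE, &args) < 0)
      return -errno;

   return args.fd;   /* import into the semaphore, then close */
}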

> That's also the reason the Valve guys came up with a solution where each
> BO gets a flag for explicit sync, but that only works for exports and
> not for imports.

Yes, RADV uses a different mechanism.  They have a per-BO flag for "am
I owned" and they sync on all the external+owned stuff.  The owned
flag is flipped on and off when buffers are handed between the client
and the compositor.
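
Conceptually (the names here are made up for illustration, not actual
RADV or amdgpu code), the per-submit decision is something like:

#include <stdbool.h>
#include <stdint.h>

/* Hypothetical sketch of the per-BO ownership scheme described above.
 * WSI flips the "owned" bit when the buffer changes hands between the
 * client and the compositor; a submit only syncs implicitly on
 * external buffers the client currently owns.
 */
struct example_bo {
   uint32_t handle;
   bool     is_external;   /* imported/exported via dma-buf */
   bool     owned;         /* currently on the client's side of the handoff */
};

static bool
bo_needs_implicit_sync(const struct example_bo *bo)
{
   return bo->is_external && bo->owned;
}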

Which is better?  I think the final solution I'm driving towards in
ANV is the best we can do at the moment.  I also like the fact that it
is pretty cross-driver.  It does still depend on a dummy submit but
I'm not sure it's worth trying to get around that for now.

> > i915 and, IIRC, msm have explicit flags for this, and panfrost was designed
> > to support this correctly from the start (also with flags, I think). That's
> > at least what I remember from all the discussions at XDC and on #dri-devel,
> > but I didn't check the code again to give you the list of uapi flags you
> > need for each driver.
> >
> > The other piece is making sure you're only picking up implicit fences when
> > you should, and not any later ones, for which Jason has a solution:
> >
> > https://lore.kernel.org/dri-devel/20210317221940.2146688-1-jason@jlekstrand.net/
>
> Yes, I helped with that as well. But I think that this is just another
> workaround without really addressing the underlying problem.

I think that depends on how we're defining "the underlying problem";
I'm afraid I'm a bit unclear on what that is.

> > If amdgpu isn't using those, then you will suffer from
> > over-synchronization in vulkan and pay a price. The entire point of vulkan
> > is that you pick up sync points very explicitly, and we also need to have
> > very explicit uapi for userspace to pick up/set the implicit fences.
> >
> > Trying to paper over this with more implicit magic is imo just wrong, and
> > definitely not the long term explicit sync model we want.
>
> I completely disagree.
>
> In my opinion the implicit sync model we have for dma_resv currently is
> just not well designed at all, since it always requires cooperation from
> userspace.
>
> In other words you need to know when to enable implicit sync in
> userspace and that information is simply not present all of the time.

Then I don't see how this helps.  If userspace doesn't know when to
sync well enough to set a flag at submit time, why would it know at
import/export time?  And if you think we don't know when to sync, we
certainly don't know whether it's a read vs. a write.

Also, making this all happen at import/export time totally blows away
Vulkan's model.  We don't want to have to destroy the BO to
stop syncing on it and re-import it every frame.  That's a
non-starter.

Or am I really badly missing something?

> What we have done here is just keep the old reader/writer flags that
> i915, radeon and nouveau once had and push them out to everybody else,
> assuming everybody would follow along, without documenting the actual
> rules of engagement you need to follow here.
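
For anyone following along, that reader/writer convention maps onto
dma_resv roughly as follows.  This is a sketch using the current
helper names, with locking and error paths mostly elided; it isn't any
particular driver's code:

#include <linux/dma-resv.h>
#include <linux/dma-fence.h>

/* A job that writes the buffer installs its fence as the exclusive
 * ("writer") fence; a job that only reads adds a shared ("reader")
 * fence.  Importers honoring implicit sync then wait on the exclusive
 * fence (before reading) or on all fences (before writing).  The
 * caller must hold the dma_resv lock.
 */
static int example_attach_job_fence(struct dma_resv *resv,
                                    struct dma_fence *fence,
                                    bool is_write)
{
        int ret;

        if (is_write) {
                dma_resv_add_excl_fence(resv, fence);
                return 0;
        }

        ret = dma_resv_reserve_shared(resv, 1);
        if (ret)
                return ret;

        dma_resv_add_shared_fence(resv, fence);
        return 0;
}
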
>
> That was a really big mistake and we should try to fix it sooner rather
> than later. The only other clean alternative I see is to use a flag on the
> exporter to tell the importer whether it should sync to shared fences or not.
>
> In addition to that, I'm perfectly fine with implicit sync. Explicit sync
> certainly has some use cases as well, but I don't see it as an absolute
> advantage over the implicit model.

I'm not going to make any broad sweeping arguments about the future
here.  Implicit sync may be here to stay and I think that discussion
is sort of beside the point anyway.

--Jason

