[PATCH 2/2] dma-buf: cleanup shared fence removal

Daniel Vetter daniel at ffwll.ch
Fri Jun 28 14:38:39 UTC 2019


On Fri, Jun 28, 2019 at 12:24 PM Koenig, Christian
<Christian.Koenig at amd.com> wrote:
>
> Am 28.06.19 um 11:41 schrieb Daniel Vetter:
> > On Fri, Jun 28, 2019 at 10:40 AM Christian König
> > <ckoenig.leichtzumerken at gmail.com> wrote:
> >> Am 28.06.19 um 09:30 schrieb Daniel Vetter:
> >>> On Fri, Jun 28, 2019 at 8:32 AM Koenig, Christian
> >>> <Christian.Koenig at amd.com> wrote:
> >>>> Am 27.06.19 um 21:57 schrieb Daniel Vetter:
> >>>>> [SNIP]
> >>> Well yeah you have to wait for outstanding rendering. Everyone does
> >>> that. The problem is that ->eviction_fence does not represent
> >>> rendering, it's a very contrived way to implement the ->notify_move
> >>> callback.
> >>>
> >>> For real fences you have to wait for rendering to complete, putting
> >>> aside corner cases like destroying an entire context with all its
> >>> pending rendering. Not really something we should optimize for I
> >>> think.
> >> No, for real fences I don't need to wait for the rendering to complete either.
> >>
> >> If userspace said that this per process resource is dead and can be
> >> removed, we don't need to wait for it to become idle.
> > But why did you attach a fence in the first place?
>
> So that others can still wait for it.
>
> See, it is perfectly valid to export this buffer object to, let's say, the
> Intel driver, and in this case I don't get a move notification.

If you share with Intel then the buffer is pinned. You can't move the
thing anymore.

> And I really don't want to add another workaround where I add the fences
> only when the BO is exported or stuff like that.

You don't need to add a fence for that case because you can't move the
buffer anyway.

> >> [SNIP]
> >> As far as I can see it is perfectly valid to remove all fences from this
> >> process as soon as the page tables are up to date.
> > I'm not objecting the design, but just the implementation.
>
> Ok, then we can at least agree on that.
>
> >>> So with the magic amdkfd_eviction fence I agree this makes sense. The
> >>> problem I'm having here is that the magic eviction fence itself
> >>> doesn't make sense. What I'd expect to happen (in terms of the new
> >>> dynamic dma-buf stuff; I don't have the ttm-isms ready to explain it in
> >>> those concepts):
> >>>
> >>> - when you submit command buffers, you _don't_ attach fences to all
> >>> involved buffers
> >> That's not going to work because then the memory management thinks
> >> that the buffer is immediately movable, which it isn't.
> > I guess we need to fix that then. I pretty much assumed that
> > ->notify_move could add whatever fences you might want to add. Which
> > would very neatly allow us to solve this problem here, instead of
> > coming up with fake fences and fun stuff like that.
>
> Adding the fence later on is not a solution because we need something
> which can check beforehand whether a buffer is movable or not.
>
> In the case of a move_notify the decision to move it is already done and
> you can't say oh sorry I have to evict my process and reprogram the
> hardware or whatever.
>
> Especially when you do this in an OOM situation.

Why? I mean when the fence for a CS is there already, it might also
still hang out in the scheduler, or be blocked on a fence from another
driver, or anything like that. I don't see a conceptual difference.
Plus with dynamic dma-buf the entire point is that an attached fence
does _not_ mean the buffer is permanently pinned, but can be moved if
you sync correctly. Might need a bit of tuning or a flag to indicate
that some buffers should always be considered busy, and that you
shouldn't start evicting those. But that's kinda a detail.

From a very high level there's really no difference between
->notify_move and the eviction_fence. Both give you a callback when
someone else needs to move the buffer, that's all. The only difference
is that the eviction_fence thing jumbles the callback and the fence
into one, by preattaching a fence just in case. But again from a
conceptual pov it doesn't matter whether the fence is always hanging
around, or whether you just attach it when ->notify_move is called.
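
Rough sketch of what I mean, completely untested and with made-up names
(kfd_proc, kfd_quiesce_process, the eviction_fence member), assuming the
invalidate/move_notify hook from the dynamic dma-buf series gets called
with the reservation object already locked (the resv helpers are the
ones from linux/reservation.h):

static void amdkfd_move_notify(struct dma_buf_attachment *attach)
{
        struct reservation_object *resv = attach->dmabuf->resv;
        struct kfd_proc *p = attach->priv;

        /* Attach the "this process still uses the buffer" fence only
         * now that somebody actually wants to move it, instead of
         * preattaching it to every BO just in case. */
        if (!reservation_object_reserve_shared(resv, 1))
                reservation_object_add_shared_fence(resv,
                                                    &p->eviction_fence->base);

        /* Kick off preemption of the process' queues; the fence
         * signals once the hw has stopped touching the buffer. */
        kfd_quiesce_process(p);
}

Same information flow as the preattached eviction fence, you just don't
have to cycle a fresh fence through every reservation object each time
->enable_signalling fires, and there's nothing left to remove when the
process releases a buffer.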

> > If ->notify_move can't add fences, then you have to attach the right
> > fences to all of the BOs, all the time. At least with a CS model where
> > userspace just updates the working set and keeps submitting new
> > batches, and the kernel preempts the entire context if memory needs to
> > be evicted and re-runs it only once the working set is available again.
> >
> > No the eviction_fence is not a good design solution for this, and imo
> > you should have rejected that.
>
> Actually I was the one who suggested that because the alternatives
> didn't sound like they would work.
>
> [SNIP]
> >> See ttm_bo_individualize_resv() as well; here we do something similar
> >> for GFX to what the KFD does when it releases memory.
> > Uh wut. I guess more funky tricks I need to first learn about, but
> > yeah doesn't make much sense to me right now.
> >
> >> E.g. for per process resources we copy over the current fences into an
> >> individualized reservation object, to make sure that this can be freed
> >> up at some time in the future.
> > Why are you deleting an object where others outside of your driver
> > can still add fences? Just from this description this doesn't make
> > sense to me ...
>
> Multiple BOs share a single reservation object. This is used for example
> for page tables.
>
> To map 16GB of memory I can easily have more than 8k BOs for the page
> tables.
>
> So what we did was to use the reservation object of the root page table for
> all other BOs of the VM as well.
>
> Otherwise you would need to add a fence to 8k reservation objects and
> that is not really feasible.
>
> That works fine, except for the case when you want to free up a page
> table. In this case we individualize the BO, copy the fences over and
> say ok we free that up when the current set of fences is signaled.

As long as you don't share these buffers I think it doesn't matter
what you do. Plus there's tons of other options to implement this
instead of copying the resv_obj, since at that point all the resv_obj
serves as is a list of fences. Just creating a list of fences (and
keeping only the ones valid for your gpu, because the others don't
matter) seems much simpler, but probably more work to refactor ttm to
suit.
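
E.g. something like this, just a sketch with the error handling and the
actual deferred-free plumbing left out; amdgpu_fence_is_ours() and
amdgpu_vm_defer_free() are made-up names, and the exclusive fence would
need the same treatment:

static void amdgpu_vm_free_pt_bo(struct ttm_buffer_object *bo)
{
        struct dma_fence *excl, **shared;
        unsigned int i, count;

        /* Snapshot the current fences instead of swapping in an
         * individualized reservation object; at this point the resv
         * is only used as a list of fences anyway. */
        if (reservation_object_get_fences_rcu(bo->resv, &excl,
                                              &count, &shared))
                return;

        for (i = 0; i < count; ++i) {
                /* Only fences from our own gpu matter for a page
                 * table, everything else can be dropped right away. */
                if (amdgpu_fence_is_ours(shared[i]))
                        amdgpu_vm_defer_free(bo, shared[i]); /* takes the ref */
                else
                        dma_fence_put(shared[i]);
        }
        dma_fence_put(excl);
        kfree(shared);
}

The deferred free then just becomes a list of (bo, fence) pairs you work
through as the fences signal, and nobody ever swaps out a resv_obj that
other drivers might be looking at.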

Making this official otoh is just calling for trouble; replacing the
resv_obj at any point other than when you're waiting for object
destruction is not a good idea.

Also: if ttm weren't such a midlayer you could just extend the
eviction code to eat pagetables with a bit of dedicated code, and
wouldn't need to track every pagetable with a full-blown ttm_bo. That
approach frankly sounds terrible to me.

> >> But I really want to go a step further and say ok, all fences from this
> >> process can be removed after we updated the page tables.
> >>
> >>> Nowhere do you need to remove a fence, because you never attached a
> >>> bogus fence.
> >>>
> >>> Now with the magic eviction trick amdkfd uses, you can't do that,
> >>> because you need to attach this magic fence to all buffers, all the
> >>> time. And since it still needs to work like a fence it's one-shot
> >>> only, i.e. instead of a reusable ->notify_move callback you need to
> >>> create a new fence every time ->enable_signalling is called. So in a
> >>> way replacing fences is just an artifact of some very, very crazy
> >>> calling convention.
> >>>
> >>> If you have a real callback, there's no need for cycling through
> >>> fences, and therefore there's also no need to optimize their removal.
> >>>
> >>> Or did you work under the assumption that ->notify_move cannot attach
> >>> new fences, and therefore you'd have to roll this magic fence trick to
> >>> even more places?
> >> Well that notify_move approach was what was initially suggested by the
> >> KFD team as well. The problem is simply that there is no general
> >> notify_move callback for buffers yet.
> >>
> >> Adding that one is exactly what my patches for dynamic DMA-buf are
> >> currently doing :)
> > Ok, so we agree at least for the amdkfd that the ->notify_move is the
> > right solution here.
>
> Actually I don't.
>
> If we don't add some fence to the reservation object the memory
> management has no chance of knowing that this object is used by somebody.

So what else is that fancy eviction fence used for, aside from
->notify_move and making sure there's a fence already attached,
instead of allowing a fence to be attached as part of the eviction
process? Which really shouldn't be a problem, you already _have_ to
be able to cope with that with pipelined bo moves out of vram anyway.
So additional fences being added while evicting a bo shouldn't pose a
problem.

At least right now all I'm seeing is ->notify_move implemented in the
most backwards and clever (but not the good kind of clever) way
possible.
-Daniel

>
> Regards,
> Christian.
>
> >   Imo let's get that done and remove eviction_fence,
> > instead of promoting that design pattern even more.
> >
> >>>>> Now I guess I understand the mechanics of this somewhat, and what
> >>>>> you're doing, and it looks even somewhat safe. But I have no idea what
> >>>>> this is supposed to achieve. It feels a bit like ->notify_move, but
> >>>>> implemented in the most horrible way possible. Or maybe something
> >>>>> else.
> >>>>>
> >>>>> Really no idea.
> >>>>>
> >>>>> And given that we've wasted a few pages full of paragraphs already on
> >>>>> trying to explain what your new little helper is for, when it's safe
> >>>>> to use, when it's maybe not a good idea, and we still haven't even
> >>>>> bottomed out on what this is for ... well I really don't think it's a
> >>>>> good idea to inflict this into core code. Because just blindly
> >>>>> removing normal fences is not safe.
> >>>>>
> >>>>> Especially with like half a sentence of kerneldoc that explains
> >>>>> nothing of all this complexity.
> >>> Still makes no sense to me to have in core code.
> >> Well it is still better to have this in the core code than messing with
> >> the reservation object internals in the driver code.
> > Uh no. Imo if you do really funky stuff and abuse fences to get a
> > one-shot callback, then you get to keep the cost of maintaining that.
> > You _are_ already fundamentally tearing down the abstraction
> > dma_fence/resv_obj is supposed to provide. I mean you're essentially
> > relying on the exporter calling ->enable_signalling (of the magic
> > fence) at roughly the spot where it would call ->notify_move. All
> > without that being documented or clarified as to how it's supposed to be
> > done. Accessing a few private fields is the least offense here :-)
> > -Daniel
>


-- 
Daniel Vetter
Software Engineer, Intel Corporation
+41 (0) 79 365 57 48 - http://blog.ffwll.ch

