[PATCH 2/2] dma-buf: cleanup shared fence removal

Daniel Vetter daniel at ffwll.ch
Fri Jun 28 16:40:44 UTC 2019


On Fri, Jun 28, 2019 at 5:21 PM Koenig, Christian
<Christian.Koenig at amd.com> wrote:
>
> Am 28.06.19 um 16:38 schrieb Daniel Vetter:
> [SNIP]
> >>>>> - when you submit command buffers, you _dont_ attach fences to all
> >>>>> involved buffers
> >>>> That's not going to work because the memory management then thinks
> >>>> that the buffer is immediately movable, which it isn't,
> >>> I guess we need to fix that then. I pretty much assumed that
> >>> ->notify_move could add whatever fences you might want to add. Which
> >>> would very neatly allow us to solve this problem here, instead of
> >>> coming up with fake fences and fun stuff like that.
> >> Adding the fence later on is not a solution because we need something
> >> which beforehand can check if a buffer is movable or not.
> >>
> >> In the case of a move_notify the decision to move it has already been
> >> made, and you can't say "oh sorry, I have to evict my process and
> >> reprogram the hardware" or whatever.
> >>
> >> Especially when you do this in an OOM situation.
> > Why? I mean when the fence for a CS is there already, it might also
> > still hang out in the scheduler, or be blocked on a fence from another
> > driver, or anything like that. I don't see a conceptual difference.
> > Plus with dynamic dma-buf the entire point is that an attached fence
> > does _not_ mean the buffer is permanently pinned, but can be moved if
> > you sync correctly. Might need a bit of tuning or a flag to indicate
> > that some buffers should always be considered busy, and that you
> > shouldn't start evicting those. But that's kinda a detail.
> >
> > From a very high level there's really no difference between
> > ->notify_move and the eviction_fence. Both give you a callback when
> > someone else needs to move the buffer, that's all. The only difference
> > is that the eviction_fence thing jumbles the callback and the fence
> > into one, by preattaching a fence just in case. But again from a
> > conceptual pov it doesn't matter whether the fence is always hanging
> > around, or whether you just attach it when ->notify_move is called.
>
> Sure there is a difference. See, when you attach the fence beforehand,
> the memory management knows that the buffer is busy.
>
> Just imagine the following: We are in an OOM situation and need to swap
> things out to disk!
>
> When the fence is attached beforehand the handling can be as following:
> 1. MM picks a BO from the LRU and starts to evict it.
> 2. The eviction fence is enabled and we stop the process using this BO.
> 3. As soon as the process is stopped the fence is set into the signaled
> state.
> 4. MM needs to evict more BOs, and since the fence for this process is
> now in the signaled state it can intentionally pick up the ones which
> are now idle.
>
> When we attach the fence only on eviction that can't happen and the MM
> would just pick the next random BO and potentially stop another process.
>
> So I think we can summarize that the memory management definitely needs
> to know beforehand how costly it is to evict a BO.
>
> And of course we could implement this with flags or counters or whatever,
> but we already have the fence infrastructure and I don't see a reason not
> to use it.
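
(Just to make sure I parse this right: the scheme above maps in my head to
roughly the untested sketch below. All the my_* names are made up, only the
dma_fence plumbing is the real API; the fence would sit in the reservation
objects of the process' BOs from the start.)

#include <linux/dma-fence.h>

struct my_eviction_fence {
        struct dma_fence base;
        spinlock_t lock;
        struct my_process *proc;        /* made-up owner */
};

static const char *my_evf_get_driver_name(struct dma_fence *f)
{
        return "my_driver";
}

static const char *my_evf_get_timeline_name(struct dma_fence *f)
{
        return "my_eviction";
}

static bool my_evf_enable_signaling(struct dma_fence *f)
{
        struct my_eviction_fence *evf =
                container_of(f, struct my_eviction_fence, base);

        /* step 2: the MM wants a BO of this process, stop it async */
        my_process_schedule_stop(evf->proc);    /* made up */
        return true;
}

static const struct dma_fence_ops my_eviction_fence_ops = {
        .get_driver_name = my_evf_get_driver_name,
        .get_timeline_name = my_evf_get_timeline_name,
        .enable_signaling = my_evf_enable_signaling,
};

/* step 3: called once the process is actually stopped, so the LRU walk
 * in step 4 can prefer the now-idle BOs */
static void my_process_stopped(struct my_eviction_fence *evf)
{
        dma_fence_signal(&evf->base);
}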

Ok, for the sake of the argument let's buy this.

Why do we need a ->notify_move callback then? We have it already, with
these special fences.
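
Just so we're talking about the same thing, importer-side I mean something
like the rough sketch below, written against the dynamic dma-buf work that's
in flight - the exact struct and callback names may well end up different,
and everything my_* is made up:

#include <linux/dma-buf.h>

/*
 * The exporter calls ->move_notify when it wants to move the buffer.
 * The importer tears down its mappings here and could attach a fence
 * covering that teardown right now, instead of keeping a fence attached
 * all the time.
 */
static void my_move_notify(struct dma_buf_attachment *attach)
{
        struct my_importer *imp = attach->importer_priv;        /* made up */

        my_invalidate_mappings(imp);                            /* made up */
}

static const struct dma_buf_attach_ops my_attach_ops = {
        .move_notify = my_move_notify,
};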

Other side: If all you want to know is whether you can unmap a buffer
immediately, for some short enough value of immediately (I guess a
bunch of pagetable writes should be ok), then why not add that? The "I
don't want to touch all buffers for every CS, but just have a pinned
working set" command submission model is quite different after all,
having dedicated infrastructure that fits well sounds like a good idea
to me.
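
I.e. something as dumb as this (untested; my_bo and ->pinned are made up,
the reservation object helper is the one we have today):

#include <linux/reservation.h>

/*
 * "Can I unmap this right now?" check for the pinned-working-set model:
 * not part of the working set and no unsignaled fences left on the
 * reservation object means it's just a few pagetable writes away.
 */
static bool my_bo_idle(struct my_bo *bo)
{
        if (bo->pinned)         /* part of the pinned working set */
                return false;

        return reservation_object_test_signaled_rcu(bo->resv, true);
}
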
-Daniel
-- 
Daniel Vetter
Software Engineer, Intel Corporation
+41 (0) 79 365 57 48 - http://blog.ffwll.ch

