Improve reservation object shared slot function

Daniel Vetter daniel at ffwll.ch
Mon Sep 24 15:43:19 UTC 2018


On Mon, Sep 24, 2018 at 05:16:50PM +0200, Christian König wrote:
> Am 24.09.2018 um 17:03 schrieb Daniel Vetter:
> > On Mon, Sep 24, 2018 at 01:58:12PM +0200, Christian König wrote:
> > > The reservation object shared slot function only allowed reserving one slot at a time.
> > > 
> > > Improve that and allow reserving multiple slots to support atomic submission to multiple engines.
> > I think you can do this already, just don't drop the ww_mutex lock. And I
> > also think that invariant still holds: if you drop the ww_mutex lock, your
> > fence slot reservation evaporates. Your new code is just a bit more
> > convenient.
> 
> The problem is that allocating a slot could in theory fail with -ENOMEM.
> 
> And at the point we add the fence the hardware is already using the memory,
> so failure is not an option.
> 
> The key feature is that we are getting submissions to multiple engines which
> need to be submitted either together or not at all.
> 
> > Could we check/enforce this somehow when WW_MUTEX debugging is enabled?
> > E.g. store the ww_mutex ctx in the reservation (we can dig it out from
> > under the lock), and then check that the lock holder/ctx hasn't changed
> > when adding all the fences? I think with multiple fences getting added
> > atomically some more debug checks here would be good.
> 
> Yeah, I was already thinking about something similar.
> 
> We wouldn't need to remember the context or anything, but rather just set
> shared_max=shared_count in reservation_object_unlock().
> 
> > Another one: Do we want to insist that you either add all the fences, or
> > none, to make this fully atomic? We could check this when unlocking the
> > reservation. Kinda hard to guess what your exact use-case here is.
> 
> No, some fences (like VM updates) are perfectly optional.

Ah right, VM moves might already be underway (even if you end up throwing
all the actual CS stuff from userspace away), or they might not happen at all.

> > All this would ofc be compiled out for !WW_MUTEX_DEBUG kernels.
> 
> Those extra checks sound like a good idea to me; going to add them.

Yeah, that'd be great; I just figured "this has lots of potential for
nasty-to-debug bugs". The patches themselves look good to me, at least from
skimming.
-Daniel
-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch

