[PATCH 1/4] locking/ww_mutex: add ww_mutex_is_owned_by function v3

Peter Zijlstra peterz at infradead.org
Tue Feb 20 13:57:09 UTC 2018


On Tue, Feb 20, 2018 at 02:26:55PM +0100, Christian König wrote:
> > > +static inline bool ww_mutex_is_owned_by(struct ww_mutex *lock,
> > > +					struct ww_acquire_ctx *ctx)
> > > +{
> > > +	if (ctx)
> > > +		return likely(READ_ONCE(lock->ctx) == ctx);
> > > +	else
> > > +		return likely(__mutex_owner(&lock->base) == current);
> > > +}
> > Much better than the previous version. If you want to bike-shed, you can
> > leave out the 'else' and unindent the last line.
> 
> Thanks for the suggestion, going to do this.

You might also want likely(ctx), since ww_mutex use without a ctx is
atypical, I would think.
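
Something like so, combining both bits (untested, just to illustrate
the shape):

static inline bool ww_mutex_is_owned_by(struct ww_mutex *lock,
					struct ww_acquire_ctx *ctx)
{
	/* Common case: caller holds the lock through an acquire ctx. */
	if (likely(ctx))
		return likely(READ_ONCE(lock->ctx) == ctx);

	/* !ctx fallback: behaves like an ordinary mutex ownership check. */
	return likely(__mutex_owner(&lock->base) == current);
}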

> > I do worry about potential users of .ctx = NULL, though. It makes it far
> > too easy to do recursive locking, which is something we should strongly
> > discourage.
> 
> Well, one of the addressed use cases is indeed checking for recursive
> locking. But recursive locking is rather normal for a ww_mutex, and we
> are just exercising an existing code path.

But that would be the ctx case, right? I'm not sure there is a lot of
!ctx use out there, and in that case it really is rather like a normal
mutex.

> E.g. the most common use case for the ww_mutex is in the graphics
> drivers, where userspace sends us a list of buffer objects to work with.
> 
> Now, when userspace sends us duplicates in that buffer list, the
> expectation is to get -EALREADY from ww_mutex_lock when we try to lock
> the same ww_mutex twice.

Right, I remember that much.. :-)
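
IOW the locking loop looks roughly like this, if memory serves
(a sketch; entry/bo/lock/bo_list are made-up names, not actual
driver code):

	list_for_each_entry(entry, &bo_list, head) {
		ret = ww_mutex_lock(&entry->bo->lock, ctx);
		if (ret == -EALREADY)
			continue;	/* duplicate in the list; we hold it already */
		if (ret == -EDEADLK)
			goto backoff;	/* usual ww_mutex wound/backoff handling */
	}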

> The intention behind this function is now to a) be able to extend those
> checks to make sure user space doesn't send us potentially harmful
> nonsense, and b) allow checking for recursion in TTM during buffer
> object eviction, which uses ww_mutex_trylock instead of ww_mutex_lock.

OK, but neither case would in fact need the !ctx case, right? That's
just there for completeness' sake?

But yes, I cannot think of a better fallback there either.
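
For the TTM case (b) I'd picture something like the below, with a ctx
available (again a sketch; bo->lock is a made-up field name):

	/* Skip buffers this context already holds; otherwise trylock. */
	if (ww_mutex_is_owned_by(&bo->lock, ctx))
		continue;

	if (!ww_mutex_trylock(&bo->lock))
		continue;	/* contended; try the next eviction candidate */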


