[PATCH 1/4] locking/ww_mutex: Fix a deadlock affecting ww_mutexes

Nicolai Hähnle nicolai.haehnle at amd.com
Thu Nov 24 12:05:01 UTC 2016


On 24.11.2016 12:56, Peter Zijlstra wrote:
> On Thu, Nov 24, 2016 at 12:52:25PM +0100, Daniel Vetter wrote:
>> On Thu, Nov 24, 2016 at 12:40 PM, Peter Zijlstra <peterz at infradead.org> wrote:
>>>
>>>> I do believe we can win a bit by keeping the wait list sorted, if we also
>>>> make sure that waiters don't add themselves in the first place if they see
>>>> that a deadlock situation cannot be avoided.
>>>>
>>>> I will probably want to extend struct mutex_waiter with ww_mutex-specific
>>>> fields to facilitate this (i.e. ctx pointer, perhaps stamp as well to reduce
>>>> pointer-chasing). That should be fine since it lives on the stack.
>>>
>>> Right, shouldn't be a problem I think.
>>>
>>> The only 'problem' I can see with using that is that it's possible to mix
>>> ww and !ww waiters through ww_mutex_lock(.ctx = NULL). This makes the
>>> list order somewhat tricky.
>>>
>>> Ideally we'd remove that feature, although I see it's actually used quite
>>> a bit :/
>>
>> I guess we could create a small fake acquire_ctx for single-lock
>> paths. That way callers still don't need to deal with having an
>> explicit ctx, but we can assume the timestamp (for ensuring fairness)
>> is available in all cases. Otherwise there's indeed a problem with
>> correctly (well, fairly) interleaving ctx and non-ctx lockers, I think.
>
> Actually tried that, but we need a ww_class to get a stamp from, and
> ww_mutex_lock() doesn't have one of those..

The acquire context needs to stay live until the unlock anyway, so this
requires modifying the callers of ww_mutex_lock(). Those should all have
a ww_class available, or something is very wrong :)
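
To make that concrete, here is roughly what a converted single-lock caller
could look like. Only the ww_mutex API calls are real; the names
(my_ww_class, single_lock_path) and the idea of a per-call on-stack context
are just a sketch for illustration:

#include <linux/ww_mutex.h>

/* Illustrative names only; every real caller would use its own class. */
static DEFINE_WW_CLASS(my_ww_class);

static void single_lock_path(struct ww_mutex *lock)
{
	struct ww_acquire_ctx ctx;

	/*
	 * Today this caller could get away with ww_mutex_lock(lock, NULL).
	 * Giving it a context means every waiter carries a stamp that a
	 * sorted wait list can use for fair ordering.
	 */
	ww_acquire_init(&ctx, &my_ww_class);

	if (ww_mutex_lock(lock, &ctx)) {
		/*
		 * If the lock code asks us to back off (-EDEADLK), there
		 * is nothing to release here, so just wait for the lock.
		 */
		ww_mutex_lock_slow(lock, &ctx);
	}

	/* ... touch the protected data ... */

	ww_mutex_unlock(lock);
	ww_acquire_fini(&ctx);
}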
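
And the waiter-side part from my quoted mail above would amount to
something like this (again just a sketch; the new field is purely
illustrative and the existing fields are as in include/linux/mutex.h):

struct mutex_waiter {
	struct list_head	list;
	struct task_struct	*task;
#ifdef CONFIG_DEBUG_MUTEXES
	void			*magic;
#endif
	/*
	 * New, ww_mutex-specific part: NULL for plain mutexes and for
	 * ww_mutex_lock(lock, NULL). A copy of ww_ctx->stamp could sit
	 * next to it if chasing the pointer in the sorted-insert path
	 * turns out to hurt. The waiter lives on the blocked task's
	 * stack, so growing it is cheap.
	 */
	struct ww_acquire_ctx	*ww_ctx;
};

With that in place, the wait list can be kept sorted by stamp, and a
waiter that sees it would have to back off anyway can avoid adding
itself in the first place, which is the win mentioned at the top of
the quoted mail.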

Nicolai

