[Intel-gfx] [PATCH] mutex: Report recursive ww_mutex locking early

Maarten Lankhorst maarten.lankhorst at linux.intel.com
Mon May 30 07:43:53 UTC 2016


On 26-05-16 at 22:08, Chris Wilson wrote:
> Recursive locking for ww_mutexes was originally conceived as an
> exception. However, it is heavily used by the DRM atomic modesetting
> code. Currently, the recursive deadlock is checked after we have queued
> up for a busy-spin and as we never release the lock, we spin until
> kicked, whereupon the deadlock is discovered and reported.
>
> A simple solution for the now common problem is to move the recursive
> deadlock discovery to the first action when taking the ww_mutex.
>
> Testcase: igt/kms_cursor_legacy
> Suggested-by: Maarten Lankhorst <maarten.lankhorst at linux.intel.com>
> Signed-off-by: Chris Wilson <chris at chris-wilson.co.uk>
> Cc: Peter Zijlstra <peterz at infradead.org>
> Cc: Ingo Molnar <mingo at redhat.com>
> Cc: Christian König <christian.koenig at amd.com>
> Cc: Maarten Lankhorst <maarten.lankhorst at linux.intel.com>
> Cc: linux-kernel at vger.kernel.org
> ---
>
> Maarten suggested this as a simpler fix to the immediate problem. Imo,
> we still want to perform deadlock detection within the spin in order to
> catch more complicated deadlocks without osq_lock() forcing fairness!
Reviewed-by: Maarten Lankhorst <maarten.lankhorst at linux.intel.com>

Should this be Cc: stable at vger.kernel.org?

I think in the normal case things would still move forward even with osq_lock,
but you could send a separate patch that adds the check to
mutex_can_spin_on_owner(), with the same comment as in mutex_optimistic_spin().
> ---
>  kernel/locking/mutex.c | 9 ++++++---
>  1 file changed, 6 insertions(+), 3 deletions(-)
>
> diff --git a/kernel/locking/mutex.c b/kernel/locking/mutex.c
> index d60f1ba3e64f..1659398dc8f8 100644
> --- a/kernel/locking/mutex.c
> +++ b/kernel/locking/mutex.c
> @@ -502,9 +502,6 @@ __ww_mutex_lock_check_stamp(struct mutex *lock, struct ww_acquire_ctx *ctx)
>  	if (!hold_ctx)
>  		return 0;
>  
> -	if (unlikely(ctx == hold_ctx))
> -		return -EALREADY;
> -
>  	if (ctx->stamp - hold_ctx->stamp <= LONG_MAX &&
>  	    (ctx->stamp != hold_ctx->stamp || ctx > hold_ctx)) {
>  #ifdef CONFIG_DEBUG_MUTEXES
> @@ -530,6 +527,12 @@ __mutex_lock_common(struct mutex *lock, long state, unsigned int subclass,
>  	unsigned long flags;
>  	int ret;
>  
> +	if (use_ww_ctx) {
> +		struct ww_mutex *ww = container_of(lock, struct ww_mutex, base);
> +		if (unlikely(ww_ctx == READ_ONCE(ww->ctx)))
> +			return -EALREADY;
> +	}
> +
>  	preempt_disable();
>  	mutex_acquire_nest(&lock->dep_map, subclass, 0, nest_lock, ip);
>  