[Intel-xe] [PATCH] drm/xe: Fix lockdep warning in xe_force_wake calls

Ville Syrjälä ville.syrjala at linux.intel.com
Fri Nov 24 08:37:29 UTC 2023


On Fri, Nov 24, 2023 at 02:01:27PM +0530, Aravind Iddamsetty wrote:
> 
> On 11/24/23 12:49, Ville Syrjälä wrote:
> > On Fri, Nov 24, 2023 at 12:14:08PM +0530, Aravind Iddamsetty wrote:
> >> Introduce atomic version for xe_force_wake calls which uses spin_lock
> >> while the non atomic version uses spin_lock_irq
> >>
> >> Fix for below:
> >> [13994.811263] ========================================================
> >> [13994.811295] WARNING: possible irq lock inversion dependency detected
> >> [13994.811326] 6.6.0-rc3-xe #2 Tainted: G     U
> >> [13994.811358] --------------------------------------------------------
> >> [13994.811388] swapper/0/0 just changed the state of lock:
> >> [13994.811416] ffff895c7e044db8 (&cpuctx_lock){-...}-{2:2}, at: __perf_event_read+0xb7/0x3a0
> >> [13994.811494] but this lock took another, HARDIRQ-unsafe lock in the past:
> >> [13994.811528]  (&fw->lock){+.+.}-{2:2}
> >> [13994.811544]
> >>
> >>                and interrupts could create inverse lock ordering between them.
> >>
> >> [13994.811606]
> >>                other info that might help us debug this:
> >> [13994.811636]  Possible interrupt unsafe locking scenario:
> >>
> >> [13994.811667]        CPU0                    CPU1
> >> [13994.811691]        ----                    ----
> >> [13994.811715]   lock(&fw->lock);
> >> [13994.811744]                                local_irq_disable();
> >> [13994.811773]                                lock(&cpuctx_lock);
> >> [13994.811810]                                lock(&fw->lock);
> >> [13994.811846]   <Interrupt>
> >> [13994.811865]     lock(&cpuctx_lock);
> >> [13994.811895]
> >>                 *** DEADLOCK ***
> >>
> >> v2: Use spin_lock in an atomic context and spin_lock_irq in a non-atomic
> >> context (Matthew Brost)
> > No idea what this "atomic context" means, but it looks like
> > you just want to use spin_lock_irqsave() & co.
> atomic context: where sleeping is not allowed.

That has nothing to do with your lockdep spew. Also spinlocks don't
sleep by definition (if we ignore the RT spinlock->mutex magic).
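
To spell out what lockdep is actually complaining about (illustration
only, not the real xe/perf code paths): once fw->lock is taken from
hardirq context -- here via the perf event read -- every other
acquisition has to disable interrupts, otherwise the irq can fire while
the lock is held on the same CPU and then spin on it forever:

#include <linux/spinlock.h>

static DEFINE_SPINLOCK(fw_lock);

/* Runs in hardirq context, interrupts already disabled. */
static void pmu_event_read(void)
{
	spin_lock(&fw_lock);
	/* ... read busyness counters ... */
	spin_unlock(&fw_lock);
}

/* Process context. */
static void forcewake_get_broken(void)
{
	spin_lock(&fw_lock);	/* irqs still enabled: BUG */
	/* <irq fires here, pmu_event_read() spins on fw_lock: deadlock> */
	spin_unlock(&fw_lock);
}

/* Safe from any context: save and restore the irq state. */
static void forcewake_get_fixed(void)
{
	unsigned long flags;

	spin_lock_irqsave(&fw_lock, flags);
	spin_unlock_irqrestore(&fw_lock, flags);
}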

> Well, that is what I had in v1, but Matt suggested we should know
> explicitly where force wake is being called from and, depending on
> that, use the spin_lock or spin_lock_irq variants.

Duplicating tons of code for that is silly. I seriously doubt anyone
benchmarked this and saw a meaningful improvement from skipping the
irq state save/restore.
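
Something like this would be enough (untested sketch based on the
quoted diff; domain_wake()/domain_wake_wait() are guessed from the
domain_sleep()/domain_sleep_wait() counterparts, the real bodies may
differ):

int xe_force_wake_get(struct xe_force_wake *fw,
		      enum xe_force_wake_domains domains)
{
	struct xe_gt *gt = fw->gt;
	struct xe_force_wake_domain *domain;
	enum xe_force_wake_domains tmp, woken = 0;
	unsigned long flags;
	int ret, ret2 = 0;

	/* irqsave makes this safe from both process and irq context,
	 * so no duplicated atomic/non-atomic entry points are needed.
	 */
	spin_lock_irqsave(&fw->lock, flags);
	for_each_fw_domain_masked(domain, domains, fw, tmp) {
		if (!domain->ref++) {
			woken |= BIT(domain->id);
			domain_wake(gt, domain);
		}
	}
	for_each_fw_domain_masked(domain, woken, fw, tmp) {
		ret = domain_wake_wait(gt, domain);
		ret2 |= ret;
	}
	fw->awake_domains |= woken;
	spin_unlock_irqrestore(&fw->lock, flags);

	return ret2;
}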

> >
> >> Cc: Matthew Brost <matthew.brost at intel.com>
> >> Cc: Anshuman Gupta <anshuman.gupta at intel.com>
> >> Cc: Umesh Nerlige Ramappa <umesh.nerlige.ramappa at intel.com>
> >> Signed-off-by: Aravind Iddamsetty <aravind.iddamsetty at linux.intel.com>
> >> ---
> >>  drivers/gpu/drm/xe/xe_force_wake.c | 62 +++++++++++++++++++++++++++++-
> >>  drivers/gpu/drm/xe/xe_force_wake.h |  4 ++
> >>  drivers/gpu/drm/xe/xe_pmu.c        |  4 +-
> >>  3 files changed, 66 insertions(+), 4 deletions(-)
> >>
> >> diff --git a/drivers/gpu/drm/xe/xe_force_wake.c b/drivers/gpu/drm/xe/xe_force_wake.c
> >> index 32d6c4dd2807..1693097f72d3 100644
> >> --- a/drivers/gpu/drm/xe/xe_force_wake.c
> >> +++ b/drivers/gpu/drm/xe/xe_force_wake.c
> >> @@ -147,7 +147,7 @@ int xe_force_wake_get(struct xe_force_wake *fw,
> >>  	enum xe_force_wake_domains tmp, woken = 0;
> >>  	int ret, ret2 = 0;
> >>  
> >> -	spin_lock(&fw->lock);
> >> +	spin_lock_irq(&fw->lock);
> >>  	for_each_fw_domain_masked(domain, domains, fw, tmp) {
> >>  		if (!domain->ref++) {
> >>  			woken |= BIT(domain->id);
> >> @@ -162,7 +162,7 @@ int xe_force_wake_get(struct xe_force_wake *fw,
> >>  				   domain->id, ret);
> >>  	}
> >>  	fw->awake_domains |= woken;
> >> -	spin_unlock(&fw->lock);
> >> +	spin_unlock_irq(&fw->lock);
> >>  
> >>  	return ret2;
> >>  }
> >> @@ -176,6 +176,64 @@ int xe_force_wake_put(struct xe_force_wake *fw,
> >>  	enum xe_force_wake_domains tmp, sleep = 0;
> >>  	int ret, ret2 = 0;
> >>  
> >> +	spin_lock_irq(&fw->lock);
> >> +	for_each_fw_domain_masked(domain, domains, fw, tmp) {
> >> +		if (!--domain->ref) {
> >> +			sleep |= BIT(domain->id);
> >> +			domain_sleep(gt, domain);
> >> +		}
> >> +	}
> >> +	for_each_fw_domain_masked(domain, sleep, fw, tmp) {
> >> +		ret = domain_sleep_wait(gt, domain);
> > Why on earth are we waiting here?
> >
> > Why is all this stuff called "sleep something"?
> To my knowledge the HW can take some time to ack the forcewake request

We are *releasing* the forcewake here, not acquiring it.

> that is why we have a wait. Regarding the naming, it has existed from
> before; maybe Matt can answer that.
> 
> 
> Thanks,
> Aravind.
> >
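
As for the HW taking time to ack: sure, but that wait is just a bounded
register poll (hypothetical sketch below, with a made-up register
layout and made-up timeouts). Note that once the lock is held with
interrupts off, it has to be the non-sleeping _atomic poll variant:

#include <linux/iopoll.h>

/* Poll the domain's ack bit until the HW confirms the transition.
 * readl_poll_timeout_atomic() busy-waits with udelay() rather than
 * sleeping, so it is usable under a spinlock with irqs disabled.
 */
static int domain_ack_wait(void __iomem *ack_reg, u32 ack_bit, bool set)
{
	u32 val;

	/* poll every 10 us, give up after 50 ms (made-up numbers) */
	return readl_poll_timeout_atomic(ack_reg, val,
					 !!(val & ack_bit) == set,
					 10, 50 * 1000);
}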

-- 
Ville Syrjälä
Intel

