[Intel-xe] [PATCH] drm/xe: Fix lockdep warning in xe_force_wake calls
Ville Syrjälä
ville.syrjala at linux.intel.com
Fri Nov 24 07:19:15 UTC 2023
On Fri, Nov 24, 2023 at 12:14:08PM +0530, Aravind Iddamsetty wrote:
> Introduce an atomic version of the xe_force_wake calls, which uses
> spin_lock(), while the non-atomic version uses spin_lock_irq().
>
> Fix for the lockdep warning below:
> [13994.811263] ========================================================
> [13994.811295] WARNING: possible irq lock inversion dependency detected
> [13994.811326] 6.6.0-rc3-xe #2 Tainted: G U
> [13994.811358] --------------------------------------------------------
> [13994.811388] swapper/0/0 just changed the state of lock:
> [13994.811416] ffff895c7e044db8 (&cpuctx_lock){-...}-{2:2}, at: __perf_event_read+0xb7/0x3a0
> [13994.811494] but this lock took another, HARDIRQ-unsafe lock in the past:
> [13994.811528] (&fw->lock){+.+.}-{2:2}
> [13994.811544]
>
> and interrupts could create inverse lock ordering between them.
>
> [13994.811606]
> other info that might help us debug this:
> [13994.811636] Possible interrupt unsafe locking scenario:
>
> [13994.811667]        CPU0                    CPU1
> [13994.811691]        ----                    ----
> [13994.811715]   lock(&fw->lock);
> [13994.811744]                                local_irq_disable();
> [13994.811773]                                lock(&cpuctx_lock);
> [13994.811810]                                lock(&fw->lock);
> [13994.811846]   <Interrupt>
> [13994.811865]     lock(&cpuctx_lock);
> [13994.811895]
> *** DEADLOCK ***
>
> v2: Use spin_lock() in atomic context and spin_lock_irq() in a
> non-atomic context (Matthew Brost)
No idea what this "atomic context" means, but looks like
you just want to use spin_lock_irqsave() & co.
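Untested sketch of what I mean (locking only; the domain wake/wait
handling stays as in your patch, and the separate "atomic" variants
go away entirely):

int xe_force_wake_get(struct xe_force_wake *fw,
		      enum xe_force_wake_domains domains)
{
	unsigned long flags;
	int ret2 = 0;

	/*
	 * spin_lock_irqsave() stashes the current interrupt state in
	 * 'flags' and disables interrupts, so the same function works
	 * both from process context and from paths that already run
	 * with interrupts off (e.g. the PMU read side).
	 */
	spin_lock_irqsave(&fw->lock, flags);

	/* ... wake the requested domains, as in the patch ... */

	/*
	 * spin_unlock_irqrestore() restores whatever the interrupt
	 * state was, instead of unconditionally re-enabling IRQs the
	 * way spin_unlock_irq() does.
	 */
	spin_unlock_irqrestore(&fw->lock, flags);

	return ret2;
}

Same pattern for xe_force_wake_put(), and then there's no need for
separate atomic/non-atomic entry points at all.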
>
> Cc: Matthew Brost <matthew.brost at intel.com>
> Cc: Anshuman Gupta <anshuman.gupta at intel.com>
> Cc: Umesh Nerlige Ramappa <umesh.nerlige.ramappa at intel.com>
> Signed-off-by: Aravind Iddamsetty <aravind.iddamsetty at linux.intel.com>
> ---
> drivers/gpu/drm/xe/xe_force_wake.c | 62 +++++++++++++++++++++++++++++-
> drivers/gpu/drm/xe/xe_force_wake.h | 4 ++
> drivers/gpu/drm/xe/xe_pmu.c | 4 +-
> 3 files changed, 66 insertions(+), 4 deletions(-)
>
> diff --git a/drivers/gpu/drm/xe/xe_force_wake.c b/drivers/gpu/drm/xe/xe_force_wake.c
> index 32d6c4dd2807..1693097f72d3 100644
> --- a/drivers/gpu/drm/xe/xe_force_wake.c
> +++ b/drivers/gpu/drm/xe/xe_force_wake.c
> @@ -147,7 +147,7 @@ int xe_force_wake_get(struct xe_force_wake *fw,
> enum xe_force_wake_domains tmp, woken = 0;
> int ret, ret2 = 0;
>
> - spin_lock(&fw->lock);
> + spin_lock_irq(&fw->lock);
> for_each_fw_domain_masked(domain, domains, fw, tmp) {
> if (!domain->ref++) {
> woken |= BIT(domain->id);
> @@ -162,7 +162,7 @@ int xe_force_wake_get(struct xe_force_wake *fw,
> domain->id, ret);
> }
> fw->awake_domains |= woken;
> - spin_unlock(&fw->lock);
> + spin_unlock_irq(&fw->lock);
>
> return ret2;
> }
> @@ -176,6 +176,64 @@ int xe_force_wake_put(struct xe_force_wake *fw,
> enum xe_force_wake_domains tmp, sleep = 0;
> int ret, ret2 = 0;
>
> + spin_lock_irq(&fw->lock);
> + for_each_fw_domain_masked(domain, domains, fw, tmp) {
> + if (!--domain->ref) {
> + sleep |= BIT(domain->id);
> + domain_sleep(gt, domain);
> + }
> + }
> + for_each_fw_domain_masked(domain, sleep, fw, tmp) {
> + ret = domain_sleep_wait(gt, domain);
Why on earth are we waiting here?
And why is all this stuff called "sleep something"?
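To spell out the first question (illustration only; I'm assuming from
the name that domain_sleep_wait() busy-polls the hardware for an ack):

	spin_lock_irq(&fw->lock);	/* interrupts off from here */
	for_each_fw_domain_masked(domain, domains, fw, tmp) {
		if (!--domain->ref) {
			sleep |= BIT(domain->id);
			domain_sleep(gt, domain);	/* request the sleep */
		}
	}
	for_each_fw_domain_masked(domain, sleep, fw, tmp)
		/* polls for the hardware ack with interrupts still off */
		ret = domain_sleep_wait(gt, domain);
	spin_unlock_irq(&fw->lock);	/* interrupts back on only here */

So every put now spins on the hardware with interrupts disabled for
however long the ack takes.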
--
Ville Syrjälä
Intel