[PATCH] drm/xe/pm: Move xe_rpm_lockmap_acquire
Kandpal, Suraj
suraj.kandpal at intel.com
Wed Sep 11 12:01:17 UTC 2024
> -----Original Message-----
> From: Deak, Imre <imre.deak at intel.com>
> Sent: Wednesday, September 11, 2024 5:05 PM
> To: Kandpal, Suraj <suraj.kandpal at intel.com>
> Cc: intel-xe at lists.freedesktop.org; Shankar, Uma <uma.shankar at intel.com>
> Subject: Re: [PATCH] drm/xe/pm: Move xe_rpm_lockmap_acquire
>
> On Wed, Sep 11, 2024 at 03:00:25PM +0530, Suraj Kandpal wrote:
> > Move xe_rpm_lockmap_acquire after the display_pm_suspend and resume
> > functions to avoid a circular locking dependency caused by the locks
> > taken in the intel_fbdev and intel_dp_mst_mgr suspend and resume
> > functions.
> >
> > Signed-off-by: Suraj Kandpal <suraj.kandpal at intel.com>
>
> The actual problem is that MST is being suspended during runtime suspend. This
> is not only unnecessary (it adds overhead), it is also incorrect, as it
> involves AUX transfers, which themselves depend on the device being runtime
> resumed. This is what lockdep is also trying to say.
>
> So the solution would be not to suspend/resume MST during runtime
> suspend/resume.
I think the same would then also apply to intel_fbdev_set_suspend(): it should
not be called from the runtime suspend/resume path either (a rough sketch of
that idea follows the trace below). This is where we see:
<4> [213.826919] -> #3 (xe_rpm_d3cold_map){+.+.}-{0:0}:
<4> [213.826924] xe_rpm_lockmap_acquire+0x5f/0x70 [xe]
<4> [213.827102] xe_pm_runtime_get+0x59/0x110 [xe]
<4> [213.827270] xe_gem_fault+0x85/0x280 [xe]
<4> [213.827384] __do_fault+0x36/0x140
<4> [213.827391] do_pte_missing+0x68/0xe10
<4> [213.827401] __handle_mm_fault+0x7a6/0xe60
<4> [213.827406] handle_mm_fault+0x12e/0x2a0
<4> [213.827411] do_user_addr_fault+0x366/0x970
<4> [213.827418] exc_page_fault+0x87/0x2b0
<4> [213.827423] asm_exc_page_fault+0x27/0x30
<4> [213.827428] -> #2 (&mm->mmap_lock){++++}-{3:3}:
<4> [213.827432] __might_fault+0x63/0x90
<4> [213.827435] _copy_to_user+0x23/0x70
<4> [213.827441] tty_ioctl+0x846/0x9a0
<4> [213.827447] __x64_sys_ioctl+0x95/0xd0
<4> [213.827453] x64_sys_call+0x1205/0x20d0
<4> [213.827459] do_syscall_64+0x85/0x140
<4> [213.827464] entry_SYSCALL_64_after_hwframe+0x76/0x7e
<4> [213.827468] -> #1 (&tty->winsize_mutex){+.+.}-{3:3}:
<4> [213.827471] __mutex_lock+0x9a/0xde0
<4> [213.827476] mutex_lock_nested+0x1b/0x30
<4> [213.827480] tty_do_resize+0x27/0x90
<4> [213.827482] vc_do_resize+0x3ee/0x550
<4> [213.827488] __vc_resize+0x23/0x30
<4> [213.827493] fbcon_do_set_font+0x140/0x2f0
<4> [213.827498] fbcon_set_font+0x30a/0x530
<4> [213.827500] con_font_op+0x284/0x410
<4> [213.827503] vt_ioctl+0x3dd/0x1580
<4> [213.827507] tty_ioctl+0x39e/0x9a0
<4> [213.827510] __x64_sys_ioctl+0x95/0xd0
<4> [213.827515] x64_sys_call+0x1205/0x20d0
<4> [213.827518] do_syscall_64+0x85/0x140
<4> [213.827523] entry_SYSCALL_64_after_hwframe+0x76/0x7e
<4> [213.827526] -> #0 (console_lock){+.+.}-{0:0}:
<4> [213.827530] __lock_acquire+0x126b/0x26f0
<4> [213.827536] lock_acquire+0xc7/0x2e0
<4> [213.827539] console_lock+0x54/0xa0
<4> [213.827545] intel_fbdev_set_suspend+0x169/0x1f0 [xe]
<4> [213.827729] xe_display_pm_suspend+0x6a/0x260 [xe]
<4> [213.827945] xe_display_pm_runtime_suspend+0x4b/0x70 [xe]
<4> [213.828158] xe_pm_runtime_suspend+0xbc/0x3c0 [xe]
<4> [213.828327] xe_pci_runtime_suspend+0x1f/0xc0 [xe]
<4> [213.828491] pci_pm_runtime_suspend+0x6a/0x1e0
<4> [213.828497] __rpm_callback+0x48/0x120
<4> [213.828505] rpm_callback+0x60/0x70
<4> [213.828509] rpm_suspend+0x124/0x650
<4> [213.828515] rpm_idle+0x237/0x3d0
<4> [213.828520] pm_runtime_work+0x9f/0xd0
<4> [213.828523] process_scheduled_works+0x39f/0x730
<4> [213.828527] worker_thread+0x14f/0x2c0
<4> [213.828529] kthread+0xf5/0x130
<4> [213.828533] ret_from_fork+0x39/0x60
<4> [213.828538] ret_from_fork_asm+0x1a/0x30
<4> [213.828542] other info that might help us debug this:
<4> [213.828543] Chain exists of:
                   console_lock --> &mm->mmap_lock --> xe_rpm_d3cold_map
<4> [213.828548] Possible unsafe locking scenario:
<4> [213.828549]        CPU0                    CPU1
<4> [213.828550]        ----                    ----
<4> [213.828551]   lock(xe_rpm_d3cold_map);
<4> [213.828553]                              lock(&mm->mmap_lock);
<4> [213.828555]                              lock(xe_rpm_d3cold_map);
<4> [213.828557]   lock(console_lock);
<4> [213.828559] *** DEADLOCK ***
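If the fbdev and MST handling is kept off the runtime path entirely, a
minimal sketch could look like the below. To be clear, this is only an
illustration of the idea: the "runtime" parameter and the probe_display
early return are assumptions about how the plumbing might look, not the
current code.

void xe_display_pm_suspend(struct xe_device *xe, bool runtime)
{
	if (!xe->info.probe_display)
		return;

	if (!runtime) {
		/* Head of the lockdep chain above: takes console_lock. */
		intel_fbdev_set_suspend(&xe->drm, FBINFO_STATE_SUSPENDED, true);
		/* Suspending MST issues AUX transfers, which themselves
		 * need the device to be runtime resumed. */
		intel_dp_mst_suspend(xe);
	}

	/* ...rest of the display suspend sequence unchanged... */
}

With that, xe_pm_runtime_suspend() would never reach console_lock or the
MST AUX path in the first place.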
Regards,
Suraj Kandpal
>
> > ---
> > drivers/gpu/drm/xe/xe_pm.c | 28 ++++++++++++++--------------
> > 1 file changed, 14 insertions(+), 14 deletions(-)
> >
> > diff --git a/drivers/gpu/drm/xe/xe_pm.c b/drivers/gpu/drm/xe/xe_pm.c
> > index a3d1509066f7..7f33e553728a 100644
> > --- a/drivers/gpu/drm/xe/xe_pm.c
> > +++ b/drivers/gpu/drm/xe/xe_pm.c
> > @@ -363,6 +363,18 @@ int xe_pm_runtime_suspend(struct xe_device *xe)
> >  	/* Disable access_ongoing asserts and prevent recursive pm calls */
> >  	xe_pm_write_callback_task(xe, current);
> >
> > +	/*
> > +	 * Applying lock for entire list op as xe_ttm_bo_destroy and xe_bo_move_notify
> > +	 * also checks and deletes bo entry from user fault list.
> > +	 */
> > +	mutex_lock(&xe->mem_access.vram_userfault.lock);
> > +	list_for_each_entry_safe(bo, on,
> > +				 &xe->mem_access.vram_userfault.list, vram_userfault_link)
> > +		xe_bo_runtime_pm_release_mmap_offset(bo);
> > +	mutex_unlock(&xe->mem_access.vram_userfault.lock);
> > +
> > +	xe_display_pm_runtime_suspend(xe);
> > +
> >  	/*
> >  	 * The actual xe_pm_runtime_put() is always async underneath, so
> >  	 * exactly where that is called should make no difference to us. However
> > @@ -386,18 +398,6 @@ int xe_pm_runtime_suspend(struct xe_device *xe)
> >  	 */
> >  	xe_rpm_lockmap_acquire(xe);
> >
> > -	/*
> > -	 * Applying lock for entire list op as xe_ttm_bo_destroy and xe_bo_move_notify
> > -	 * also checks and deletes bo entry from user fault list.
> > -	 */
> > -	mutex_lock(&xe->mem_access.vram_userfault.lock);
> > -	list_for_each_entry_safe(bo, on,
> > -				 &xe->mem_access.vram_userfault.list, vram_userfault_link)
> > -		xe_bo_runtime_pm_release_mmap_offset(bo);
> > -	mutex_unlock(&xe->mem_access.vram_userfault.lock);
> > -
> > -	xe_display_pm_runtime_suspend(xe);
> > -
> >  	if (xe->d3cold.allowed) {
> >  		err = xe_bo_evict_all(xe);
> >  		if (err)
> > @@ -438,8 +438,6 @@ int xe_pm_runtime_resume(struct xe_device *xe)
> >  	/* Disable access_ongoing asserts and prevent recursive pm calls */
> >  	xe_pm_write_callback_task(xe, current);
> >
> > -	xe_rpm_lockmap_acquire(xe);
> > -
> >  	if (xe->d3cold.allowed) {
> >  		err = xe_pcode_ready(xe, true);
> >  		if (err)
> > @@ -463,6 +461,8 @@ int xe_pm_runtime_resume(struct xe_device *xe)
> >
> >  	xe_display_pm_runtime_resume(xe);
> >
> > +	xe_rpm_lockmap_acquire(xe);
> > +
> >  	if (xe->d3cold.allowed) {
> >  		err = xe_bo_restore_user(xe);
> >  		if (err)
> > --
> > 2.43.2
> >
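For quick reference, the resulting order of operations in
xe_pm_runtime_suspend() is the following (condensed from the hunks above,
surrounding code elided):

int xe_pm_runtime_suspend(struct xe_device *xe)
{
	struct xe_bo *bo, *on;
	/* ... */
	xe_pm_write_callback_task(xe, current);

	/* 1) Revoke VRAM mmap offsets; CPU faults on these BOs take
	 * mmap_lock and then the rpm lockmap (see #3 in the trace). */
	mutex_lock(&xe->mem_access.vram_userfault.lock);
	list_for_each_entry_safe(bo, on,
				 &xe->mem_access.vram_userfault.list, vram_userfault_link)
		xe_bo_runtime_pm_release_mmap_offset(bo);
	mutex_unlock(&xe->mem_access.vram_userfault.lock);

	/* 2) Suspend display; the fbdev path can take console_lock. */
	xe_display_pm_runtime_suspend(xe);

	/*
	 * 3) Only now enter the rpm lockmap, so console_lock and
	 * mmap_lock are never acquired inside xe_rpm_d3cold_map and
	 * the reported cycle cannot form.
	 */
	xe_rpm_lockmap_acquire(xe);
	/* ... */
}

xe_pm_runtime_resume() is reordered symmetrically: the lockmap is entered
only after xe_display_pm_runtime_resume().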