Lockdep splat on killing a process

Andrey Grodzovsky andrey.grodzovsky at amd.com
Mon Oct 4 15:27:17 UTC 2021


I see my confusion now: we hang all unsubmitted jobs on the last job 
submitted to HW.
Yes, in that case rescheduling to a different thread context will 
indeed avoid the splat, but the work cannot simply be scheduled once 
per dependency signalling; it has to be done the way we do it for 
ttm_bo_delayed_delete, with a list of dependencies to signal. 
Otherwise some of the scheduled work would be dropped because a 
previous invocation is still pending execution.
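
A minimal sketch of that list-based deferral (the names below are made 
up for illustration, this is not the actual scheduler code): each 
dependency callback only links an entry onto a list and kicks a single 
work item, so nothing is lost when schedule_work() finds the work 
already pending.

#include <linux/dma-fence.h>
#include <linux/list.h>
#include <linux/slab.h>
#include <linux/spinlock.h>
#include <linux/workqueue.h>
#include <drm/gpu_scheduler.h>

struct kill_entry {
	struct list_head node;
	struct dma_fence_cb cb;		/* registered on one dependency */
	struct drm_sched_fence *s_fence;	/* fence still to be signaled */
};

static LIST_HEAD(kill_list);
static DEFINE_SPINLOCK(kill_list_lock);
static void kill_work_func(struct work_struct *w);
static DECLARE_WORK(kill_work, kill_work_func);

static void kill_work_func(struct work_struct *w)
{
	struct kill_entry *e, *tmp;
	LIST_HEAD(local);

	/* Drain everything queued so far, outside of any fence->lock. */
	spin_lock_irq(&kill_list_lock);
	list_splice_init(&kill_list, &local);
	spin_unlock_irq(&kill_list_lock);

	list_for_each_entry_safe(e, tmp, &local, node) {
		drm_sched_fence_finished(e->s_fence);
		list_del(&e->node);
		kfree(e);
	}
}

/* Runs under the dependency's fence->lock, possibly in IRQ context. */
static void kill_dep_cb(struct dma_fence *f, struct dma_fence_cb *cb)
{
	struct kill_entry *e = container_of(cb, struct kill_entry, cb);
	unsigned long flags;

	spin_lock_irqsave(&kill_list_lock, flags);
	list_add_tail(&e->node, &kill_list);
	spin_unlock_irqrestore(&kill_list_lock, flags);

	schedule_work(&kill_work);
}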

Andrey

On 2021-10-04 4:14 a.m., Christian König wrote:
> The problem is a bit different.
>
> The callback is on the dependent fence, while we need to signal the 
> scheduler fence.
>
> Daniel is right that this needs an irq_work struct to handle this 
> properly.
>
> Christian.
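
(For illustration, a minimal sketch of what such an irq_work based 
deferral could look like; the container struct and function names 
below are made up and not the actual patch:)

#include <linux/dma-fence.h>
#include <linux/irq_work.h>
#include <drm/gpu_scheduler.h>

/* Hypothetical container bundling the dependency callback with an irq_work. */
struct kill_job_ctx {
	struct dma_fence_cb cb;		/* added on the dependent fence */
	struct irq_work work;		/* runs outside the fence callback */
	struct drm_sched_fence *s_fence;	/* scheduler fence to signal */
};

static void kill_job_irq_work(struct irq_work *w)
{
	struct kill_job_ctx *ctx = container_of(w, struct kill_job_ctx, work);

	/* The dependency's fence->lock is no longer held here. */
	drm_sched_fence_finished(ctx->s_fence);
}

/* Called from dma_fence_signal() with the dependency's fence->lock held. */
static void kill_job_dep_cb(struct dma_fence *f, struct dma_fence_cb *cb)
{
	struct kill_job_ctx *ctx = container_of(cb, struct kill_job_ctx, cb);

	irq_work_queue(&ctx->work);
}

/* Setup, e.g. when killing the entity's pending jobs:
 *	init_irq_work(&ctx->work, kill_job_irq_work);
 *	dma_fence_add_callback(dep_fence, &ctx->cb, kill_job_dep_cb);
 */
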
>
> Am 01.10.21 um 17:10 schrieb Andrey Grodzovsky:
>> From what I see here you are supposed to get an actual deadlock and 
>> not only a warning: sched_fence->finished is first signaled from within
>> the HW fence done callback (drm_sched_job_done_cb), but then again from 
>> within its own callback (drm_sched_entity_kill_jobs_cb), so it looks
>> like the same fence object is recursively signaled twice. This leads 
>> to an attempt to take fence->lock a second time while it is already
>> held. I don't see a need to call drm_sched_fence_finished from 
>> within drm_sched_entity_kill_jobs_cb, since this callback is already
>> registered on the sched_fence->finished fence (entity->last_scheduled == 
>> s_fence->finished) and hence the signaling has already taken place.
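
For context, drm_sched_entity_kill_jobs_cb at this point looks roughly 
like the following (paraphrased from memory rather than quoted verbatim 
from sched_entity.c); the drm_sched_fence_finished() call inside it is 
what nests the second dma_fence_signal() under the first one:

/* Paraphrased sketch of the callback under discussion, not verbatim
 * kernel source: it is invoked from dma_fence_signal_timestamp_locked()
 * on the fence it was added to and then signals the job's own finished
 * fence, i.e. dma_fence_signal() ends up nested inside dma_fence_signal(). */
static void drm_sched_entity_kill_jobs_cb(struct dma_fence *f,
					  struct dma_fence_cb *cb)
{
	struct drm_sched_job *job = container_of(cb, struct drm_sched_job,
						 finish_cb);

	drm_sched_fence_finished(job->s_fence);	/* second dma_fence_signal() */
	job->sched->ops->free_job(job);
}
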
>>
>> Andrey
>>
>> On 2021-10-01 6:50 a.m., Christian König wrote:
>>> Hey, Andrey.
>>>
>>> While investigating some memory management problems I got the 
>>> lockdep splat below.
>>>
>>> Looks like something is wrong with drm_sched_entity_kill_jobs_cb(), 
>>> can you investigate?
>>>
>>> Thanks,
>>> Christian.
>>>
>>> [11176.741052] ============================================
>>> [11176.741056] WARNING: possible recursive locking detected
>>> [11176.741060] 5.15.0-rc1-00031-g9d546d600800 #171 Not tainted
>>> [11176.741066] --------------------------------------------
>>> [11176.741070] swapper/12/0 is trying to acquire lock:
>>> [11176.741074] ffff9c337ed175a8 (&fence->lock){-.-.}-{3:3}, at: 
>>> dma_fence_signal+0x28/0x80
>>> [11176.741088]
>>>                but task is already holding lock:
>>> [11176.741092] ffff9c337ed172a8 (&fence->lock){-.-.}-{3:3}, at: 
>>> dma_fence_signal+0x28/0x80
>>> [11176.741100]
>>>                other info that might help us debug this:
>>> [11176.741104]  Possible unsafe locking scenario:
>>>
>>> [11176.741108]        CPU0
>>> [11176.741110]        ----
>>> [11176.741113]   lock(&fence->lock);
>>> [11176.741118]   lock(&fence->lock);
>>> [11176.741122]
>>>                 *** DEADLOCK ***
>>>
>>> [11176.741125]  May be due to missing lock nesting notation
>>>
>>> [11176.741128] 2 locks held by swapper/12/0:
>>> [11176.741133]  #0: ffff9c339c30f768 
>>> (&ring->fence_drv.lock){-.-.}-{3:3}, at: dma_fence_signal+0x28/0x80
>>> [11176.741142]  #1: ffff9c337ed172a8 (&fence->lock){-.-.}-{3:3}, at: 
>>> dma_fence_signal+0x28/0x80
>>> [11176.741151]
>>>                stack backtrace:
>>> [11176.741155] CPU: 12 PID: 0 Comm: swapper/12 Not tainted 
>>> 5.15.0-rc1-00031-g9d546d600800 #171
>>> [11176.741160] Hardware name: System manufacturer System Product 
>>> Name/PRIME X399-A, BIOS 0808 10/12/2018
>>> [11176.741165] Call Trace:
>>> [11176.741169]  <IRQ>
>>> [11176.741173]  dump_stack_lvl+0x5b/0x74
>>> [11176.741181]  dump_stack+0x10/0x12
>>> [11176.741186]  __lock_acquire.cold+0x208/0x2df
>>> [11176.741197]  lock_acquire+0xc6/0x2d0
>>> [11176.741204]  ? dma_fence_signal+0x28/0x80
>>> [11176.741212]  _raw_spin_lock_irqsave+0x4d/0x70
>>> [11176.741219]  ? dma_fence_signal+0x28/0x80
>>> [11176.741225]  dma_fence_signal+0x28/0x80
>>> [11176.741230]  drm_sched_fence_finished+0x12/0x20 [gpu_sched]
>>> [11176.741240]  drm_sched_entity_kill_jobs_cb+0x1c/0x50 [gpu_sched]
>>> [11176.741248]  dma_fence_signal_timestamp_locked+0xac/0x1a0
>>> [11176.741254]  dma_fence_signal+0x3b/0x80
>>> [11176.741260]  drm_sched_fence_finished+0x12/0x20 [gpu_sched]
>>> [11176.741268]  drm_sched_job_done.isra.0+0x7f/0x1a0 [gpu_sched]
>>> [11176.741277]  drm_sched_job_done_cb+0x12/0x20 [gpu_sched]
>>> [11176.741284]  dma_fence_signal_timestamp_locked+0xac/0x1a0
>>> [11176.741290]  dma_fence_signal+0x3b/0x80
>>> [11176.741296]  amdgpu_fence_process+0xd1/0x140 [amdgpu]
>>> [11176.741504]  sdma_v4_0_process_trap_irq+0x8c/0xb0 [amdgpu]
>>> [11176.741731]  amdgpu_irq_dispatch+0xce/0x250 [amdgpu]
>>> [11176.741954]  amdgpu_ih_process+0x81/0x100 [amdgpu]
>>> [11176.742174]  amdgpu_irq_handler+0x26/0xa0 [amdgpu]
>>> [11176.742393]  __handle_irq_event_percpu+0x4f/0x2c0
>>> [11176.742402]  handle_irq_event_percpu+0x33/0x80
>>> [11176.742408]  handle_irq_event+0x39/0x60
>>> [11176.742414]  handle_edge_irq+0x93/0x1d0
>>> [11176.742419]  __common_interrupt+0x50/0xe0
>>> [11176.742426]  common_interrupt+0x80/0x90
>>> [11176.742431]  </IRQ>
>>> [11176.742436]  asm_common_interrupt+0x1e/0x40
>>> [11176.742442] RIP: 0010:cpuidle_enter_state+0xff/0x470
>>> [11176.742449] Code: 0f a3 05 04 54 24 01 0f 82 70 02 00 00 31 ff e8 
>>> 37 5d 6f ff 80 7d d7 00 0f 85 e9 01 00 00 e8 58 a2 7f ff fb 66 0f 1f 
>>> 44 00 00 <45> 85 ff 0f 88 01 01 00 00 49 63 c7 4c 2b 75 c8 48 8d 14 
>>> 40 48 8d
>>> [11176.742455] RSP: 0018:ffffb6970021fe48 EFLAGS: 00000202
>>> [11176.742461] RAX: 000000000059be25 RBX: 0000000000000002 RCX: 
>>> 0000000000000000
>>> [11176.742465] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 
>>> ffffffff9efeed78
>>> [11176.742470] RBP: ffffb6970021fe80 R08: 0000000000000001 R09: 
>>> 0000000000000001
>>> [11176.742473] R10: 0000000000000001 R11: 0000000000000001 R12: 
>>> ffff9c3350b0e800
>>> [11176.742477] R13: ffffffffa00e9680 R14: 00000a2a49ada060 R15: 
>>> 0000000000000002
>>> [11176.742483]  ? cpuidle_enter_state+0xf8/0x470
>>> [11176.742489]  ? cpuidle_enter_state+0xf8/0x470
>>> [11176.742495]  cpuidle_enter+0x2e/0x40
>>> [11176.742500]  call_cpuidle+0x23/0x40
>>> [11176.742506]  do_idle+0x201/0x280
>>> [11176.742512]  cpu_startup_entry+0x20/0x30
>>> [11176.742517]  start_secondary+0x11f/0x160
>>> [11176.742523]  secondary_startup_64_no_verify+0xb0/0xbb
>>>
>