<div dir="ltr"><div>Thanks for the feedback. The problem is nevertheless real and breaks userspace apps when my patch is not in use. I have actually spent today investigating and testing another GPU hang bug that was originally reported by others on gfx1010/AMD RX 5700. I initially thought the bug was different because I was not able to trigger it with the test app that crashes the kernel on gfx1103.</div><div><br></div><div>On gfx1010 I need to run the PyTorch GPU benchmark, which does heavier calculation. On the kernel side the symptom is the same: the kernel fails to remove the queue during the same kind of evict/restore cycle that it seems to perform constantly. This bug has one annoying side effect: a regular user-level reboot will hang, requiring the power button to shut down the device. (echo b >/proc/sysrq-trigger works sometimes.)</div><div><br></div><div>Anyway, I have managed to get the gfx1010 to also stay stable and finish the benchmarks by applying a similar fix/workaround that prevents the queue removal/restore from happening in the evict and restore methods.<br></div><div><br></div><div>It may or may not be a firmware bug in reality; it is hard to debug as I do not have access to the firmware code. But I think this should be fixed somehow anyway. (The kernel has tons of workarounds for other broken firmware and hardware problems.)<br></div><div><br></div><div>I can, however, try to approach this in some other way as well; would you have any suggestions? I have only played with the recent AMD GPU kernel driver stack for a couple of days, so I am probably missing something, but here are two observations/questions I have in mind:<br></div><div><br></div><div>1) Is it really necessary to evict/restore the queues on the firmware side as well, before they actually need to be deleted permanently? 
I mean, would it be enough to just mark the queues disabled/enabled in a kernel structure when pre-emption happens?<br></div><div><br></div><div>2) dqm_lock, which protects the queue lists that are removed/restored, uses memalloc_noreclaim_save/restore calls that, according to the documentation, can easily cause problems if fs calls or recursion happen under them. Could userspace trigger that problem through some amdgpu-specific sysfs interface calls? Or can the MES firmware somehow call back into kernel functions and cause a recursive loop while the queue remove calls are being performed?</div><div><br></div><div>Below is the gfx1010 dmesg with added trace calls that reveal the kernel's problems with queues while using that device.</div><div>I have again added some extra trace printouts that show the function name when it is entered, together with the caller method.<br></div><div><br></div><div>[ 884.437695] amdgpu: kgd2kfd_quiesce_mm called by amdgpu_amdkfd_evict_userptr<br>[ 884.437704] amdgpu: evict_process_queues_cpsch started<br>[ 884.443511] amdgpu: kgd2kfd_resume_mm called by amdgpu_amdkfd_restore_userptr_worker<br>[ 884.443520] amdgpu: restore_process_queues_cpsch started<br>[ 907.375917] amdgpu: evict_process_queues_cpsch started<br>[ 907.375981] amdgpu: evict_process_worker Finished evicting pasid 0x8005<br>[ 907.483535] amdgpu: restore_process_queues_cpsch started<br>[ 909.013279] amdgpu: kgd2kfd_quiesce_mm called by svm_range_evict<br>[ 909.013286] amdgpu: evict_process_queues_cpsch started<br>[ 909.033675] amdgpu: kgd2kfd_quiesce_mm called by amdgpu_amdkfd_evict_userptr<br>[ 909.033681] amdgpu: evict_process_queues_cpsch started<br>[ 909.059674] amdgpu: kgd2kfd_resume_mm called by amdgpu_amdkfd_restore_userptr_worker<br>[ 909.059680] amdgpu: restore_process_queues_cpsch started<br>[ 909.082565] amdgpu: kgd2kfd_quiesce_mm called by amdgpu_amdkfd_evict_userptr<br>[ 909.082572] amdgpu: evict_process_queues_cpsch started<br>[ 909.295184] amdgpu: kgd2kfd_resume_mm 
called by amdgpu_amdkfd_restore_userptr_worker<br>[ 909.295190] amdgpu: restore_process_queues_cpsch started<br>[ 909.608840] amdgpu: kgd2kfd_resume_mm called by svm_range_restore_work<br>[ 909.608846] amdgpu: restore_process_queues_cpsch started<br>[ 966.354867] amdgpu: kgd2kfd_quiesce_mm called by amdgpu_amdkfd_evict_userptr<br>[ 966.354876] amdgpu: evict_process_queues_cpsch started<br>[ 966.361293] amdgpu: kgd2kfd_resume_mm called by amdgpu_amdkfd_restore_userptr_worker<br>[ 966.361303] amdgpu: restore_process_queues_cpsch started<br>[ 984.457200] amdgpu: evict_process_queues_cpsch started<br>[ 984.457261] amdgpu: evict_process_worker Finished evicting pasid 0x8005<br>[ 984.562403] amdgpu: restore_process_queues_cpsch started<br>[ 984.628620] amdgpu: kgd2kfd_quiesce_mm called by svm_range_evict<br>[ 984.628627] amdgpu: evict_process_queues_cpsch started<br>[ 984.650436] amdgpu: kgd2kfd_quiesce_mm called by amdgpu_amdkfd_evict_userptr<br>[ 984.650443] amdgpu: evict_process_queues_cpsch started<br>[ 984.718544] amdgpu: kgd2kfd_resume_mm called by amdgpu_amdkfd_restore_userptr_worker<br>[ 984.718550] amdgpu: restore_process_queues_cpsch started<br>[ 984.738360] amdgpu: kgd2kfd_quiesce_mm called by amdgpu_amdkfd_evict_userptr<br>[ 984.738367] amdgpu: evict_process_queues_cpsch started<br>[ 984.765031] amdgpu: kgd2kfd_resume_mm called by amdgpu_amdkfd_restore_userptr_worker<br>[ 984.765038] amdgpu: restore_process_queues_cpsch started<br>[ 984.785180] amdgpu: kgd2kfd_quiesce_mm called by amdgpu_amdkfd_evict_userptr<br>[ 984.785187] amdgpu: evict_process_queues_cpsch started<br>[ 984.907430] amdgpu: kgd2kfd_resume_mm called by amdgpu_amdkfd_restore_userptr_worker<br>[ 984.907435] amdgpu: restore_process_queues_cpsch started<br>[ 984.930399] amdgpu: kgd2kfd_quiesce_mm called by amdgpu_amdkfd_evict_userptr<br>[ 984.930405] amdgpu: evict_process_queues_cpsch started<br>[ 984.956551] amdgpu: kgd2kfd_resume_mm called by amdgpu_amdkfd_restore_userptr_worker<br>[ 
984.956561] amdgpu: restore_process_queues_cpsch started<br>[ 985.288614] amdgpu: kgd2kfd_resume_mm called by svm_range_restore_work<br>[ 985.288621] amdgpu: restore_process_queues_cpsch started<br>[ 998.410978] amdgpu: evict_process_queues_cpsch started<br>[ 998.411041] amdgpu: evict_process_worker Finished evicting pasid 0x8005<br>[ 998.513922] amdgpu: restore_process_queues_cpsch started<br>[ 998.531861] amdgpu: kgd2kfd_quiesce_mm called by svm_range_evict<br>[ 998.531867] amdgpu: evict_process_queues_cpsch started<br>[ 998.553650] amdgpu: kgd2kfd_quiesce_mm called by amdgpu_amdkfd_evict_userptr<br>[ 998.553656] amdgpu: evict_process_queues_cpsch started<br>[ 998.581235] amdgpu: kgd2kfd_resume_mm called by amdgpu_amdkfd_restore_userptr_worker<br>[ 998.581241] amdgpu: restore_process_queues_cpsch started<br>[ 998.607168] amdgpu: kgd2kfd_quiesce_mm called by amdgpu_amdkfd_evict_userptr<br>[ 998.607174] amdgpu: evict_process_queues_cpsch started<br>[ 998.700499] amdgpu: kgd2kfd_resume_mm called by amdgpu_amdkfd_restore_userptr_worker<br>[ 998.700506] amdgpu: restore_process_queues_cpsch started<br>[ 998.718179] amdgpu: kgd2kfd_quiesce_mm called by amdgpu_amdkfd_evict_userptr<br>[ 998.718187] amdgpu: evict_process_queues_cpsch started<br>[ 998.810595] amdgpu: kgd2kfd_resume_mm called by amdgpu_amdkfd_restore_userptr_worker<br>[ 998.810603] amdgpu: restore_process_queues_cpsch started<br>[ 998.831776] amdgpu: kgd2kfd_quiesce_mm called by amdgpu_amdkfd_evict_userptr<br>[ 998.831782] amdgpu: evict_process_queues_cpsch started<br>[ 998.858199] amdgpu: kgd2kfd_resume_mm called by amdgpu_amdkfd_restore_userptr_worker<br>[ 998.858205] amdgpu: restore_process_queues_cpsch started<br>[ 998.880604] amdgpu: kgd2kfd_quiesce_mm called by amdgpu_amdkfd_evict_userptr<br>[ 998.880611] amdgpu: evict_process_queues_cpsch started<br>[ 998.912335] amdgpu: kgd2kfd_resume_mm called by amdgpu_amdkfd_restore_userptr_worker<br>[ 998.912343] amdgpu: restore_process_queues_cpsch started<br>[ 
999.237449] amdgpu: kgd2kfd_resume_mm called by svm_range_restore_work<br>[ 999.237455] amdgpu: restore_process_queues_cpsch started<br>[ 1058.513361] amdgpu: kgd2kfd_quiesce_mm called by amdgpu_amdkfd_evict_userptr<br>[ 1058.513373] amdgpu: evict_process_queues_cpsch started<br>[ 1062.513487] amdgpu 0000:03:00.0: amdgpu: Queue preemption failed for queue with doorbell_id: 80004008<br>[ 1062.513500] amdgpu 0000:03:00.0: amdgpu: Failed to evict process queue 0, caller: kgd2kfd_quiesce_mm<br>[ 1062.513503] amdgpu: Failed to quiesce KFD<br>[ 1062.513551] amdgpu 0000:03:00.0: amdgpu: GPU reset begin!<br>[ 1062.513628] amdgpu: evict_process_queues_cpsch started<br>[ 1062.513694] amdgpu 0000:03:00.0: amdgpu: Dumping IP State<br>[ 1062.517229] amdgpu 0000:03:00.0: amdgpu: Dumping IP State Completed<br>[ 1062.866910] amdgpu 0000:03:00.0: [drm:amdgpu_ring_test_helper [amdgpu]] *ERROR* ring kiq_0.2.1.0 test failed (-110)<br>[ 1062.867435] [drm:gfx_v10_0_hw_fini [amdgpu]] *ERROR* KCQ disable failed<br>[ 1062.915075] amdgpu 0000:03:00.0: amdgpu: BACO reset<br>[ 1062.937902] amdgpu: kgd2kfd_quiesce_mm called by svm_range_evict<br>[ 1062.937907] amdgpu: evict_process_queues_cpsch started</div><div><br></div><br><div><br></div></div><br><div class="gmail_quote gmail_quote_container"><div dir="ltr" class="gmail_attr">On Wed, Nov 27, 2024 at 3:50 PM Felix Kuehling <<a href="mailto:felix.kuehling@amd.com">felix.kuehling@amd.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><br>
On 2024-11-27 06:51, Christian König wrote:<br>
> Am 27.11.24 um 12:46 schrieb Mika Laitio:<br>
>> AMD gfx1103 / M780 iGPU will crash eventually when used for<br>
>> pytorch ML/AI operations on rocm sdk stack. After kernel error<br>
>> the application exits on error and linux desktop can itself<br>
>> sometimes either freeze or reset back to login screen.<br>
>><br>
>> Error will happen randomly when kernel calls <br>
>> evict_process_queues_cpsch and<br>
>> restore_process_queues_cpsch methods to remove and restore the queues<br>
>> that have been created earlier.<br>
>><br>
>> The fix is to remove the evict and restore calls when the device used is<br>
>> an iGPU. The queues that have been added during the user space <br>
>> application execution<br>
>> time will still be removed when the application exits.<br>
><br>
> As far as I can see that is absolutely not a fix but rather a <br>
> obviously broken workaround.<br>
><br>
> Evicting and restoring queues is usually mandatory for correct operation.<br>
><br>
> So just ignoring that this doesn't work is not something you <br>
> can do.<br>
<br>
I agree. Eviction happens for example in MMU notifiers where we need to <br>
assure the kernel that memory won't be accessed by the GPU once the <br>
notifier returns, until the memory mappings in the GPU page tables can <br>
be revalidated.<br>
<br>
This looks like a crude workaround for an MES firmware problem or some <br>
other kind of intermittent hang that needs to be root-caused. It's a <br>
NACK from me as well.<br>
<br>
Regards,<br>
Felix<br>
<br>
<br>
><br>
> Regards,<br>
> Christian.<br>
><br>
>><br>
>> On every test attempt the crash has always happened at the<br>
>> same location while removing the 2nd queue of 3 with doorbell id 0x1002.<br>
>><br>
>> Below is the trace captured by adding more printouts at the problem<br>
>> location to also print a message when the queue is evicted or restored<br>
>> successfully.<br>
>><br>
>> [ 948.324174] amdgpu 0000:c4:00.0: amdgpu: add_queue_mes added <br>
>> hardware queue to MES, doorbell=0x1202, queue: 2, caller: <br>
>> restore_process_queues_cpsch<br>
>> [ 948.334344] amdgpu 0000:c4:00.0: amdgpu: add_queue_mes added <br>
>> hardware queue to MES, doorbell=0x1002, queue: 1, caller: <br>
>> restore_process_queues_cpsch<br>
>> [ 948.344499] amdgpu 0000:c4:00.0: amdgpu: add_queue_mes added <br>
>> hardware queue to MES, doorbell=0x1000, queue: 0, caller: <br>
>> restore_process_queues_cpsch<br>
>> [ 952.380614] amdgpu 0000:c4:00.0: amdgpu: remove_queue_mes removed <br>
>> hardware queue from MES, doorbell=0x1202, queue: 2, caller: <br>
>> evict_process_queues_cpsch<br>
>> [ 952.391330] amdgpu 0000:c4:00.0: amdgpu: remove_queue_mes removed <br>
>> hardware queue from MES, doorbell=0x1002, queue: 1, caller: <br>
>> evict_process_queues_cpsch<br>
>> [ 952.401634] amdgpu 0000:c4:00.0: amdgpu: remove_queue_mes removed <br>
>> hardware queue from MES, doorbell=0x1000, queue: 0, caller: <br>
>> evict_process_queues_cpsch<br>
>> [ 952.414507] amdgpu 0000:c4:00.0: amdgpu: add_queue_mes added <br>
>> hardware queue to MES, doorbell=0x1202, queue: 2, caller: <br>
>> restore_process_queues_cpsch<br>
>> [ 952.424618] amdgpu 0000:c4:00.0: amdgpu: add_queue_mes added <br>
>> hardware queue to MES, doorbell=0x1002, queue: 1, caller: <br>
>> restore_process_queues_cpsch<br>
>> [ 952.434922] amdgpu 0000:c4:00.0: amdgpu: add_queue_mes added <br>
>> hardware queue to MES, doorbell=0x1000, queue: 0, caller: <br>
>> restore_process_queues_cpsch<br>
>> [ 952.446272] amdgpu 0000:c4:00.0: amdgpu: remove_queue_mes removed <br>
>> hardware queue from MES, doorbell=0x1202, queue: 2, caller: <br>
>> evict_process_queues_cpsch<br>
>> [ 954.460341] amdgpu 0000:c4:00.0: amdgpu: MES failed to respond to <br>
>> msg=REMOVE_QUEUE<br>
>> [ 954.460356] amdgpu 0000:c4:00.0: amdgpu: remove_queue_mes failed <br>
>> to remove hardware queue from MES, doorbell=0x1002, queue: 1, caller: <br>
>> evict_process_queues_cpsch<br>
>> [ 954.460360] amdgpu 0000:c4:00.0: amdgpu: MES might be in <br>
>> unrecoverable state, issue a GPU reset<br>
>> [ 954.460366] amdgpu 0000:c4:00.0: amdgpu: Failed to evict queue 1<br>
>> [ 954.460368] amdgpu 0000:c4:00.0: amdgpu: Failed to evict process <br>
>> queues<br>
>> [ 954.460439] amdgpu 0000:c4:00.0: amdgpu: GPU reset begin!<br>
>> [ 954.460464] amdgpu 0000:c4:00.0: amdgpu: remove_all_queues_mes: <br>
>> Failed to remove queue 0 for dev 5257<br>
>> [ 954.460515] amdgpu 0000:c4:00.0: amdgpu: Dumping IP State<br>
>> [ 954.462637] amdgpu 0000:c4:00.0: amdgpu: Dumping IP State Completed<br>
>> [ 955.865591] amdgpu: process_termination_cpsch started<br>
>> [ 955.866432] amdgpu: process_termination_cpsch started<br>
>> [ 955.866445] amdgpu 0000:c4:00.0: amdgpu: Failed to remove queue 0<br>
>> [ 956.503043] amdgpu 0000:c4:00.0: amdgpu: MES failed to respond to <br>
>> msg=REMOVE_QUEUE<br>
>> [ 956.503059] [drm:amdgpu_mes_unmap_legacy_queue [amdgpu]] *ERROR* <br>
>> failed to unmap legacy queue<br>
>> [ 958.507491] amdgpu 0000:c4:00.0: amdgpu: MES failed to respond to <br>
>> msg=REMOVE_QUEUE<br>
>> [ 958.507507] [drm:amdgpu_mes_unmap_legacy_queue [amdgpu]] *ERROR* <br>
>> failed to unmap legacy queue<br>
>> [ 960.512077] amdgpu 0000:c4:00.0: amdgpu: MES failed to respond to <br>
>> msg=REMOVE_QUEUE<br>
>> [ 960.512093] [drm:amdgpu_mes_unmap_legacy_queue [amdgpu]] *ERROR* <br>
>> failed to unmap legacy queue<br>
>> [ 960.785816] [drm:gfx_v11_0_hw_fini [amdgpu]] *ERROR* failed to <br>
>> halt cp gfx<br>
>><br>
>> Signed-off-by: Mika Laitio <<a href="mailto:lamikr@gmail.com" target="_blank">lamikr@gmail.com</a>><br>
>> ---<br>
>> .../drm/amd/amdkfd/kfd_device_queue_manager.c | 24 ++++++++++++-------<br>
>> 1 file changed, 16 insertions(+), 8 deletions(-)<br>
>><br>
>> diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager.c <br>
>> b/drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager.c<br>
>> index c79fe9069e22..96088d480e09 100644<br>
>> --- a/drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager.c<br>
>> +++ b/drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager.c<br>
>> @@ -1187,9 +1187,12 @@ static int evict_process_queues_cpsch(struct <br>
>> device_queue_manager *dqm,<br>
>> struct kfd_process_device *pdd;<br>
>> int retval = 0;<br>
>> + // gfx1103 APU can fail to remove queue on evict/restore cycle<br>
>> + if (dqm->dev->adev->flags & AMD_IS_APU)<br>
>> + goto out;<br>
>> dqm_lock(dqm);<br>
>> if (qpd->evicted++ > 0) /* already evicted, do nothing */<br>
>> - goto out;<br>
>> + goto out_unlock;<br>
>> pdd = qpd_to_pdd(qpd);<br>
>> @@ -1198,7 +1201,7 @@ static int evict_process_queues_cpsch(struct <br>
>> device_queue_manager *dqm,<br>
>> * Skip queue eviction on process eviction.<br>
>> */<br>
>> if (!pdd->drm_priv)<br>
>> - goto out;<br>
>> + goto out_unlock;<br>
>> pr_debug_ratelimited("Evicting PASID 0x%x queues\n",<br>
>> pdd->process->pasid);<br>
>> @@ -1219,7 +1222,7 @@ static int evict_process_queues_cpsch(struct <br>
>> device_queue_manager *dqm,<br>
>> if (retval) {<br>
>> dev_err(dev, "Failed to evict queue %d\n",<br>
>> q->properties.queue_id);<br>
>> - goto out;<br>
>> + goto out_unlock;<br>
>> }<br>
>> }<br>
>> }<br>
>> @@ -1231,8 +1234,9 @@ static int evict_process_queues_cpsch(struct <br>
>> device_queue_manager *dqm,<br>
>> KFD_UNMAP_QUEUES_FILTER_DYNAMIC_QUEUES, 0,<br>
>> USE_DEFAULT_GRACE_PERIOD);<br>
>> -out:<br>
>> +out_unlock:<br>
>> dqm_unlock(dqm);<br>
>> +out:<br>
>> return retval;<br>
>> }<br>
>> @@ -1326,14 +1330,17 @@ static int <br>
>> restore_process_queues_cpsch(struct device_queue_manager *dqm,<br>
>> uint64_t eviction_duration;<br>
>> int retval = 0;<br>
>> + // gfx1103 APU can fail to remove queue on evict/restore cycle<br>
>> + if (dqm->dev->adev->flags & AMD_IS_APU)<br>
>> + goto out;<br>
>> pdd = qpd_to_pdd(qpd);<br>
>> dqm_lock(dqm);<br>
>> if (WARN_ON_ONCE(!qpd->evicted)) /* already restored, do <br>
>> nothing */<br>
>> - goto out;<br>
>> + goto out_unlock;<br>
>> if (qpd->evicted > 1) { /* ref count still > 0, decrement & <br>
>> quit */<br>
>> qpd->evicted--;<br>
>> - goto out;<br>
>> + goto out_unlock;<br>
>> }<br>
>> /* The debugger creates processes that temporarily have not <br>
>> acquired<br>
>> @@ -1364,7 +1371,7 @@ static int restore_process_queues_cpsch(struct <br>
>> device_queue_manager *dqm,<br>
>> if (retval) {<br>
>> dev_err(dev, "Failed to restore queue %d\n",<br>
>> q->properties.queue_id);<br>
>> - goto out;<br>
>> + goto out_unlock;<br>
>> }<br>
>> }<br>
>> }<br>
>> @@ -1375,8 +1382,9 @@ static int restore_process_queues_cpsch(struct <br>
>> device_queue_manager *dqm,<br>
>> atomic64_add(eviction_duration, &pdd->evict_duration_counter);<br>
>> vm_not_acquired:<br>
>> qpd->evicted = 0;<br>
>> -out:<br>
>> +out_unlock:<br>
>> dqm_unlock(dqm);<br>
>> +out:<br>
>> return retval;<br>
>> }<br>
><br>
</blockquote></div>