[PATCH] drm/xe/guc/tlb: Flush g2h worker in case of tlb timeout
Nilawar, Badal
badal.nilawar at intel.com
Thu Oct 24 13:00:56 UTC 2024
On 24-10-2024 15:47, Nirmoy Das wrote:
>
> On 10/24/2024 12:02 PM, Nilawar, Badal wrote:
>>
>>
>> On 23-10-2024 20:43, Nirmoy Das wrote:
>>> Flush the g2h worker explicitly if a TLB timeout happens, which is
>>> observed on LNL and points to a recent scheduling issue with E-cores.
>>> This is similar to the recent fix:
>>> commit e51527233804 ("drm/xe/guc/ct: Flush g2h worker in case of g2h
>>> response timeout") and should be removed once there is an E-core
>>> scheduling fix.
>>>
>>> Link: https://gitlab.freedesktop.org/drm/xe/kernel/-/issues/2687
>>> Cc: Badal Nilawar <badal.nilawar at intel.com>
>>> Cc: Matthew Brost <matthew.brost at intel.com>
>>> Cc: Matthew Auld <matthew.auld at intel.com>
>>> Cc: John Harrison <John.C.Harrison at Intel.com>
>>> Cc: Himal Prasad Ghimiray <himal.prasad.ghimiray at intel.com>
>>> Cc: Lucas De Marchi <lucas.demarchi at intel.com>
>>> Signed-off-by: Nirmoy Das <nirmoy.das at intel.com>
>>> ---
>>> drivers/gpu/drm/xe/xe_gt_tlb_invalidation.c | 9 +++++++++
>>> 1 file changed, 9 insertions(+)
>>>
>>> diff --git a/drivers/gpu/drm/xe/xe_gt_tlb_invalidation.c b/drivers/gpu/drm/xe/xe_gt_tlb_invalidation.c
>>> index 773de1f08db9..2c327dccbd74 100644
>>> --- a/drivers/gpu/drm/xe/xe_gt_tlb_invalidation.c
>>> +++ b/drivers/gpu/drm/xe/xe_gt_tlb_invalidation.c
>>> @@ -72,6 +72,15 @@ static void xe_gt_tlb_fence_timeout(struct work_struct *work)
>>> struct xe_device *xe = gt_to_xe(gt);
>>> struct xe_gt_tlb_invalidation_fence *fence, *next;
>>> + /*
>>> + * This is analogous to e51527233804 ("drm/xe/guc/ct: Flush g2h worker
>>> + * in case of g2h response timeout")
>>> + *
>>> + * TODO: Drop this change once workqueue scheduling delay issue is
>>> + * fixed on LNL Hybrid CPU.
>>> + */
>>> + flush_work(&gt->uc.guc.ct.g2h_worker);
>>
>> I didn't get the idea of flushing the g2h worker here. Moreover, AFAIK TLB invalidation is handled in the fast path (xe_guc_ct_fast_path), i.e. in the IRQ handler itself. Is this change solving the issue?
>
> AFAIU the g2h worker can also handle the TLB_INVALIDATION_DONE message from the GuC (process_g2h_msg). This indeed fixes the issue for me on LNL.
Agreed, it is also handled in the slow path, but upon receiving an
IRQ it will be handled in the fast path.
So I suspect this is a case of a G2H interrupt miss rather than a G2H
worker delay due to the efficiency cores (E-cores) in LNL.
For now, this change can proceed as it is helping out, but considering
the possibility of an interrupt miss, I suggest debugging from that
perspective.
In another thread, Himal mentioned that this issue is also observed on
BMG, which strengthens the possibility of a G2H interrupt miss.
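
For reference, a condensed sketch of the two delivery paths being discussed,
pieced together from the quoted xe_guc_ct_irq_handler() and the patch above.
This is a paraphrase with bodies elided, not the exact upstream code:

    /*
     * A TLB_INVALIDATION_DONE G2H message can be consumed either in the
     * IRQ (fast) path or by the g2h_worker (slow path). If the fast path
     * misses it and the worker is delayed, the timeout handler fires;
     * flushing the worker there lets an already-queued message complete
     * before the pending fences are failed.
     */
    static inline void xe_guc_ct_irq_handler(struct xe_guc_ct *ct)
    {
            if (!xe_guc_ct_enabled(ct))
                    return;

            wake_up_all(&ct->wq);
            queue_work(ct->g2h_wq, &ct->g2h_worker); /* slow path */
            xe_guc_ct_fast_path(ct);                 /* fast path */
    }

    static void xe_gt_tlb_fence_timeout(struct work_struct *work)
    {
            /* ... resolve gt from work as in the patch context ... */

            /* Proposed change: drain the slow path before failing fences */
            flush_work(&gt->uc.guc.ct.g2h_worker);

            /* ... walk gt->tlb_invalidation.pending_fences and time out ... */
    }
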
Regards,
Badal
>
>
> Regards,
>
> Nirmoy
>
>>
>> static inline void xe_guc_ct_irq_handler(struct xe_guc_ct *ct)
>> {
>> if (!xe_guc_ct_enabled(ct))
>> return;
>>
>> wake_up_all(&ct->wq);
>> queue_work(ct->g2h_wq, &ct->g2h_worker);
>> xe_guc_ct_fast_path(ct);
>> }
>>
>> Regards,
>> Badal
>>
>>> +
>>> spin_lock_irq(&gt->tlb_invalidation.pending_lock);
>>> list_for_each_entry_safe(fence, next,
>>> &gt->tlb_invalidation.pending_fences, link) {
>>