[PATCH] drm/xe/guc/tlb: Flush g2h worker in case of tlb timeout

Nirmoy Das nirmoy.das at intel.com
Thu Oct 24 13:22:04 UTC 2024


On 10/24/2024 3:11 PM, Matthew Auld wrote:
> On 24/10/2024 14:00, Nilawar, Badal wrote:
>>
>>
>> On 24-10-2024 15:47, Nirmoy Das wrote:
>>>
>>> On 10/24/2024 12:02 PM, Nilawar, Badal wrote:
>>>>
>>>>
>>>> On 23-10-2024 20:43, Nirmoy Das wrote:
>>>>> Flush the g2h worker explicitly if a TLB timeout happens, which is
>>>>> observed on LNL and points to a recent scheduling issue with E-cores.
>>>>> This is similar to the recent fix:
>>>>> commit e51527233804 ("drm/xe/guc/ct: Flush g2h worker in case of g2h
>>>>> response timeout") and should be removed once there is an E-core
>>>>> scheduling fix.
>>>>>
>>>>> Link: https://gitlab.freedesktop.org/drm/xe/kernel/-/issues/2687
>>>>> Cc: Badal Nilawar <badal.nilawar at intel.com>
>>>>> Cc: Matthew Brost <matthew.brost at intel.com>
>>>>> Cc: Matthew Auld <matthew.auld at intel.com>
>>>>> Cc: John Harrison <John.C.Harrison at Intel.com>
>>>>> Cc: Himal Prasad Ghimiray <himal.prasad.ghimiray at intel.com>
>>>>> Cc: Lucas De Marchi <lucas.demarchi at intel.com>
>>>>> Signed-off-by: Nirmoy Das <nirmoy.das at intel.com>
>>>>> ---
>>>>>    drivers/gpu/drm/xe/xe_gt_tlb_invalidation.c | 9 +++++++++
>>>>>    1 file changed, 9 insertions(+)
>>>>>
>>>>> diff --git a/drivers/gpu/drm/xe/xe_gt_tlb_invalidation.c b/drivers/gpu/drm/xe/xe_gt_tlb_invalidation.c
>>>>> index 773de1f08db9..2c327dccbd74 100644
>>>>> --- a/drivers/gpu/drm/xe/xe_gt_tlb_invalidation.c
>>>>> +++ b/drivers/gpu/drm/xe/xe_gt_tlb_invalidation.c
>>>>> @@ -72,6 +72,15 @@ static void xe_gt_tlb_fence_timeout(struct work_struct *work)
>>>>>        struct xe_device *xe = gt_to_xe(gt);
>>>>>        struct xe_gt_tlb_invalidation_fence *fence, *next;
>>>>>    +    /*
>>>>> +     * This is analogous to e51527233804 ("drm/xe/guc/ct: Flush g2h worker
>>>>> +     * in case of g2h response timeout")
>>>>> +     *
>>>>> +     * TODO: Drop this change once the workqueue scheduling delay issue
>>>>> +     * is fixed on the LNL hybrid CPU.
>>>>> +     */
>>>>> +    flush_work(&gt->uc.guc.ct.g2h_worker);
>>>>
>>>> I didn't get the idea of flushing the g2h worker here. Moreover, AFAIK TLB invalidation is handled in the fast path, xe_guc_ct_fast_path(), i.e. in the IRQ handler itself. Is this change actually solving the issue?
>>>
>>> AFAIU the g2h worker can also handle the TLB_INVALIDATION_DONE message from GuC (process_g2h_msg). This indeed fixes the issue for me on LNL.
>>
>> Agreed, it is also handled in the slow path, but upon receiving an IRQ it will be managed in the fast path.
>> So I suspect this is a case of a G2H interrupt miss rather than a G2H worker delay due to the efficient cores in LNL.
>> For now, this change can proceed as it is helping, but considering the possibility of an interrupt miss, I suggest debugging from that perspective.
>> In another thread, Himal mentioned that this issue is also observed on BMG, which strengthens the possibility of a G2H interrupt miss.
>
> Note that we currently still process G2H events in order, so if there is something earlier in the queue that can't be safely processed in the IRQ, we leave it to the worker to handle. That means we might get an IRQ for the TLB invalidation completion and yet be unable to process it in the IRQ.


Interesting. I haven't tried it on BMG/DG2 yet, but this issue appears very quickly on LNL. Leaving TLB-done handling to the worker would hit the LNL scheduling issue more often there than on other platforms.
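
To make the in-order point concrete, here is a minimal, self-contained userspace sketch. It is not the real xe CT code: the message types, the notion of an "IRQ-safe" message, and the fixed-size queue are simplifying assumptions, only the ordering behaviour is the point.

#include <stdbool.h>
#include <stdio.h>

enum g2h_type {
	G2H_SCHED_CONTEXT_DONE,   /* stand-in for a message the IRQ path must defer */
	G2H_TLB_INVALIDATION_DONE /* the message the TLB fence is waiting for */
};

struct g2h_msg {
	enum g2h_type type;
};

/* Assumption: only TLB_INVALIDATION_DONE may be completed from IRQ context. */
static bool irq_safe(enum g2h_type t)
{
	return t == G2H_TLB_INVALIDATION_DONE;
}

/*
 * "Fast path": consume messages strictly in order from the head and stop at
 * the first one that cannot be handled in IRQ context.  Everything behind
 * it, even an IRQ-safe TLB_INVALIDATION_DONE, is left for the worker.
 */
static int fast_path(const struct g2h_msg *q, int head, int tail)
{
	while (head < tail && irq_safe(q[head].type)) {
		printf("irq:    handled message %d\n", q[head].type);
		head++;
	}
	return head; /* whatever remains is the worker's job */
}

/* "g2h worker": drains whatever the fast path left behind. */
static int worker(const struct g2h_msg *q, int head, int tail)
{
	while (head < tail) {
		printf("worker: handled message %d\n", q[head].type);
		head++;
	}
	return head;
}

int main(void)
{
	struct g2h_msg q[] = {
		{ G2H_SCHED_CONTEXT_DONE },    /* not IRQ-safe, blocks the queue */
		{ G2H_TLB_INVALIDATION_DONE }, /* stuck behind it */
	};
	int head = 0, tail = 2;

	head = fast_path(q, head, tail); /* stops at the first message */
	/*
	 * If the worker is starved (e.g. scheduled late on an E-core), the
	 * TLB fence times out even though the completion already arrived.
	 * Flushing the worker on timeout effectively forces this call to run:
	 */
	head = worker(q, head, tail);
	return 0;
}

In that picture, the flush_work() added in xe_gt_tlb_fence_timeout() simply forces the worker step to finish before the fence is treated as timed out.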


Thanks,

Nirmoy

>
>>
>> Regards,
>> Badal
>>
>>>
>>>
>>> Regards,
>>>
>>> Nirmoy
>>>
>>>>
>>>> static inline void xe_guc_ct_irq_handler(struct xe_guc_ct *ct)
>>>> {
>>>>          if (!xe_guc_ct_enabled(ct))
>>>>                  return;
>>>>
>>>>          wake_up_all(&ct->wq);
>>>>          queue_work(ct->g2h_wq, &ct->g2h_worker);
>>>>          xe_guc_ct_fast_path(ct);
>>>> }
>>>>
>>>> Regards,
>>>> Badal
>>>>
>>>>> +
>>>>>        spin_lock_irq(&gt->tlb_invalidation.pending_lock);
>>>>>        list_for_each_entry_safe(fence, next,
>>>>>                     &gt->tlb_invalidation.pending_fences, link) {
>>>>
>>

