[PATCH] drm/xe/guc: In guc_ct_send_recv flush g2h worker if g2h resp times out

Nilawar, Badal badal.nilawar at intel.com
Tue Oct 1 17:49:53 UTC 2024



On 01-10-2024 13:41, Nilawar, Badal wrote:
> 
> 
> On 28-09-2024 02:57, Matthew Brost wrote:
>> On Sat, Sep 28, 2024 at 12:54:28AM +0530, Badal Nilawar wrote:
>>> It is observed that for a GuC CT request the G2H IRQ is triggered and
>>> the g2h_worker is queued, but it doesn't get an opportunity to execute
>>> before the timeout occurs. To address this, the g2h_worker is flushed.
>>>
>>> Cc: John Harrison <John.C.Harrison at Intel.com>
>>> Signed-off-by: Badal Nilawar <badal.nilawar at intel.com>
>>> ---
>>>   drivers/gpu/drm/xe/xe_guc_ct.c | 11 +++++++++++
>>>   1 file changed, 11 insertions(+)
>>>
>>> diff --git a/drivers/gpu/drm/xe/xe_guc_ct.c b/drivers/gpu/drm/xe/xe_guc_ct.c
>>> index 4b95f75b1546..4a5d7f85d1a0 100644
>>> --- a/drivers/gpu/drm/xe/xe_guc_ct.c
>>> +++ b/drivers/gpu/drm/xe/xe_guc_ct.c
>>> @@ -903,6 +903,17 @@ static int guc_ct_send_recv(struct xe_guc_ct *ct, const u32 *action, u32 len,
>>>       }
>>>       ret = wait_event_timeout(ct->g2h_fence_wq, g2h_fence.done, HZ);
>>> +
>>> +    /*
>>> +     * It is observed that for the above GuC CT request the G2H IRQ is triggered
>>
>> Where is this observed? 1 second is a long time to wait for a worker...
> 
> Please see this log.
> 
> [  176.602482] xe 0000:00:02.0: [drm:xe_guc_pc_get_min_freq [xe]] GT0: GT[0] GuC PC status query
> [  176.603019] xe 0000:00:02.0: [drm:xe_guc_irq_handler [xe]] GT0: G2H IRQ GT[0]
> [  176.603449] xe 0000:00:02.0: [drm:g2h_worker_func [xe]] GT0: G2H work running GT[0]
> [  176.604379] xe 0000:00:02.0: [drm:xe_guc_pc_get_max_freq [xe]] GT0: GT[0] GuC PC status query
> [  176.605464] xe 0000:00:02.0: [drm:xe_guc_irq_handler [xe]] GT0: G2H IRQ GT[0]
> [  176.605821] xe 0000:00:02.0: [drm:g2h_worker_func [xe]] GT0: G2H work running GT[0]
> [  176.716699] xe 0000:00:02.0: [drm] GT0: trying reset
> [  176.716718] xe 0000:00:02.0: [drm] GT0: GuC PC status query  // GuC PC check request
> [  176.717648] xe 0000:00:02.0: [drm:xe_guc_irq_handler [xe]] GT0: G2H IRQ GT[0]  // IRQ
> [  177.728637] xe 0000:00:02.0: [drm] *ERROR* GT0: Timed out wait for G2H, fence 1311, action 3003  // Timeout
> [  177.737637] xe 0000:00:02.0: [drm] *ERROR* GT0: GuC PC query task state failed: -ETIME
> [  177.745644] xe 0000:00:02.0: [drm] GT0: reset queued
> [  177.849081] xe 0000:00:02.0: [drm:xe_guc_pc_get_min_freq [xe]] GT0: GT[0] GuC PC status query
> [  177.849659] xe 0000:00:02.0: [drm:xe_guc_irq_handler [xe]] GT0: G2H IRQ GT[0]
> [  178.632672] xe 0000:00:02.0: [drm] GT0: reset started
> [  178.632639] xe 0000:00:02.0: [drm:g2h_worker_func [xe]] GT0: G2H work running GT[0]  // Worker ran
> [  178.632897] xe 0000:00:02.0: [drm] GT0: G2H fence (1311) not found!
> 
>>
>>> +     * and the g2h_worker is queued, but it doesn't get an opportunity
>>> +     * to execute before the timeout occurs. To address this, the
>>> +     * g2h_worker is flushed.
>>> +     */
>>> +    if (!ret) {
>>> +        flush_work(&ct->g2h_worker);
>>> +        ret = wait_event_timeout(ct->g2h_fence_wq, g2h_fence.done, HZ);
>>
>> If this is needed I wouldn't wait 1 second; if the flush worked,
>> 'g2h_fence.done' should be signaled immediately. Maybe wait 1 ms?
> 
> In the config HZ is set to 250, so one jiffy is 4 ms I think.
> 
> CONFIG_HZ_250=y
> # CONFIG_HZ_300 is not set
> # CONFIG_HZ_1000 is not set
> CONFIG_HZ=250

Got your point. Since flush_work() waits for the last queued instance of
the work to finish, 1 ms is enough.
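
Something along these lines, then - a rough sketch of what v2 might look
like, not the final patch (I'm assuming msecs_to_jiffies(1) is the
conversion you had in mind for the short re-check):

    ret = wait_event_timeout(ct->g2h_fence_wq, g2h_fence.done, HZ);
    if (!ret) {
        /*
         * The G2H IRQ may have queued the worker without it getting
         * CPU time before the timeout. flush_work() runs the queued
         * handler to completion, so by the time it returns
         * g2h_fence.done should already be set; the short re-check
         * below only needs to pick up the wakeup.
         */
        flush_work(&ct->g2h_worker);
        ret = wait_event_timeout(ct->g2h_fence_wq, g2h_fence.done,
                                 msecs_to_jiffies(1));
    }

(msecs_to_jiffies(1) rounds up to one jiffy, i.e. 4 ms at HZ=250, which
is still far shorter than the original one-second wait.)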

-Badal

> 
> Regards,
> Badal
> 
>>
>> Matt
>>
>>> +    }
>>> +
>>>       if (!ret) {
>>>           xe_gt_err(gt, "Timed out wait for G2H, fence %u, action %04x",
>>>                 g2h_fence.seqno, action[0]);
>>> -- 
>>> 2.34.1
>>>
> 


