[PATCH] drm/xe/vm: Keep the device awake for TLB inval

Nirmoy Das nirmoy.das at intel.com
Wed Jul 17 12:36:37 UTC 2024


On 7/16/2024 6:32 PM, Matthew Brost wrote:
> On Tue, Jul 16, 2024 at 06:25:01PM +0200, Nirmoy Das wrote:
>>     Hi Matt,
>>
> Outlook reply? Prefer a Linux email client for the list for proper threading.
>
>>     On 7/16/2024 5:45 PM, Matthew Brost wrote:
>>
>> On Tue, Jul 16, 2024 at 03:38:55PM +0200, Nirmoy Das wrote:
>>
>> The GT can suspend while a TLB invalidation is still in flight in the
>> background, which then causes a TLB invalidation timeout. Keep the device
>> awake when using a fence that doesn't wait for the TLB invalidation to
>> finish.
>>
>> Cc: Matthew Brost <matthew.brost at intel.com>
>> Signed-off-by: Nirmoy Das <nirmoy.das at intel.com>
>>
>> + Rodrigo, our local PM expert.
>>
>>
>> ---
>> Adding ftrace output here for more information:
>>
>> xe_pm-18095   [001] .....  3493.481048: xe_vma_unbind: dev=0000:00:02.0, vma=ffff8881c3062b00, asid=0x0000f, start=0x0000001a0000, end=0x0000001a1fff, userptr=0x000000000000,
>> xe_pm-18095   [001] .....  3493.481063: xe_vm_cpu_bind: dev=0000:00:02.0, vm=ffff88812a00d000, asid=0x0000f
>> xe_pm-18095   [001] .....  3493.481093: xe_gt_tlb_invalidation_fence_create: dev=0000:00:02.0, fence=ffff88811bf3d000, seqno=0
>> xe_pm-18095   [001] .....  3493.481095: xe_gt_tlb_invalidation_fence_work_func: dev=0000:00:02.0, fence=ffff88811bf3d000, seqno=0
>> xe_pm-18095   [001] .....  3493.481097: xe_gt_tlb_invalidation_fence_send: dev=0000:00:02.0, fence=ffff88811bf3d000, seqno=93
>> xe_pm-18095   [001] d..1.  3493.481097: xe_guc_ctb_h2g: H2G CTB: dev=0000:00:02.0, gt0: action=0x7000, len=8, tail=44, head=36
>> kworker/1:2-17900   [001] .....  3493.481302: xe_exec_queue_stop: dev=0000:00:02.0, 3:0x2, gt=0, width=1, guc_id=0, guc_state=0x0, flags=0x13
>> kworker/1:2-17900   [001] .....  3493.481303: xe_exec_queue_stop: dev=0000:00:02.0, 3:0x1, gt=0, width=1, guc_id=1, guc_state=0x0, flags=0x4
>> kworker/1:2-17900   [001] .....  3493.481305: xe_exec_queue_stop: dev=0000:00:02.0, 0:0x1, gt=0, width=1, guc_id=2, guc_state=0x0, flags=0x0
>> xe_pm-18095   [001] .....  3493.756294: xe_guc_ctb_h2g: H2G CTB: dev=0000:00:02.0, gt0: action=0x3003, len=5, tail=5, head=0
>> xe_pm-18095   [001] d..1.  3493.756470: xe_guc_ctb_h2g: H2G CTB: dev=0000:00:02.0, gt0: action=0x3003, len=5, tail=10, head=5
>> kworker/u32:1-17912   [006] d..1.  3493.756535: xe_guc_ctb_g2h: G2H CTB: dev=0000:00:02.0, gt0: action=0x0, len=2, tail=2, head=2
>> xe_pm-18095   [001] .....  3493.756557: xe_guc_ctb_h2g: H2G CTB: dev=0000:00:02.0, gt0: action=0x3003, len=5, tail=15, head=10
>> xe_pm-18095   [001] .....  3493.756559: xe_guc_ctb_h2g: H2G CTB: dev=0000:00:02.0, gt0: action=0x3004, len=3, tail=18, head=10
>> kworker/1:2-17900   [001] d..1.  3497.951783: xe_gt_tlb_invalidation_fence_timeout: dev=0000:00:02.0, fence=ffff88811bf3d000, seqno=93
>>
>>
>> How do you know from this that the device is suspending? I can't tell
>> that it is happening. I do think this raises a good point: suspend /
>> resume events should be added to ftrace, as that is useful information.
>>
>>     xe_exec_queue_stop() was coming from the xe runtime suspend code. I am
>>     pretty sure about it, but I can double-check.
>>
> That would be a good idea.


xe_pm-69228   [003] .....  7390.584812: xe_vma_unbind: dev=0000:00:02.0, vma=ffff888132716e00, asid=0x00027, start=0x0000001a0000, end=0x0000001a1fff, userptr=0x000000000000,
xe_pm-69228   [003] .....  7390.584834: xe_vm_cpu_bind: dev=0000:00:02.0, vm=ffff8881a00f0800, asid=0x00027
xe_pm-69228   [003] .....  7390.584871: xe_gt_tlb_invalidation_fence_create: dev=0000:00:02.0, fence=ffff88813270b400, seqno=0
xe_pm-69228   [003] .....  7390.584874: xe_gt_tlb_invalidation_fence_work_func: dev=0000:00:02.0, fence=ffff88813270b400, seqno=0
xe_pm-69228   [003] .....  7390.584875: xe_gt_tlb_invalidation_fence_send: dev=0000:00:02.0, fence=ffff88813270b400, seqno=213

xe_pm-69228   [003] d..1.  7390.584877: xe_guc_ctb_h2g: H2G CTB: dev=0000:00:02.0, gt0: action=0x7000, len=8, tail=44, head=36
xe_pm-69228   [003] .....  7390.585030: xe_pm_runtime_put: dev=0000:00:02.0 caller_function=xe_drm_ioctl+0xfd/0x140 [xe]
kworker/3:6-69022   [003] .....  7390.585050: xe_pm_runtime_suspend: dev=0000:00:02.0 caller_function=xe_pci_runtime_suspend+0x3f/0x120 [xe]

kworker/3:6-69022   [003] .....  7390.585134: xe_exec_queue_stop: dev=0000:00:02.0, 3:0x2, gt=0, width=1, guc_id=0, guc_state=0x0, flags=0x13
kworker/3:6-69022   [003] .....  7390.585138: xe_exec_queue_stop: dev=0000:00:02.0, 3:0x1, gt=0, width=1, guc_id=1, guc_state=0x0, flags=0x4

xe_pm-69228   [003] .N...  7390.585171: xe_pm_runtime_get_ioctl: dev=0000:00:02.0 caller_function=xe_drm_ioctl+0xdc/0x140 [xe]
kworker/3:6-69022   [003] .....  7390.585622: xe_exec_queue_stop: dev=0000:00:02.0, 2:0x1, gt=1, width=1, guc_id=0, guc_state=0x0, flags=0x0
xe_pm-69228   [003] .....  7390.610680: xe_pm_runtime_resume: dev=0000:00:02.0 caller_function=xe_pci_runtime_resume+0xb8/0xe0 [xe]
xe_pm-69228   [003] .....  7390.623993: xe_guc_ctb_h2g: H2G CTB: dev=0000:00:02.0, gt0: action=0x3003, len=5, tail=5, head=0

This confirms that the device did indeed go to sleep after sending the
TLB invalidation.


Regards,

Nirmoy

>   Any chance you want to try adding some useful
> ftrace points to get full visibility into the suspend / resume flows?
>   
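
For reference, a minimal sketch of what such a tracepoint could look like,
loosely modeled on the xe_pm_runtime_* events in the trace above. The event
definition below and the __builtin_return_address(0) trick for recording the
caller are assumptions for illustration, not the actual xe trace code (the
usual TRACE_SYSTEM/define_trace header boilerplate is omitted):

#include <linux/tracepoint.h>

/* Sketch only: log which function drove a runtime PM transition. */
TRACE_EVENT(xe_pm_runtime_suspend,
	    TP_PROTO(struct xe_device *xe, void *caller),
	    TP_ARGS(xe, caller),

	    TP_STRUCT__entry(
		    __string(dev, dev_name(xe->drm.dev))
		    __field(void *, caller)
		    ),

	    TP_fast_assign(
		    __assign_str(dev, dev_name(xe->drm.dev));
		    __entry->caller = caller;
		    ),

	    TP_printk("dev=%s caller_function=%pS",
		      __get_str(dev), __entry->caller)
);

Calling trace_xe_pm_runtime_suspend(xe, __builtin_return_address(0)) at the
top of xe_pm_runtime_suspend() would then report whoever invoked it, which
matches the caller_function=xe_pci_runtime_suspend+0x3f/0x120 line in the
trace above.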
>>
>>   drivers/gpu/drm/xe/xe_vm.c | 2 ++
>>   1 file changed, 2 insertions(+)
>>
>> diff --git a/drivers/gpu/drm/xe/xe_vm.c b/drivers/gpu/drm/xe/xe_vm.c
>> index b6932cc98ff9..241b7ea00d5f 100644
>> --- a/drivers/gpu/drm/xe/xe_vm.c
>> +++ b/drivers/gpu/drm/xe/xe_vm.c
>> @@ -2700,6 +2700,7 @@ static int vm_bind_ioctl_ops_execute(struct xe_vm *vm,
>>          struct dma_fence *fence;
>>          int err;
>>
>> +       xe_pm_runtime_get(vm->xe);
>>
>> While I agree the device shouldn't enter suspend while TLB invalidations
>> are in flight, I don't think this patch will help with this.
>>
>> This code path is called in various places where we should already have a
>> PM ref (VM bind IOCTL, exec IOCTL for rebind, or the preempt rebind
>> worker). If we don't have a PM ref when this function is called, that is
>> a bug that needs to be fixed at the outermost layers. Beyond that, GT TLB
>> invalidations are async and pipelined (e.g. they can be sent after this
>> function returns, and completion can occur sometime later).
>>
>> With this, I believe the correct place to fix this is either in the CT
>> layer or perhaps hooked into the GT TLB invalidation fence (arming of the
>> fence takes a ref, signaling of the fence drops a ref).
>>
>>     I was planning to send something simpler:
>>
>>     send_tlb_invalidation() -->   xe_pm_runtime_get(xe);
>>
>>     xe_gt_tlb_fence_timeout() --> xe_pm_runtime_put(xe);
>>
>>     __invalidation_fence_signal() --> xe_pm_runtime_put(xe);
> The problem with this is that fences are currently used everywhere in
> the current code base, so we'd have an imbalance. That changes with [2],
> but even then __invalidation_fence_signal wouldn't be used in the same
> places. Thus building it directly into the fence would make sense to me.
> There are concerns about fences signaling in IRQ contexts though. I've
> pinged Rodrigo about this off the list; let's see what he thinks.
>
> Matt
>
> [2] https://patchwork.freedesktop.org/patch/602562/?series=135809&rev=2
>
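
For concreteness, a rough sketch of that fence-built-in idea: arming the
fence pins the device, signaling unpins it. The helper names here are
illustrative rather than the actual xe functions, and this assumes
xe_pm_runtime_put() is safe from the (possibly IRQ) fence-signaling context:

/* Sketch only: hold a runtime PM ref for the lifetime of an armed fence. */
static void invalidation_fence_arm(struct xe_gt *gt,
				   struct xe_gt_tlb_invalidation_fence *fence)
{
	/* The device must already be awake when an invalidation is issued,
	 * so a no-resume get is enough to block suspend until signaling. */
	xe_pm_runtime_get_noresume(gt_to_xe(gt));
}

static void invalidation_fence_signal(struct xe_device *xe,
				      struct xe_gt_tlb_invalidation_fence *fence)
{
	dma_fence_signal(&fence->base);
	/* The put only queues an idle re-check rather than suspending
	 * synchronously, so it should be fine from IRQ context too. */
	xe_pm_runtime_put(xe);
}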
>>
>>     But that seemed like too low a layer for power management calls. If
>>     TLB invalidation is pipelined, though, then I agree we have to fix
>>     this at a lower layer, but probably not down at the CT layer.
>>
>>   If we choose the latter
>> option, I think the following series will help, as we will use GT TLB
>> invalidation fences everywhere for waits [1].
>>
>>     Regards,
>>
>>     Nirmoy
>>
>>
>> Rodrigo - I know we had talked about something like the above, but it
>> doesn't appear this has been implemented. WIP, or did this get lost in
>> the PM work?
>>
>> Matt
>>
>> [1] https://patchwork.freedesktop.org/series/135809/
>>
>>
>>          lockdep_assert_held_write(&vm->lock);
>>
>>          drm_exec_init(&exec, DRM_EXEC_INTERRUPTIBLE_WAIT |
>> @@ -2721,6 +2722,7 @@ static int vm_bind_ioctl_ops_execute(struct xe_vm *vm,
>>
>>   unlock:
>>          drm_exec_fini(&exec);
>> +       xe_pm_runtime_put(vm->xe);
>>          return err;
>>   }
>>
>> --
>> 2.42.0
>>

