[PATCH] drm/xe: Get XE_FORCEWAKE_ALL to read RING_TIMESTAMP

Umesh Nerlige Ramappa umesh.nerlige.ramappa at intel.com
Tue Jun 25 02:42:19 UTC 2024


On Mon, Jun 24, 2024 at 05:43:11PM -0700, Umesh Nerlige Ramappa wrote:
>On Mon, Jun 24, 2024 at 09:44:03AM -0500, Lucas De Marchi wrote:
>>On Sat, Jun 22, 2024 at 03:23:25AM GMT, Umesh Nerlige Ramappa wrote:
>>>Per client engine utilization uses RING_TIMESTAMP to return
>>>drm-total-cycles to the user. We read only the rcs RING_TIMESTAMP on
>>>platforms that have render. While this works for rcs and ccs, it is
>>>observed that this value is 0 when running work on bcs, vcs and vecs.
>>>Ideally we should read the engine-specific RING_TIMESTAMP, but
>>
>>what do you mean by engine-specific RING_TIMESTAMP? We are already
>>reading the engine-specific copy of the same timestamp.
>
>We read RING_TIMESTAMP from the first available engine, which in my
>case was rcs (DG2).
>
>>AFAIR, just getting the GT forcewake worked for reading this register
>>from vcs, vecs and bcs.
>
>Well, the issue could be DG2-specific; I haven't tried other platforms.
>Which platforms did you run it on? I will try to use the same.
>
>>Can you demonstrate the failure with a bug open or a series of
>>intel_reg invocations showing otherwise?
>
>This was found while running tests in this series, so I don't see any 
>open bugs.
>
>https://patchwork.freedesktop.org/series/135204/
>
>xe_drm_fdinfo --r drm-busy-idle --debug

Overlooked this earlier: it looks like CI already ran it, so you can see
the bug here (bcs total cycles are 0).

https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_11297/shard-dg2-432/igt@xe_drm_fdinfo@drm-busy-idle.html

Regards,
Umesh

>
>>
>>I think the simplest way to show it doesn't work is to call
>>`intel_reg write` to get a GT forcewake, then `intel_reg read` to read
>>this register.
>
>I don't know how to use this on Xe; I get a SIGSEGV.
>
>sudo ./intel_reg write 0x2358 0x00c0ffee
>Received signal SIGSEGV.
>Stack trace:
> #0 [fatal_sig_handler+0x175]
> #1 [__sigaction+0x50]
> #2 [intel_reg_write+0x589]
> #3 [main+0x339]
> #4 [__libc_init_first+0x90]
> #5 [__libc_start_main+0x80]
> #6 [_start+0x25]
>Segmentation fault (core dumped)
>
>Regards,
>Umesh
>
>>
>>Lucas De Marchi
>>
>>>to keep the logic simple, just get XE_FORCEWAKE_ALL instead of XE_FW_GT.
>>>
>>>This should work fine on multi-gt platforms as well since the
>>>gt_timestamp is in sync on all GTs.
>>>
>>>Fixes: 188ced1e0ff8 ("drm/xe/client: Print runtime to fdinfo")
>>>Signed-off-by: Umesh Nerlige Ramappa <umesh.nerlige.ramappa at intel.com>
>>>---
>>>drivers/gpu/drm/xe/xe_drm_client.c | 4 ++--
>>>1 file changed, 2 insertions(+), 2 deletions(-)
>>>
>>>diff --git a/drivers/gpu/drm/xe/xe_drm_client.c b/drivers/gpu/drm/xe/xe_drm_client.c
>>>index 4a19b771e3a0..74f2244679f3 100644
>>>--- a/drivers/gpu/drm/xe/xe_drm_client.c
>>>+++ b/drivers/gpu/drm/xe/xe_drm_client.c
>>>@@ -264,9 +264,9 @@ static void show_run_ticks(struct drm_printer *p, struct drm_file *file)
>>>		if (!hwe)
>>>			continue;
>>>
>>>-		xe_force_wake_get(gt_to_fw(gt), XE_FW_GT);
>>>+		xe_force_wake_get(gt_to_fw(gt), XE_FORCEWAKE_ALL);
>>>		gpu_timestamp = xe_hw_engine_read_timestamp(hwe);
>>>-		xe_force_wake_put(gt_to_fw(gt), XE_FW_GT);
>>>+		xe_force_wake_put(gt_to_fw(gt), XE_FORCEWAKE_ALL);
>>>		break;
>>>	}
>>>
>>>-- 
>>>2.34.1
>>>

