[PATCH i-g-t] tests/intel/xe_drm_fdinfo: Extend mercy to the upper end
Lucas De Marchi
lucas.demarchi at intel.com
Sat Aug 24 21:24:37 UTC 2024
When we are processing the fdinfo of each client, the gpu time is read
first, and only later are all the exec queue times accumulated. It's
thus possible for the total gpu time to be smaller than the time
reported by the exec queues. A preemption in the middle of the second
sample would exaggerate the problem:
                            total_cycles     cycles
  s1: read gpu time              |
  s1: read exec queue times      |              *
                                 |              *
  ...                            |              *
                                 |              *
  s2: read gpu time              |              *
  -> preempted                                  *
  ...                                           *
  s2: read exec queue times                     *
In this situation, counting the diagram lines as time units:

  total_cycles == 6
  cycles == 8
In a more realistic situation, as reported in CI:
(xe_drm_fdinfo:1072) DEBUG: rcs: sample 1: cycles 29223333, total_cycles 5801623069
(xe_drm_fdinfo:1072) DEBUG: rcs: sample 2: cycles 38974256, total_cycles 5811276365
(xe_drm_fdinfo:1072) DEBUG: rcs: percent: 101.000000
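For reference, the failure mode boils down to a delta ratio like the
following sketch (a simplification, with the pceu_cycles field names
assumed from the debug output above; the real check lives in
check_results() in tests/intel/xe_drm_fdinfo.c):

  /*
   * Simplified sketch of the busyness ratio. Assumes pceu_cycles
   * carries the cycles/total_cycles pairs printed above; not the
   * verbatim test code.
   */
  static double busy_percent(const struct pceu_cycles *s1,
                             const struct pceu_cycles *s2)
  {
          uint64_t cycles = s2->cycles - s1->cycles;
          uint64_t total_cycles = s2->total_cycles - s1->total_cycles;

          return (double)cycles / total_cycles * 100.0;
  }

With the CI numbers above: (38974256 - 29223333) /
(5811276365 - 5801623069) * 100 ~= 101, so the old "percent <= 100"
assert fails even though the client behaved correctly.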
Extend the same mercy to the upper end as we did to the lower end.
Long term we may move some of these tests to check only cycles by using
a timed spinner, and stop relying on the CPU sleep time and on the
timing of the fdinfo file read. See commit 01214d73c209 ("lib/xe_spin:
fixed duration xe_spin capability").
Also, this matches the tolerance applied on the i915 side in
tests/intel/drm_fdinfo.c:__assert_within_epsilon().
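For comparison, that i915 check is a two-sided tolerance of this shape
(a simplified paraphrase in the spirit of __assert_within_epsilon(),
not a verbatim copy of tests/intel/drm_fdinfo.c):

  /*
   * Two-sided tolerance check, paraphrasing i915's
   * __assert_within_epsilon(); simplified, not verbatim.
   */
  #define assert_within_epsilon(x, ref, tol) \
          igt_assert_f((double)(x) <= (1.0 + (tol)) * (double)(ref) && \
                       (double)(x) >= (1.0 - (tol)) * (double)(ref), \
                       "%f not within +/-%.0f%% of %f\n", \
                       (double)(x), (tol) * 100.0, (double)(ref))

With ref == 100 and tol == 0.05 that accepts 95..105, matching the new
bounds below.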
Signed-off-by: Lucas De Marchi <lucas.demarchi at intel.com>
---
tests/intel/xe_drm_fdinfo.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/tests/intel/xe_drm_fdinfo.c b/tests/intel/xe_drm_fdinfo.c
index 4696c6495..e3a99a2dc 100644
--- a/tests/intel/xe_drm_fdinfo.c
+++ b/tests/intel/xe_drm_fdinfo.c
@@ -484,7 +484,7 @@ check_results(struct pceu_cycles *s1, struct pceu_cycles *s2,
 	igt_debug("%s: percent: %f\n", engine_map[class], percent);
 
 	if (flags & TEST_BUSY)
-		igt_assert(percent >= 95 && percent <= 100);
+		igt_assert(percent >= 95 && percent <= 105);
 	else
 		igt_assert(!percent);
 }
--
2.43.0