Re: ✗ Xe.CI.Full: failure for PF: Mitigate unexpected GuC VF config checks

Michal Wajdeczko michal.wajdeczko at intel.com
Thu Jan 30 14:41:10 UTC 2025



On 30.01.2025 05:32, Patchwork wrote:
> == Series Details ==
> 
> Series: PF: Mitigate unexpected GuC VF config checks
> URL   : https://patchwork.freedesktop.org/series/144120/
> State : failure
> 
> == Summary ==
> 
> CI Bug Log - changes from xe-2571-45746920fb499b7ad35883c94ebfbedda6c40139_full -> xe-pw-144120v1_full
> ====================================================
> 
> Summary
> -------
> 
>   **FAILURE**
> 
>   Serious unknown changes coming with xe-pw-144120v1_full absolutely need to be
>   verified manually.
>   
>   If you think the reported changes have nothing to do with the changes
>   introduced in xe-pw-144120v1_full, please notify your bug team (I915-ci-infra at lists.freedesktop.org) to allow them
>   to document this new failure mode, which will reduce false positives in CI.
> 
>   
> 
> Participating hosts (4 -> 4)
> ------------------------------
> 
>   No changes in participating hosts
> 
> Possible new issues
> -------------------
> 
>   Here are the unknown changes that may have been introduced in xe-pw-144120v1_full:
> 
> ### IGT changes ###
> 
> #### Possible regressions ####
> 
>   * igt at core_hotunplug@hotrebind-lateclose:
>     - shard-dg2-set2:     [PASS][1] -> [ABORT][2]
>    [1]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2571-45746920fb499b7ad35883c94ebfbedda6c40139/shard-dg2-463/igt@core_hotunplug@hotrebind-lateclose.html
>    [2]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-144120v1/shard-dg2-463/igt@core_hotunplug@hotrebind-lateclose.html

unrelated, from a non-PF run; the warning comes from the audio/sound stack:

<4> [201.700469] refcount_t: addition on 0; use-after-free.
<4> [201.700485] WARNING: CPU: 6 PID: 548 at lib/refcount.c:25 refcount_warn_saturate+0x12e/0x150
<4> [201.700645] Workqueue: pm pm_runtime_work
<4> [201.700653] RIP: 0010:refcount_warn_saturate+0x12e/0x150
<4> [201.700690] Call Trace:
<4> [201.700692]  <TASK>
<4> [201.700695]  ? show_regs+0x6c/0x80
<4> [201.700701]  ? __warn+0x93/0x1c0
<4> [201.700707]  ? refcount_warn_saturate+0x12e/0x150
<4> [201.700713]  ? report_bug+0x182/0x1b0
<4> [201.700720]  ? handle_bug+0x6e/0xb0
<4> [201.700725]  ? exc_invalid_op+0x18/0x80
<4> [201.700730]  ? asm_exc_invalid_op+0x1b/0x20
<4> [201.700741]  ? refcount_warn_saturate+0x12e/0x150
<4> [201.700747]  ? refcount_warn_saturate+0x12e/0x150
<4> [201.700752]  kobject_get+0x7c/0x80
<4> [201.700758]  get_device+0x13/0x30
<4> [201.700763]  snd_jack_report+0xa0/0x220 [snd]
<4> [201.700779]  ? __pfx_hda_codec_runtime_resume+0x10/0x10 [snd_hda_codec]
<4> [201.700797]  snd_hda_jack_report_sync+0x8e/0xe0 [snd_hda_codec]
<4> [201.700815]  hda_call_codec_resume+0x139/0x160 [snd_hda_codec]
<4> [201.700829]  ? __pfx_hda_codec_runtime_resume+0x10/0x10 [snd_hda_codec]
<4> [201.700843]  hda_codec_runtime_resume+0x30/0x80 [snd_hda_codec]
<4> [201.700856]  __rpm_callback+0x4d/0x170
<4> [201.700861]  ? ktime_get_mono_fast_ns+0x39/0xd0
<4> [201.700867]  ? __pfx_hda_codec_runtime_resume+0x10/0x10 [snd_hda_codec]
<4> [201.700881]  rpm_callback+0x64/0x70
<4> [201.700886]  ? __pfx_hda_codec_runtime_resume+0x10/0x10 [snd_hda_codec]
<4> [201.700899]  rpm_resume+0x594/0x790
<4> [201.700908]  pm_runtime_work+0x8c/0xf0
<4> [201.700914]  process_one_work+0x21c/0x740
<4> [201.700924]  worker_thread+0x1db/0x3c0

> 
>   * igt at kms_plane_scaling@intel-max-src-size at pipe-a-hdmi-a-6:
>     - shard-dg2-set2:     NOTRUN -> [DMESG-WARN][3] +1 other test dmesg-warn
>    [3]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-144120v1/shard-dg2-436/igt@kms_plane_scaling@intel-max-src-size@pipe-a-hdmi-a-6.html

unrelated, from a non-PF run; the warning comes from KMS (a vblank wait timeout):

<4> [184.185338] xe 0000:03:00.0: [drm] vblank wait timed out on crtc 0
<4> [184.185376] WARNING: CPU: 12 PID: 5724 at drivers/gpu/drm/drm_vblank.c:1307 drm_wait_one_vblank+0x1fe/0x220


> 
>   * igt at kms_sequence@queue-busy at pipe-c-dp-4:
>     - shard-dg2-set2:     NOTRUN -> [INCOMPLETE][4]
>    [4]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-144120v1/shard-dg2-436/igt@kms_sequence@queue-busy@pipe-c-dp-4.html

unrelated, from a non-PF run; the log shows the subtest actually finished with SUCCESS, so likely another CI glitch?

<7>[  253.529989] [IGT] kms_sequence: finished subtest pipe-C-DP-4, SUCCESS
<7>[  258.475150] [IGT] kms_sequence: exiting, ret=0

> 
>   * igt at xe_pm@d3hot-mmap-vram:
>     - shard-dg2-set2:     [PASS][5] -> [FAIL][6]
>    [5]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2571-45746920fb499b7ad35883c94ebfbedda6c40139/shard-dg2-463/igt@xe_pm@d3hot-mmap-vram.html
>    [6]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-144120v1/shard-dg2-463/igt@xe_pm@d3hot-mmap-vram.html

must be unrelated, since it's from a non-PF run.
Btw, there are dmesg logs from both DG2 (native) and ADLP (PF mode), and
both show suspend/resume activity.

Starting subtest: d3hot-mmap-vram
(xe_pm:3238) CRITICAL: Test assertion failure function test_mmap, file ../tests/intel/xe_pm.c:687:
(xe_pm:3238) CRITICAL: Failed assertion: in_d3(device, d_state)
(xe_pm:3238) CRITICAL: Last errno: 2, No such file or directory
Subtest d3hot-mmap-vram failed.

