[PATCH v2] drm/xe/sched: stop re-submitting signalled jobs
Matthew Brost
matthew.brost at intel.com
Tue May 27 18:20:50 UTC 2025
On Tue, May 27, 2025 at 11:20:00AM +0100, Matthew Auld wrote:
> A customer is reporting a really subtle issue where we get random DMAR
> faults, hangs and other nasties for kernel migration jobs when stressing
> things like s2idle/s3/s4. The explosions seem to happen somewhere
> after resuming the system, with splats looking something like:
>
> PM: suspend exit
> rfkill: input handler disabled
> xe 0000:00:02.0: [drm] GT0: Engine reset: engine_class=bcs, logical_mask: 0x2, guc_id=0
> xe 0000:00:02.0: [drm] GT0: Timedout job: seqno=24496, lrc_seqno=24496, guc_id=0, flags=0x13 in no process [-1]
> xe 0000:00:02.0: [drm] GT0: Kernel-submitted job timed out
>
> The likely cause appears to be a race where suspend cancels the worker
> that processes the free_job()'s, such that we still have pending jobs
> waiting to be freed after the cancel. Following from this, on resume
> the pending_list will now contain at least one already complete job,
> but we then call drm_sched_resubmit_jobs(), which calls run_job() on
> everything still on the pending_list. If the job was already complete,
> then all the resources tied to it, like the bb itself, any memory that
> is being accessed, the iommu mappings etc., might be long gone, since
> those are usually tied to the fence signalling.
>
> This scenario can be seen in ftrace when running a slightly modified
> xe_pm (the kernel was only modified to inject artificial latency into
> free_job() to make the race easier to hit):
>
> xe_sched_job_run: dev=0000:00:02.0, fence=0xffff888276cc8540, seqno=0, lrc_seqno=0, gt=0, guc_id=0, batch_addr=0x000000146910 ...
> xe_exec_queue_stop: dev=0000:00:02.0, 3:0x2, gt=0, width=1, guc_id=0, guc_state=0x0, flags=0x13
> xe_exec_queue_stop: dev=0000:00:02.0, 3:0x2, gt=0, width=1, guc_id=1, guc_state=0x0, flags=0x4
> xe_exec_queue_stop: dev=0000:00:02.0, 4:0x1, gt=1, width=1, guc_id=0, guc_state=0x0, flags=0x3
> xe_exec_queue_stop: dev=0000:00:02.0, 1:0x1, gt=1, width=1, guc_id=1, guc_state=0x0, flags=0x3
> xe_exec_queue_stop: dev=0000:00:02.0, 4:0x1, gt=1, width=1, guc_id=2, guc_state=0x0, flags=0x3
> xe_exec_queue_resubmit: dev=0000:00:02.0, 3:0x2, gt=0, width=1, guc_id=0, guc_state=0x0, flags=0x13
> xe_sched_job_run: dev=0000:00:02.0, fence=0xffff888276cc8540, seqno=0, lrc_seqno=0, gt=0, guc_id=0, batch_addr=0x000000146910 ...
> .....
> xe_exec_queue_memory_cat_error: dev=0000:00:02.0, 3:0x2, gt=0, width=1, guc_id=0, guc_state=0x3, flags=0x13
>
> So run_job() is clearly triggered twice for the same job, even though
> the first run must have already signalled to completion during suspend.
> We can also see a CAT error after the re-submit.
>
> To prevent this, call xe_sched_stop() to forcefully remove anything on
> the pending_list that has already signalled, before we re-submit.
>
> v2:
> - Make sure to re-arm the fence callbacks with sched_start().
>
> Link: https://gitlab.freedesktop.org/drm/xe/kernel/-/issues/4856
> Fixes: dd08ebf6c352 ("drm/xe: Introduce a new DRM driver for Intel GPUs")
> Signed-off-by: Matthew Auld <matthew.auld at intel.com>
> Cc: Thomas Hellström <thomas.hellstrom at linux.intel.com>
> Cc: Matthew Brost <matthew.brost at intel.com>
> Cc: William Tseng <william.tseng at intel.com>
> Cc: <stable at vger.kernel.org> # v6.8+
> ---
> drivers/gpu/drm/xe/xe_gpu_scheduler.h | 2 ++
> 1 file changed, 2 insertions(+)
>
> diff --git a/drivers/gpu/drm/xe/xe_gpu_scheduler.h b/drivers/gpu/drm/xe/xe_gpu_scheduler.h
> index c250ea773491..0c8fe0461df9 100644
> --- a/drivers/gpu/drm/xe/xe_gpu_scheduler.h
> +++ b/drivers/gpu/drm/xe/xe_gpu_scheduler.h
> @@ -51,7 +51,9 @@ static inline void xe_sched_tdr_queue_imm(struct xe_gpu_scheduler *sched)
>
> static inline void xe_sched_resubmit_jobs(struct xe_gpu_scheduler *sched)
> {
> + drm_sched_stop(&sched->base, NULL); /* remove completed jobs */
> drm_sched_resubmit_jobs(&sched->base);
drm_sched_resubmit_jobs() is deprecated. IIRC it was suggested that we
should replace drm_sched_resubmit_jobs() with a loop on the driver side
which calls run_job(). Can we do that and avoid calling run_job() on
jobs whose fence has already signaled? I think that would be preferred
here, something like the sketch below.
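
Untested sketch of roughly how the inline in xe_gpu_scheduler.h could
look instead. It assumes the submission worker is already stopped (e.g.
via xe_sched_submission_stop()) so the pending_list is stable, that the
fence callbacks get re-armed afterwards as in the v2 patch (e.g. with
drm_sched_start()), and that any reference on an old parent fence has
been dropped by the stop path; karma / guilty-job handling is omitted:

        static inline void xe_sched_resubmit_jobs(struct xe_gpu_scheduler *sched)
        {
                struct drm_sched_job *s_job;
                struct dma_fence *fence;

                list_for_each_entry(s_job, &sched->base.pending_list, list) {
                        struct drm_sched_fence *s_fence = s_job->s_fence;

                        /*
                         * Job already completed: the bb, mappings and
                         * anything else tied to fence signalling may be
                         * long gone, so never hand it back to run_job().
                         */
                        if (s_fence->parent &&
                            dma_fence_is_signaled(s_fence->parent))
                                continue;

                        fence = sched->base.ops->run_job(s_job);
                        if (IS_ERR_OR_NULL(fence)) {
                                s_fence->parent = NULL;
                        } else {
                                /* mirror drm_sched_resubmit_jobs() refcounting */
                                s_fence->parent = dma_fence_get(fence);
                                dma_fence_put(fence);
                        }
                }
        }

The dma_fence_is_signaled() check on the hardware fence is what
guarantees a completed job is never re-run, which is exactly the
failure mode in the splat above.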
Matt
> + drm_sched_start(&sched->base, 0); /* re-add fence callback for pending jobs */
> }
>
> static inline bool
> --
> 2.49.0
>