[PATCH 3/3] drm/xe/guc_submit: fix race around suspend_pending
Matthew Brost
matthew.brost at intel.com
Mon Nov 25 23:42:37 UTC 2024
On Fri, Nov 22, 2024 at 04:19:17PM +0000, Matthew Auld wrote:
> Currently in some testcases we can trigger:
>
> xe 0000:03:00.0: [drm] Assertion `exec_queue_destroyed(q)` failed!
> ....
> WARNING: CPU: 18 PID: 2640 at drivers/gpu/drm/xe/xe_guc_submit.c:1826 xe_guc_sched_done_handler+0xa54/0xef0 [xe]
> xe 0000:03:00.0: [drm] *ERROR* GT1: DEREGISTER_DONE: Unexpected engine state 0x00a1, guc_id=57
>
> Looking at a snippet of corresponding ftrace for this GuC id we can see:
>
> 162.673311: xe_sched_msg_add: dev=0000:03:00.0, gt=1 guc_id=57, opcode=3
> 162.673317: xe_sched_msg_recv: dev=0000:03:00.0, gt=1 guc_id=57, opcode=3
> 162.673319: xe_exec_queue_scheduling_disable: dev=0000:03:00.0, 1:0x2, gt=1, width=1, guc_id=57, guc_state=0x29, flags=0x0
> 162.674089: xe_exec_queue_kill: dev=0000:03:00.0, 1:0x2, gt=1, width=1, guc_id=57, guc_state=0x29, flags=0x0
> 162.674108: xe_exec_queue_close: dev=0000:03:00.0, 1:0x2, gt=1, width=1, guc_id=57, guc_state=0xa9, flags=0x0
> 162.674488: xe_exec_queue_scheduling_done: dev=0000:03:00.0, 1:0x2, gt=1, width=1, guc_id=57, guc_state=0xa9, flags=0x0
> 162.678452: xe_exec_queue_deregister: dev=0000:03:00.0, 1:0x2, gt=1, width=1, guc_id=57, guc_state=0xa1, flags=0x0
>
> It looks like we try to suspend the queue (opcode=3), setting
> suspend_pending and triggering a disable_scheduling. The user then
> closes the queue. However, closing the queue seems to forcefully
> signal the suspend fence after killing the queue, so by the time the
> G2H response for disable_scheduling comes back, suspend_pending has
> already been cleared when signalling the fence. The disable_scheduling
> handling then incorrectly also tries to deregister the queue, leading
> to warnings since the queue has not even been marked for destruction
> yet. We also seem to trigger errors later when trying to double
> unregister the same queue.
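For reference, the pre-fix ordering in handle_sched_done() for the
runnable_state == 0 case looks roughly like this (reconstructed from
the hunk below, simplified and not verbatim):

	clear_exec_queue_pending_disable(q);
	if (q->guc->suspend_pending) {
		/* suspend case: just signal the fence, no deregister */
		suspend_fence_signal(q);
	} else {
		if (exec_queue_banned(q) || check_timeout) {
			smp_wmb();
			wake_up_all(&guc->ct.wq);
		}
		/* assumes this disable was issued as part of a destroy */
		if (!check_timeout)
			deregister_exec_queue(guc, q);
	}

So if the close/kill path has already signalled the suspend fence and
cleared suspend_pending by the time the G2H lands, the handler falls
into the else branch and deregisters a queue that was never marked
destroyed, which is the assert splat quoted above.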
>
> To fix this, tweak the ordering when handling the response to ensure
> we don't race with a disable_scheduling that doesn't actually intend
> to unregister. The destruction path should now also correctly wait for
> any pending_disable before marking the queue as destroyed.
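The destroy-side wait isn't part of this hunk, so just to sketch the
ordering the above relies on (illustrative only; apart from
exec_queue_pending_disable() the helper names here are placeholders,
not the real driver functions):

	/* destroy path: wait out any in-flight disable_scheduling first,
	 * so handle_sched_done() samples destroyed == false for a
	 * suspend-only disable before we flip the state
	 */
	wait_for_pending_disable(q);	/* placeholder */
	mark_exec_queue_destroyed(q);	/* placeholder */

With that, handle_sched_done() only ever samples destroyed == true for
a disable that really is part of a destroy, and the deregister happens
exactly once.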
>
> Fixes: dd08ebf6c352 ("drm/xe: Introduce a new DRM driver for Intel GPUs")
> Closes: https://gitlab.freedesktop.org/drm/xe/kernel/-/issues/3371
> Signed-off-by: Matthew Auld <matthew.auld at intel.com>
> Cc: Matthew Brost <matthew.brost at intel.com>
Also spotted this one when working on UMD submission. Same comment as
on the previous patch: this looks correct, but maybe longer term a bit
more cleanup in the GuC backend would be a good idea.
Reviewed-by: Matthew Brost <matthew.brost at intel.com>
> Cc: <stable at vger.kernel.org> # v6.8+
> ---
> drivers/gpu/drm/xe/xe_guc_submit.c | 18 ++++++++++++++++--
> 1 file changed, 16 insertions(+), 2 deletions(-)
>
> diff --git a/drivers/gpu/drm/xe/xe_guc_submit.c b/drivers/gpu/drm/xe/xe_guc_submit.c
> index f3c22b101916..f82f286fd431 100644
> --- a/drivers/gpu/drm/xe/xe_guc_submit.c
> +++ b/drivers/gpu/drm/xe/xe_guc_submit.c
> @@ -1867,16 +1867,30 @@ static void handle_sched_done(struct xe_guc *guc, struct xe_exec_queue *q,
>  		xe_gt_assert(guc_to_gt(guc), runnable_state == 0);
>  		xe_gt_assert(guc_to_gt(guc), exec_queue_pending_disable(q));
>
> -		clear_exec_queue_pending_disable(q);
>  		if (q->guc->suspend_pending) {
>  			suspend_fence_signal(q);
> +			clear_exec_queue_pending_disable(q);
>  		} else {
>  			if (exec_queue_banned(q) || check_timeout) {
>  				smp_wmb();
>  				wake_up_all(&guc->ct.wq);
>  			}
> -			if (!check_timeout)
> +			if (!check_timeout && exec_queue_destroyed(q)) {
> +				/*
> +				 * Make sure to clear the pending_disable only
> +				 * after sampling the destroyed state. We want
> +				 * to ensure we don't trigger the unregister
> +				 * too early with something intending to only
> +				 * disable scheduling. The caller doing the
> +				 * destroy must wait for an ongoing
> +				 * pending_disable before marking as
> +				 * destroyed.
> +				 */
> +				clear_exec_queue_pending_disable(q);
>  				deregister_exec_queue(guc, q);
> +			} else {
> +				clear_exec_queue_pending_disable(q);
> +			}
>  		}
>  	}
>  }
> --
> 2.47.0
>