<!DOCTYPE html><html><head>
<meta http-equiv="Content-Type" content="text/html; charset=utf-8">
</head>
<body>
<div class="moz-cite-prefix">On 28.05.2025 22:02, Michał Winiarski
wrote:<br>
</div>
<blockquote type="cite" cite="mid:75aq2gastxnu4l5c3izbvniqzpwyfk45ux5oedfrwg3ir4kh3h@fw5zpvvek36d">
<pre wrap="" class="moz-quote-pre">On Tue, May 20, 2025 at 01:19:22AM +0200, Tomasz Lis wrote:
</pre>
<blockquote type="cite">
<pre wrap="" class="moz-quote-pre">Resetting the GuC during migration recovery could interfere with the
recovery process. Such a reset might also be triggered without
justification, due to the migration taking time, rather than due to
the workload not progressing.

Doing a GuC reset during the recovery would cause an exit from the
RESFIX state, and therefore let the GuC continue its work while
fixups are still being applied. To avoid that, resets need to be
blocked for the duration of the recovery.

This patch blocks resets during recovery. Reset requests within that
time range will be dropped.

In case a reset procedure has already started when the recovery is
triggered, there isn't much we can do - we cannot wait for it to
finish, as that involves waiting for hardware, and we can't be sure
at which exact point of the reset procedure the GPU got switched.
Therefore, the rare cases where migration happens while a reset is
in progress remain dangerous. Resets are not a part of the standard
flow, and they leave workloads unfinished - that will also happen
when a reset is interrupted by migration, so it doesn't diverge much
from what normally happens during such resets.

Signed-off-by: Tomasz Lis <tomasz.lis@intel.com>
Cc: Michal Wajdeczko <michal.wajdeczko@intel.com>
---
drivers/gpu/drm/xe/xe_guc_submit.c | 26 ++++++++++++++++++++++++--
drivers/gpu/drm/xe/xe_guc_submit.h | 2 ++
drivers/gpu/drm/xe/xe_sriov_vf.c | 12 ++++++++++--
3 files changed, 36 insertions(+), 4 deletions(-)
diff --git a/drivers/gpu/drm/xe/xe_guc_submit.c b/drivers/gpu/drm/xe/xe_guc_submit.c
index 6f280333de13..69ccfb2e1cff 100644
--- a/drivers/gpu/drm/xe/xe_guc_submit.c
+++ b/drivers/gpu/drm/xe/xe_guc_submit.c
@@ -1761,7 +1761,11 @@ static void guc_exec_queue_stop(struct xe_guc *guc, struct xe_exec_queue *q)
 	}
 }
 
-int xe_guc_submit_reset_prepare(struct xe_guc *guc)
+/**
+ * xe_guc_submit_reset_block - Disallow reset calls on given GuC.
+ * @guc: the &xe_guc struct instance
+ */
+int xe_guc_submit_reset_block(struct xe_guc *guc)
 {
 	int ret;
 
@@ -1774,6 +1778,24 @@ int xe_guc_submit_reset_prepare(struct xe_guc *guc)
 	 */
 	ret = atomic_fetch_or(1, &guc->submission_state.stopped);
 	smp_wmb();
+
+	return ret;
+}
+
+/**
+ * xe_guc_submit_reset_unblock - Re-allow reset calls on given GuC.
+ * @guc: the &xe_guc struct instance
+ */
+void xe_guc_submit_reset_unblock(struct xe_guc *guc)
+{
+	atomic_dec(&guc->submission_state.stopped);
+}
+
+int xe_guc_submit_reset_prepare(struct xe_guc *guc)
+{
+	int ret;
+
+	ret = xe_guc_submit_reset_block(guc);
 	wake_up_all(&guc->ct.wq);
 
 	return ret;
@@ -1849,7 +1871,7 @@ int xe_guc_submit_start(struct xe_guc *guc)
 	xe_gt_assert(guc_to_gt(guc), xe_guc_read_stopped(guc) == 1);
 
 	mutex_lock(&guc->submission_state.lock);
-	atomic_dec(&guc->submission_state.stopped);
+	xe_guc_submit_reset_unblock(guc);
 	xa_for_each(&guc->submission_state.exec_queue_lookup, index, q) {
 		/* Prevent redundant attempts to start parallel queues */
 		if (q->guc->id != index)
diff --git a/drivers/gpu/drm/xe/xe_guc_submit.h b/drivers/gpu/drm/xe/xe_guc_submit.h
index f1cf271492ae..2c2d2936440d 100644
--- a/drivers/gpu/drm/xe/xe_guc_submit.h
+++ b/drivers/gpu/drm/xe/xe_guc_submit.h
@@ -20,6 +20,8 @@ void xe_guc_submit_stop(struct xe_guc *guc);
 int xe_guc_submit_start(struct xe_guc *guc);
 void xe_guc_submit_pause(struct xe_guc *guc);
 void xe_guc_submit_unpause(struct xe_guc *guc);
+int xe_guc_submit_reset_block(struct xe_guc *guc);
+void xe_guc_submit_reset_unblock(struct xe_guc *guc);
 void xe_guc_submit_wedge(struct xe_guc *guc);
 
 int xe_guc_read_stopped(struct xe_guc *guc);
diff --git a/drivers/gpu/drm/xe/xe_sriov_vf.c b/drivers/gpu/drm/xe/xe_sriov_vf.c
index fcd82a0fda48..82b3dd57de73 100644
--- a/drivers/gpu/drm/xe/xe_sriov_vf.c
+++ b/drivers/gpu/drm/xe/xe_sriov_vf.c
@@ -150,9 +150,15 @@ static void vf_post_migration_shutdown(struct xe_device *xe)
 {
 	struct xe_gt *gt;
 	unsigned int id;
+	int ret = 0;
 
-	for_each_gt(gt, xe, id)
+	for_each_gt(gt, xe, id) {
 		xe_guc_submit_pause(&gt->uc.guc);
+		ret |= xe_guc_submit_reset_block(&gt->uc.guc);
+	}
+
+	if (ret)
+		drm_info(&xe->drm, "migration recovery encountered ongoing reset\n");
</pre>
</blockquote>
<pre wrap="" class="moz-quote-pre">I'd suggest debug level, as it doesn't seem all that useful in general.
But I guess you want to have a trace that this happened and the handling
of this scenario is somewhat iffy...</pre>
</blockquote>
<p>Yes, reset and post-migration recovery interfere, leading to
dangerous corner cases. If they meet, it's better if we know that
from the log.</p>
<p>Reset is a rare, exceptional procedure, so we don't expect this
to be logged much.<br>
</p>
<blockquote type="cite" cite="mid:75aq2gastxnu4l5c3izbvniqzpwyfk45ux5oedfrwg3ir4kh3h@fw5zpvvek36d">
<blockquote type="cite">
<pre wrap="" class="moz-quote-pre"> }
 
 /**
@@ -170,8 +176,10 @@ static void vf_post_migration_kickstart(struct xe_device *xe)
 	/* make sure interrupts on the new HW are properly set */
 	xe_irq_resume(xe);
 
-	for_each_gt(gt, xe, id)
+	for_each_gt(gt, xe, id) {
+		xe_guc_submit_reset_unblock(&gt->uc.guc);
</pre>
</blockquote>
<pre wrap="" class="moz-quote-pre">If we were doing migration recovery during reset, we'll change the state
of guc->submission_state.stopped (btw... is it possible to decrement it
below 0 now?).</pre>
</blockquote>
<p>Yes, the decrement should be fixed. I'm not sure why
xe_guc_submit_reset_block() doesn't handle that by itself; maybe it was
just a simplification, since it used to be called only during reset.
But the further analysis below makes this irrelevant.</p>
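<p>For reference, if the counter approach were kept, a guarded
decrement would avoid going below zero. A minimal sketch (not tested,
and moot given the analysis below):</p>
<pre>
/* Sketch: atomic_dec_if_positive() only decrements while the result
 * stays non-negative, so an unblock without a matching block cannot
 * push the counter below zero; it returns a negative value instead.
 */
void xe_guc_submit_reset_unblock(struct xe_guc *guc)
{
	if (atomic_dec_if_positive(&guc->submission_state.stopped) < 0)
		xe_gt_warn(guc_to_gt(guc), "unbalanced reset unblock\n");
}
</pre>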
<blockquote type="cite" cite="mid:75aq2gastxnu4l5c3izbvniqzpwyfk45ux5oedfrwg3ir4kh3h@fw5zpvvek36d">
<pre wrap="" class="moz-quote-pre">There are "some" places where we have asserts that expect a particular
state, and some places where we use that state as an event (combined
with a waitqueue). Should we wake up the waiters at some point? And
perhaps synchronize with the places which depend on being in a "stopped"
state (if at all possible)?</pre>
</blockquote>
<p>After getting an understanding of all the wait places, it looks
to me that my implementation is just wrong.</p>
<p>We cannot reuse xe_guc_submit_reset_block() here, regardless of
whether it is made async or not, because it leads to all waits for GuC
responses being terminated, usually with an error. That is not
acceptable: in the migration case, the related responses from GuC will
arrive as soon as the recovery is done.</p>
<p>Even if we don't wake up the waiters within migration recovery, the
stopped check may still be triggered somewhere else, or a new wait call
may happen and bail out of the wait immediately, leaving us with a
detached G2H response - or maybe even freeing an object while the HW
still accesses it (i.e. a preempted queue may be sent back to HW by GuC
on RESFIX_DONE before GuC reads all pending G2H commands; the context
becomes active, but on the KMD side it bailed out from waiting to be
disabled and got freed).</p>
<p>I will figure out a better way to block the reset, one which doesn't
mess with the existing waits.</p>
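<p>One direction I'm considering (just a sketch; "reset_blocked" is a
provisional name, not an existing field): a dedicated counter that only
the reset entry path consults, leaving submission_state.stopped and its
waiters untouched:</p>
<pre>
/* Sketch only: block resets via a dedicated counter instead of
 * piggybacking on submission_state.stopped, so no CT waiter ever
 * observes a "stopped" state and no waits get woken or terminated.
 * "reset_blocked" is a provisional field, not present in the driver.
 */
void xe_guc_submit_reset_block(struct xe_guc *guc)
{
	atomic_inc(&guc->submission_state.reset_blocked);
}

void xe_guc_submit_reset_unblock(struct xe_guc *guc)
{
	atomic_dec(&guc->submission_state.reset_blocked);
}

/* At the entry to the reset path: drop the request while blocked. */
static bool guc_reset_allowed(struct xe_guc *guc)
{
	return !atomic_read(&guc->submission_state.reset_blocked);
}
</pre>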
<p>-Tomasz</p>
<blockquote type="cite" cite="mid:75aq2gastxnu4l5c3izbvniqzpwyfk45ux5oedfrwg3ir4kh3h@fw5zpvvek36d">
<pre wrap="" class="moz-quote-pre">Thanks,
-Michał
</pre>
<blockquote type="cite">
<pre wrap="" class="moz-quote-pre"> 		xe_guc_submit_unpause(&gt->uc.guc);
+	}
 }
 
 /**
--
2.25.1
</pre>
</blockquote>
</blockquote>
</body>
</html>