[Intel-gfx] [PATCH 3/4] drm/i915/guc: Look for a guilty context when an engine reset fails
Tvrtko Ursulin
tvrtko.ursulin at linux.intel.com
Thu Jan 12 10:15:12 UTC 2023
On 12/01/2023 02:53, John.C.Harrison at Intel.com wrote:
> From: John Harrison <John.C.Harrison at Intel.com>
>
> Engine resets are supposed to never fail. But in the case where one
> does (due to unknown reasons that normally come down to a missing
> w/a), it is useful to get as much information out of the system as
> possible. Given that the GuC effectively dies in such a situation, it
> is not possible to get a guilty context notification back from it. So
> do a manual search instead. This is safe because a dead GuC won't be
> changing the engine state asynchronously behind the KMD's back.
>
> Signed-off-by: John Harrison <John.C.Harrison at Intel.com>
> ---
> .../gpu/drm/i915/gt/uc/intel_guc_submission.c | 17 +++++++++++++++--
> 1 file changed, 15 insertions(+), 2 deletions(-)
>
> diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
> index b436dd7f12e42..99d09e3394597 100644
> --- a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
> +++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
> @@ -4754,11 +4754,24 @@ static void reset_fail_worker_func(struct work_struct *w)
> guc->submission_state.reset_fail_mask = 0;
> spin_unlock_irqrestore(&guc->submission_state.lock, flags);
>
> - if (likely(reset_fail_mask))
> + if (likely(reset_fail_mask)) {
> + struct intel_engine_cs *engine;
> + enum intel_engine_id id;
> +
> + /*
> + * GuC is toast at this point - it dead loops after sending the failed
> + * reset notification. So need to manually determine the guilty context.
> + * Note that it should be safe/reliable to do this here because the GuC
> + * is toast and will not be scheduling behind the KMD's back.
> + */
> + for_each_engine_masked(engine, gt, reset_fail_mask, id)
> + intel_guc_find_hung_context(engine);
> +
> intel_gt_handle_error(gt, reset_fail_mask,
> I915_ERROR_CAPTURE,
> - "GuC failed to reset engine mask=0x%x\n",
> + "GuC failed to reset engine mask=0x%x",
> reset_fail_mask);
> + }
> }
>
> int intel_guc_engine_failure_process_msg(struct intel_guc *guc,
This one I don't feel "at home" enough to r-b. Just a question - can we
be sure at this point that the GuC is 100% stuck and there isn't a
chance it somehow comes back to life and starts running (driven in
parallel by a different "thread" in i915), interfering with the
assumption made in the comment?
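
To be concrete about what I am reading into the patch: with it applied
the worker ends up roughly as below. The declarations and the
lock/latch at the top are my reconstruction (the hunk only starts at
the mask clear); the rest follows the quoted hunk.

static void reset_fail_worker_func(struct work_struct *w)
{
	/* Declarations and the lock/latch below are my reconstruction,
	 * not copied from the file; the hunk starts at the mask clear. */
	struct intel_guc *guc = container_of(w, struct intel_guc,
					     submission_state.reset_fail_worker);
	struct intel_gt *gt = guc_to_gt(guc);
	intel_engine_mask_t reset_fail_mask;
	unsigned long flags;

	/* Latch and clear the mask of engines whose reset has failed. */
	spin_lock_irqsave(&guc->submission_state.lock, flags);
	reset_fail_mask = guc->submission_state.reset_fail_mask;
	guc->submission_state.reset_fail_mask = 0;
	spin_unlock_irqrestore(&guc->submission_state.lock, flags);

	if (likely(reset_fail_mask)) {
		struct intel_engine_cs *engine;
		enum intel_engine_id id;

		/*
		 * GuC is assumed dead from here on, so the KMD does the
		 * guilty context search itself before capturing the error.
		 */
		for_each_engine_masked(engine, gt, reset_fail_mask, id)
			intel_guc_find_hung_context(engine);

		intel_gt_handle_error(gt, reset_fail_mask,
				      I915_ERROR_CAPTURE,
				      "GuC failed to reset engine mask=0x%x",
				      reset_fail_mask);
	}
}

If that reading is right, the question is really whether anything in
i915 can bring the GuC back to life between the mask being latched and
intel_gt_handle_error() completing, while intel_guc_find_hung_context()
is still walking the contexts.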
Regards,
Tvrtko