[Intel-gfx] [RFC] drm/i915: Don't reset on preemptible workloads
Chris Wilson
chris at chris-wilson.co.uk
Wed Aug 1 14:22:08 UTC 2018
Quoting Jakub Bartmiński (2018-08-01 14:56:11)
> The current behaviour of the hangcheck is that if we detect that a
> request is not making any forward progress, the driver will attempt an
> engine reset. If that's not successful, we fall back to a full device
> reset.
>
> This patch would change it so that if hangcheck encounters a
> low-priority workload, it will attempt to preempt it before declaring
> a hang. If the preemption is successful, we allow the workload to
> continue "in the background" (until the next hangcheck run, and the
> next attempt to preempt it). If the context has been closed, we simply
> skip the workload's execution instead.
>
> This new behaviour would allow the user to submit intentionally large
> or passive workloads, which would normally be affected by the
> hangcheck, without having to divide them into smaller chunks of work.
>
> Suggested-by: Michał Winiarski <michal.winiarski at intel.com>
> Signed-off-by: Jakub Bartmiński <jakub.bartminski at intel.com>
> Cc: Chris Wilson <chris at chris-wilson.co.uk>
> Cc: Joonas Lahtinen <joonas.lahtinen at linux.intel.com>
> Cc: Tvrtko Ursulin <tvrtko.ursulin at linux.intel.com>
> ---
>  drivers/gpu/drm/i915/intel_hangcheck.c  | 29 +++++++++++++++++++
>  drivers/gpu/drm/i915/intel_lrc.c        | 37 +++++++++++++++++++++++--
>  drivers/gpu/drm/i915/intel_lrc.h        |  1 +
>  drivers/gpu/drm/i915/intel_ringbuffer.h |  1 +
>  4 files changed, 65 insertions(+), 3 deletions(-)
>
> diff --git a/drivers/gpu/drm/i915/intel_hangcheck.c b/drivers/gpu/drm/i915/intel_hangcheck.c
> index 2fc7a0dd0df9..5ebd3ca74855 100644
> --- a/drivers/gpu/drm/i915/intel_hangcheck.c
> +++ b/drivers/gpu/drm/i915/intel_hangcheck.c
> @@ -398,6 +398,28 @@ static void hangcheck_declare_hang(struct drm_i915_private *i915,
>  	return i915_handle_error(i915, hung, I915_ERROR_CAPTURE, "%s", msg);
>  }
>
> +static bool hangcheck_preempt_workload(struct intel_engine_cs *engine)
> +{
> +	struct i915_request *active_request;
> +	int workload_priority;
> +
> +	/* We have already tried preempting, but the hardware did not react */
> +	if (engine->hangcheck.try_preempt)
> +		return false;
> +
> +	active_request = i915_gem_find_active_request(engine);
No ownership of rq is taken here. You either need a new function that
returns a reference, or careful application of RCU.
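Something along the lines of this untested sketch would do (the helper
name is made up; the list walk mirrors i915_gem_find_active_request(),
but takes the reference before the lock is dropped):

static struct i915_request *
engine_active_request_get(struct intel_engine_cs *engine)
{
	struct i915_request *rq, *active = NULL;
	unsigned long flags;

	spin_lock_irqsave(&engine->timeline.lock, flags);
	list_for_each_entry(rq, &engine->timeline.requests, link) {
		if (i915_request_completed(rq))
			continue;

		/* Pin the request while we still hold the timeline lock */
		active = i915_request_get(rq);
		break;
	}
	spin_unlock_irqrestore(&engine->timeline.lock, flags);

	return active; /* may be NULL; caller must i915_request_put() */
}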
> +	workload_priority = active_request->gem_context->sched.priority;
> +
> +	if (workload_priority == I915_CONTEXT_MIN_USER_PRIORITY) {
> +		engine->hangcheck.try_preempt = true;
> +		engine->hangcheck.active_request = active_request;
> +		intel_lr_inject_preempt_context(engine);
> +		return true;
> +	}
> +
> +	return false;
> +}
> +
>  /*
>   * This is called when the chip hasn't reported back with completed
>   * batchbuffers in a long time. We keep track per ring seqno progress and
> @@ -440,6 +462,13 @@ static void i915_hangcheck_elapsed(struct work_struct *work)
>  		hangcheck_store_sample(engine, &hc);
>
>  		if (engine->hangcheck.stalled) {
> +			/*
> +			 * Try preempting the current workload before
> +			 * declaring the engine hung.
> +			 */
> +			if (hangcheck_preempt_workload(engine))
> +				continue;
> +
>  			hung |= intel_engine_flag(engine);
>  			if (hc.action != ENGINE_DEAD)
>  				stuck |= intel_engine_flag(engine);
> diff --git a/drivers/gpu/drm/i915/intel_lrc.c b/drivers/gpu/drm/i915/intel_lrc.c
> index fad689efb67a..3ec8dcf64000 100644
> --- a/drivers/gpu/drm/i915/intel_lrc.c
> +++ b/drivers/gpu/drm/i915/intel_lrc.c
> @@ -326,15 +326,39 @@ static void __unwind_incomplete_requests(struct intel_engine_cs *engine)
>  {
>  	struct i915_request *rq, *rn;
>  	struct i915_priolist *uninitialized_var(p);
> +	struct i915_gem_context *active_context = NULL;
> +	bool skip_seqno = false;
> +	u32 new_seqno = 0;
>  	int last_prio = I915_PRIORITY_INVALID;
>
>  	lockdep_assert_held(&engine->timeline.lock);
>
> +	if (engine->hangcheck.try_preempt) {
> +		rq = engine->hangcheck.active_request;
> +		GEM_BUG_ON(!rq);
> +
> +		active_context = rq->gem_context;
> +		GEM_BUG_ON(!active_context);
> +
> +		/*
> +		 * If the workload is preemptible but its context was closed
> +		 * we force the engine to skip its execution instead.
> +		 */
> +		if (i915_gem_context_is_closed(active_context))
> +			skip_seqno = true;
Nope. Even if the context is closed, its results may still be depended
upon by others, both on the GPU and in userspace.
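A fence exported with I915_EXEC_FENCE_OUT is one example: it outlives
the context that produced it. (Illustrative userspace snippet below;
the out-fence uapi is real, the helper name is made up and error
handling is elided.)

#include <xf86drm.h>
#include <i915_drm.h>

/*
 * Submit a batch and return a sync_file fd for its completion. That fd
 * can be handed to another process and waited upon long after the
 * submitting context has been closed, so the request's results remain
 * visible to others.
 */
static int submit_with_out_fence(int fd, struct drm_i915_gem_execbuffer2 *eb)
{
	eb->flags |= I915_EXEC_FENCE_OUT;

	if (drmIoctl(fd, DRM_IOCTL_I915_GEM_EXECBUFFER2_WR, eb))
		return -1;

	return eb->rsvd2 >> 32; /* out-fence fd, upper 32 bits of rsvd2 */
}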
Ok, I don't see any other use of try_preempt...
So the gist of this is to excuse the lowest-priority user contexts from
hangcheck; do we want to go one step further and ask userspace to opt in
explicitly? Though there is an argument that as long as a workload is
preemptible (and isolated) it is not acting as a DoS, so we don't need
to enforce a timeout on it. That also suggests that we need not reset
until the request is blocking other requests or userspace. As discussed
that seems reasonable, but I think this does need to actually trigger
preemption.
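For the record, an explicit opt-in could be as simple as a new context
param. I915_CONTEXT_PARAM_NO_HANGCHECK below is made up purely to
illustrate the shape such a uapi could take; no such param exists:

#include <xf86drm.h>
#include <i915_drm.h>

/* Hypothetical uapi: sketch only, the param and its value do not exist */
#define I915_CONTEXT_PARAM_NO_HANGCHECK 0x8

static int context_opt_out_of_hangcheck(int fd, __u32 ctx_id)
{
	struct drm_i915_gem_context_param p = {
		.ctx_id = ctx_id,
		.param = I915_CONTEXT_PARAM_NO_HANGCHECK,
		.value = 1, /* preempt instead of reset on hangcheck */
	};

	return drmIoctl(fd, DRM_IOCTL_I915_GEM_CONTEXT_SETPARAM, &p);
}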
-Chris