[Intel-gfx] [PATCH 10/34] drm/i915: Remove GPU reset dependence on struct_mutex

Chris Wilson chris at chris-wilson.co.uk
Thu Jan 24 12:50:30 UTC 2019


Quoting Mika Kuoppala (2019-01-24 12:06:01)
> Chris Wilson <chris at chris-wilson.co.uk> writes:
> > +     mutex_lock(&i915->drm.struct_mutex);
> >       i915_gem_contexts_lost(i915);
> >       mutex_unlock(&i915->drm.struct_mutex);
> >  }
> > @@ -4534,6 +4528,8 @@ int i915_gem_suspend(struct drm_i915_private *i915)
> >       wakeref = intel_runtime_pm_get(i915);
> >       intel_suspend_gt_powersave(i915);
> >  
> > +     flush_workqueue(i915->wq);
> 
> I don't know what is happening here. Why
> don't we need the i915_gem_drain_workqueue in here?

It's just a poke at the queue before we end up doing the same work
ourselves.
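
As a rough illustration of the distinction (generic workqueue API, not
the i915-specific drain helper):

    /* flush: wait for all work queued at this moment to complete */
    flush_workqueue(i915->wq);

    /* drain: keep flushing until nothing re-queues itself; while
     * draining, only work items on the same queue may chain new work */
    drain_workqueue(i915->wq);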

> >       mutex_lock(&i915->drm.struct_mutex);
> >  
> >       /*
> > @@ -4563,11 +4559,9 @@ int i915_gem_suspend(struct drm_i915_private *i915)
> >       i915_retire_requests(i915); /* ensure we flush after wedging */
> >  
> >       mutex_unlock(&i915->drm.struct_mutex);
> > +     i915_reset_flush(i915);
> >  
> > -     intel_uc_suspend(i915);
> > -
> > -     cancel_delayed_work_sync(&i915->gpu_error.hangcheck_work);
> > -     cancel_delayed_work_sync(&i915->gt.retire_work);
> > +     drain_delayed_work(&i915->gt.retire_work);
> 
> Hangcheck is inside reset flush but why the change
> for retire?

So that we don't leave retire_work in an ill-defined state. I was
probably thinking consistency over cleverness. It's also highly probable
that I stuck something else in there at one point.
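
For reference, drain_delayed_work() is an i915-local helper; roughly, it
keeps flushing until the work stops re-arming itself, instead of
cancelling a pending timer outright the way cancel_delayed_work_sync()
does:

    static inline void drain_delayed_work(struct delayed_work *dw)
    {
            do {
                    while (flush_delayed_work(dw))
                            ;
            } while (delayed_work_pending(dw));
    }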

> > -     ecode = i915_error_generate_code(dev_priv, error, &engine_id);
> > +     for (i = 0; i < ARRAY_SIZE(error->engine); i++)
> > +             if (!error->engine[i].context.pid)
> > +                     engines &= ~BIT(i);
> 
> No more grouping for driver internal hangs...?

There's no pid, right, so the message at the moment is garbled. The
ecode is just noise: you don't use it for analysis, and you don't use it
for error state matching (or at least you shouldn't, as it has no
meaning wrt the error state).

> >       len = scnprintf(error->error_msg, sizeof(error->error_msg),
> > -                     "GPU HANG: ecode %d:%d:0x%08x",
> > -                     INTEL_GEN(dev_priv), engine_id, ecode);
> > -
> > -     if (engine_id != -1 && error->engine[engine_id].context.pid)
> > +                     "GPU HANG: ecode %d:%lx:0x%08x",
> > +                     INTEL_GEN(error->i915), engines,
> > +                     i915_error_generate_code(error, engines));
> > +     if (engines) {
> > +             /* Just show the first executing process, more is confusing */
> > +             i = ffs(engines);
> 
> then why not just make the ecode accept a single engine and move it here?

I don't think the ecode should just accept a single engine. I don't
think the ecode should exist at all, but that's another matter.

> > -     /*
> > -      * We want to perform per-engine reset from atomic context (e.g.
> > -      * softirq), which imposes the constraint that we cannot sleep.
> > -      * However, experience suggests that spending a bit of time waiting
> > -      * for a reset helps in various cases, so for a full-device reset
> > -      * we apply the opposite rule and wait if we want to. As we should
> > -      * always follow up a failed per-engine reset with a full device reset,
> > -      * being a little faster, stricter and more error prone for the
> > -      * atomic case seems an acceptable compromise.
> > -      *
> > -      * Unfortunately this leads to a bimodal routine, when the goal was
> > -      * to have a single reset function that worked for resetting any
> > -      * number of engines simultaneously.
> > -      */
> > -     might_sleep_if(engine_mask == ALL_ENGINES);
> 
> Oh here it is. I was after this on atomic resets.

I was saying it didn't make sense to lift the restriction until we
relied upon it. Just because the code became safe doesn't mean it was
then part of the API :)

> > +static void restart_work(struct work_struct *work)
> >  {
> > +     struct i915_gpu_restart *arg = container_of(work, typeof(*arg), work);
> > +     struct drm_i915_private *i915 = arg->i915;
> >       struct intel_engine_cs *engine;
> >       enum intel_engine_id id;
> > +     intel_wakeref_t wakeref;
> >  
> > -     lockdep_assert_held(&i915->drm.struct_mutex);
> > +     wakeref = intel_runtime_pm_get(i915);
> > +     mutex_lock(&i915->drm.struct_mutex);
> >  
> > -     i915_retire_requests(i915);
> 
> Can't do this anymore, yes. What will be the effect
> of delaying this and the other explicit retirements?
> Are we more prone to starvation?

No. We risk gen3 spontaneously dying: we have no idea why it needs a
request in the pipeline soon after a reset, and so no idea whether a
potential delay will kill it.

> > +     smp_store_mb(i915->gpu_error.restart, NULL);
> 
> Checkpatch might want a comment for the mb.

Checkpatch is silly about this one. It's precisely because the other
side is unserialised that it's smp_store_mb(). But it's just a glorified
WRITE_ONCE(), not that important.
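
The generic fallback is, roughly, exactly that glorified WRITE_ONCE():

    /* include/asm-generic/barrier.h, roughly */
    #define smp_store_mb(var, value) \
            do { WRITE_ONCE(var, value); smp_mb(); } while (0)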

> >       for_each_engine(engine, i915, id) {
> > -             struct intel_context *ce;
> > -
> > -             reset_engine(engine,
> > -                          engine->hangcheck.active_request,
> > -                          stalled_mask & ENGINE_MASK(id));
> > -             ce = fetch_and_zero(&engine->last_retired_context);
> > -             if (ce)
> > -                     intel_context_unpin(ce);
> > +             struct i915_request *rq;
> >  
> >               /*
> >                * Ostensibily, we always want a context loaded for powersaving,
> >                * so if the engine is idle after the reset, send a request
> >                * to load our scratch kernel_context.
> > -              *
> > -              * More mysteriously, if we leave the engine idle after a reset,
> > -              * the next userspace batch may hang, with what appears to be
> > -              * an incoherent read by the CS (presumably stale TLB). An
> > -              * empty request appears sufficient to paper over the glitch.
> >                */
> > -             if (intel_engine_is_idle(engine)) {
> > -                     struct i915_request *rq;
> > +             if (!intel_engine_is_idle(engine))
> > +                     continue;
> 
> Why did you remove the comment on needing a empty request?

I don't believe it's true any more, and it was detracting from the
emphasis on the idea of restoring a context.

> Also, if the request causing non-idleness could be a troublesome one,
> from a troublesome context, why not just skip the idle check and
> always add one for the kernel ctx?

Hence why I removed the comment. It's just a distraction. This whole
routine needs to be scrapped in favour of avoiding the request
allocation and just leaving the engine in a good state before
restarting. But I haven't thought of a nice way to do that without
allocations; preallocating on the kernel_context may be possible, but we
should also be able to set the engine up without hooking into any
request machinery.
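
For reference, the allocation I'd like to avoid is just an empty request
on the kernel context, something like:

    rq = i915_request_alloc(engine, i915->kernel_context);
    if (!IS_ERR(rq))
            i915_request_add(rq);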

> > +#if 0
> > +#define __do_reset(fn, arg) stop_machine(fn, arg, NULL)
> 
> Let's remove the machinery to select the stop_machine reset, and the #include.

NO. It's there as a very specific reminder that this code has a glaring
caveat, and that a very, very simple fix is the line above.
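
For anyone following along: stop_machine() runs the callback on one CPU
while every other CPU spins with interrupts disabled, so nothing can
race the reset. Heavyweight, but airtight:

    /* include/linux/stop_machine.h: runs @fn(@data) on one CPU with
     * all others parked; @cpus == NULL means any one CPU will do */
    int stop_machine(cpu_stop_fn_t fn, void *data,
                     const struct cpumask *cpus);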

> > +#else
> > +#define __do_reset(fn, arg) fn(arg)
> > +#endif
> > +
> > +static int do_reset(struct drm_i915_private *i915, unsigned int stalled_mask)
> > +{
> > +     struct __i915_reset arg = { i915, stalled_mask };
> > +     int err, i;
> > +
> > +     err = __do_reset(__i915_reset__BKL, &arg);
> > +     for (i = 0; err && i < 3; i++) {
> > +             msleep(100);
> > +             err = __do_reset(__i915_reset__BKL, &arg);
> > +     }
> > +
> > +     return err;
> > +}
> > +
> >  /**
> >   * i915_reset - reset chip after a hang
> >   * @i915: #drm_i915_private to reset
> > @@ -987,31 +962,22 @@ void i915_reset(struct drm_i915_private *i915,
> >  {
> >       struct i915_gpu_error *error = &i915->gpu_error;
> >       int ret;
> > -     int i;
> >  
> >       GEM_TRACE("flags=%lx\n", error->flags);
> >  
> >       might_sleep();
> 
> What will? I didn't spot anything in execlists_reset_prepare.

stop_machine(), or whatever barrier we come up with to provide the same
function, is going to require sleeping to coordinate with userspace
memory transactions.
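
(might_sleep() is just the annotation for that; roughly, from
include/linux/kernel.h, it splats under CONFIG_DEBUG_ATOMIC_SLEEP if we
are ever called from atomic context:)

    # define might_sleep() \
            do { __might_sleep(__FILE__, __LINE__, 0); might_resched(); } while (0)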

> > @@ -1115,18 +1058,16 @@ static inline int intel_gt_reset_engine(struct drm_i915_private *i915,
> >  int i915_reset_engine(struct intel_engine_cs *engine, const char *msg)
> >  {
> >       struct i915_gpu_error *error = &engine->i915->gpu_error;
> > -     struct i915_request *active_request;
> >       int ret;
> >  
> >       GEM_TRACE("%s flags=%lx\n", engine->name, error->flags);
> >       GEM_BUG_ON(!test_bit(I915_RESET_ENGINE + engine->id, &error->flags));
> >  
> > -     active_request = reset_prepare_engine(engine);
> > -     if (IS_ERR_OR_NULL(active_request)) {
> > -             /* Either the previous reset failed, or we pardon the reset. */
> > -             ret = PTR_ERR(active_request);
> > -             goto out;
> > -     }
> > +     if (i915_seqno_passed(intel_engine_get_seqno(engine),
> > +                           intel_engine_last_submit(engine)))
> > +             return 0;
> 
> You seem to have a patch to remove this shortly after, so
> squash?

No. That is quite a significant change in behaviour: we haven't been
poking HW like that up to that patch, and you know how picky HW can be.

It also has an impact on the selftests, and as such you want the
arguments to be concise.
-Chris

