[Intel-gfx] [PATCH v3] drm/i915/execlists: Reclaim the hanging virtual request

Chris Wilson chris at chris-wilson.co.uk
Tue Jan 21 14:07:14 UTC 2020


Quoting Tvrtko Ursulin (2020-01-21 13:55:29)
> 
> 
> On 21/01/2020 13:04, Chris Wilson wrote:
> > +             GEM_BUG_ON(!reset_in_progress(&engine->execlists));
> > +
> > +             /*
> > +              * An unsubmitted request along a virtual engine will
> > +              * remain on the active (this) engine until we are able
> > +              * to process the context switch away (and so mark the
> > +              * context as no longer in flight). That cannot have happened
> > +              * yet, otherwise we would not be hanging!
> > +              */
> > +             spin_lock_irqsave(&ve->base.active.lock, flags);
> > +             GEM_BUG_ON(intel_context_inflight(rq->context) != engine);
> > +             GEM_BUG_ON(ve->request != rq);
> > +             ve->request = NULL;
> > +             spin_unlock_irqrestore(&ve->base.active.lock, flags);
> > +
> > +             rq->engine = engine;
> 
> Let's see if I understand this... the tasklet has been disabled and the
> ring paused. But we find an uncompleted request in the ELSP context,
> with rq->engine == virtual engine. Therefore this cannot be the first
> request on this timeline but has to be a later one.

Not quite.

engine->execlists.active[] tracks the HW; it gets updated only upon
receiving HW acks (or when we reset).
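
For reference, the accessor is just a pointer chase into the inflight[]
array (quoting roughly from memory of intel_engine.h, so treat the
exact form as approximate):

static inline struct i915_request *
execlists_active(const struct intel_engine_execlists *execlists)
{
	/* Advanced from process_csb() on a HW ack, or rewound on reset */
	return READ_ONCE(*execlists->active);
}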

So if execlists_active()->engine == virtual, it can only mean that the
inflight _hanging_ request has already been unsubmitted by an earlier
preemption in execlists_dequeue(), but that preemption has not yet been
processed by the HW. (Hence the preemption-reset underway.)
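
In code terms, the state we find in the reset path looks something like
this (just a sketch, the hunk quoted at the top is the authoritative
version; to_virtual_engine() is the existing helper in intel_lrc.c):

	struct i915_request *rq = execlists_active(&engine->execlists);

	if (rq && !i915_request_completed(rq) && rq->engine != engine) {
		/*
		 * rq->engine still points at the virtual engine: the
		 * request was unsubmitted by a preemption issued from
		 * execlists_dequeue() that the HW never processed,
		 * hence the timeout and this reset. Reclaim it as in
		 * the hunk quoted above.
		 */
		struct virtual_engine *ve = to_virtual_engine(rq->engine);

		GEM_BUG_ON(ve->request != rq);
	}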

Now while we coalesce the requests for a context into a single ELSP[]
slot, and only record the last request submitted for a context, we have
to walk back along that context's timeline to find the earliest
incomplete request and blame the hang upon it.
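
Something along these lines (illustrative; the locking/RCU around the
timeline list is elided):

	struct i915_request *active = NULL;
	struct list_head *list = &rq->context->timeline->requests;

	/* rq is the last request submitted for the context */
	list_for_each_entry_from_reverse(rq, list, link) {
		if (i915_request_completed(rq))
			break;

		active = rq; /* oldest incomplete request seen so far */
	}
	/* 'active', if set, is the earliest incomplete request to blame */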

For a virtual engine, it's much simpler as there is only ever one
request in flight, but I don't think that has any impact here other
than that we only need to repair the single unsubmitted request that was
returned to the virtual engine.
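
(For background, the virtual engine only ever carries a single pending
slot; very roughly, and eliding the locking, the sibling kicking and the
stale-request handling:)

static void virtual_submit_request(struct i915_request *rq)
{
	struct virtual_engine *ve = to_virtual_engine(rq->engine);

	/*
	 * A single request slot: nothing else is submitted along the
	 * virtual engine until execlists_dequeue() on one of the
	 * physical siblings claims this one, so this slot is all we
	 * ever have to repair after a hang.
	 */
	GEM_BUG_ON(ve->request);
	ve->request = rq;
}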

> One which has been put on the runqueue but not yet submitted to the HW
> (because only one at a time). Or it has been unsubmitted by
> __unwind_incomplete_requests already. In the former case, why move it
> to the physical engine? And in the latter case, it would actually
> overwrite rq->engine with the physical one.

Yes. For an incomplete preemption event, the request is *still* on this
engine and has not been released (rq->context->inflight == engine, so it
cannot be submitted to any other engine until after we acknowledge that
the context has been saved and is no longer being accessed by the HW).
It is legal for us to process the hanging request along this engine;
returning the request to the same engine after the reset is a suboptimal
decision, but since we have replaced the hanging payload, the request is
a mere signaling placeholder (and I do not think it will overly burden
the system or negatively impact other virtual engines).
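
The pinning itself is just ce->inflight with its low bits (an inflight
count) masked out; the intel_context_inflight() used in the hunk above
boils down to roughly:

static inline struct intel_engine_cs *
context_inflight(struct intel_context *ce)
{
	/*
	 * ce->inflight records which physical engine currently holds
	 * the context in its ELSP; it is only cleared by process_csb()
	 * once the HW acks the context switch away, so until then the
	 * request cannot migrate to any other sibling.
	 */
	return ptr_mask_bits(READ_ONCE(ce->inflight), 2);
}
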
-Chris

