[Intel-gfx] [PATCH v2] drm/i915/gt: Use virtual_engine during execlists_dequeue
Chris Wilson
chris at chris-wilson.co.uk
Mon May 18 13:09:04 UTC 2020
Quoting Tvrtko Ursulin (2020-05-18 14:01:27)
>
> On 18/05/2020 13:33, Chris Wilson wrote:
> > +static struct virtual_engine *
> > +first_virtual_engine(struct intel_engine_cs *engine)
> > +{
> > + struct intel_engine_execlists *el = &engine->execlists;
> > + struct rb_node *rb = rb_first_cached(&el->virtual);
> > +
> > + while (rb) {
> > + struct virtual_engine *ve =
> > + rb_entry(rb, typeof(*ve), nodes[engine->id].rb);
> > + struct i915_request *rq = READ_ONCE(ve->request);
> > +
> > + if (!rq) { /* lazily cleanup after another engine handled rq */
> > + rb_erase_cached(rb, &el->virtual);
> > + RB_CLEAR_NODE(rb);
> > + rb = rb_first_cached(&el->virtual);
> > + continue;
> > + }
> > +
> > + if (!virtual_matches(ve, rq, engine)) {
> > + rb = rb_next(rb);
> > + continue;
> > + }
> > +
> > + return ve;
> > + }
> > +
> > + return NULL;
> > +}
> > - while (rb) { /* XXX virtual is always taking precedence */
> > - struct virtual_engine *ve =
> > - rb_entry(rb, typeof(*ve), nodes[engine->id].rb);
> > + while (ve) { /* XXX virtual is always taking precedence */
> > struct i915_request *rq;
> >
> > spin_lock(&ve->base.active.lock);
> >
> > rq = ve->request;
> > - if (unlikely(!rq)) { /* lost the race to a sibling */
> > - spin_unlock(&ve->base.active.lock);
> > - rb_erase_cached(rb, &execlists->virtual);
> > - RB_CLEAR_NODE(rb);
> > - rb = rb_first_cached(&execlists->virtual);
> > - continue;
> > - }
> > + if (unlikely(!rq)) /* lost the race to a sibling */
> > + goto unlock;
>
> Doesn't this now rely on a later patch to clear the node?
This is cleared by first_virtual_engine
> >
> > GEM_BUG_ON(rq != ve->request);
> > GEM_BUG_ON(rq->engine != &ve->base);
> > GEM_BUG_ON(rq->context != &ve->context);
> >
> > - if (rq_prio(rq) >= queue_prio(execlists)) {
> > - if (!virtual_matches(ve, rq, engine)) {
> > - spin_unlock(&ve->base.active.lock);
> > - rb = rb_next(rb);
> > - continue;
> > - }
> > + if (rq_prio(rq) < queue_prio(execlists)) {
> > + spin_unlock(&ve->base.active.lock);
> > + break;
> > + }
> >
> > - if (last && !can_merge_rq(last, rq)) {
> > - spin_unlock(&ve->base.active.lock);
> > - start_timeslice(engine, rq_prio(rq));
> > - return; /* leave this for another sibling */
> > - }
> > + GEM_BUG_ON(!virtual_matches(ve, rq, engine));
>
> This as well.
As first_virtual_engine skips non-matching nodes, a node should only
become non-matching during the unlocked phase after the lookup, i.e. if
it is claimed by another engine. In that case ve->request is set to
NULL and the !rq check above catches it.
So I think this patch stands by itself.
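(For the record, the pattern being relied on is: an unlocked scan picks a
candidate, and the request is then re-read under the lock to detect a
sibling winning the race in between. A minimal standalone sketch of that
pattern, with hypothetical names (struct mini_ve, first_with_request,
claim_request) standing in for the driver structures:)

```c
#include <assert.h>
#include <pthread.h>
#include <stddef.h>

/* Hypothetical miniature of the lookup/claim protocol: ->request is
 * read locklessly during the scan, then revalidated under the lock
 * before being consumed. */
struct mini_ve {
	pthread_mutex_t lock;
	void *request;
};

/* Unlocked phase: skip entries whose request is already gone
 * (analogous to first_virtual_engine erasing empty nodes). */
static struct mini_ve *first_with_request(struct mini_ve *ves, int n)
{
	for (int i = 0; i < n; i++)
		if (__atomic_load_n(&ves[i].request, __ATOMIC_RELAXED))
			return &ves[i];
	return NULL;
}

/* Locked phase: re-read under the lock; a NULL here means we lost
 * the race to a sibling after the unlocked scan. */
static void *claim_request(struct mini_ve *ve)
{
	void *rq;

	pthread_mutex_lock(&ve->lock);
	rq = ve->request;	/* may have become NULL in the window */
	ve->request = NULL;	/* claim it for this engine */
	pthread_mutex_unlock(&ve->lock);
	return rq;
}
```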
-Chris