[Intel-gfx] [PATCH 3/3] drm/i915/execlists: Suppress redundant preemption

Chris Wilson chris at chris-wilson.co.uk
Wed Jan 23 14:14:05 UTC 2019


Quoting Chris Wilson (2019-01-23 13:47:29)
> Quoting Chris Wilson (2019-01-23 12:36:02)
> > On unwinding the active request we give it a small (limited to internal
> > priority levels) boost to prevent it from being gazumped a second time.
> > However, this means that it can be promoted to above the request that
> > triggered the preemption request, causing a preempt-to-idle cycle for no
> > change. We can avoid this if we take the boost into account when
> > checking if the preemption request is valid.
> > 
> > Signed-off-by: Chris Wilson <chris at chris-wilson.co.uk>
> > ---
> >  drivers/gpu/drm/i915/intel_lrc.c | 29 +++++++++++++++++++++++++----
> >  1 file changed, 25 insertions(+), 4 deletions(-)
> > 
> > diff --git a/drivers/gpu/drm/i915/intel_lrc.c b/drivers/gpu/drm/i915/intel_lrc.c
> > index d9d744f6ab2c..74726f647e47 100644
> > --- a/drivers/gpu/drm/i915/intel_lrc.c
> > +++ b/drivers/gpu/drm/i915/intel_lrc.c
> > @@ -163,6 +163,8 @@
> >  #define WA_TAIL_DWORDS 2
> >  #define WA_TAIL_BYTES (sizeof(u32) * WA_TAIL_DWORDS)
> >  
> > +#define ACTIVE_PRIORITY (I915_PRIORITY_NEWCLIENT)
> > +
> >  static int execlists_context_deferred_alloc(struct i915_gem_context *ctx,
> >                                             struct intel_engine_cs *engine,
> >                                             struct intel_context *ce);
> > @@ -181,13 +183,31 @@ static inline int rq_prio(const struct i915_request *rq)
> >         return rq->sched.attr.priority;
> >  }
> >  
> > +static inline int active_prio(const struct i915_request *rq)
> > +{
> > +       int prio = rq_prio(rq);
> > +
> > +       /*
> > +        * On unwinding the active request, we give it a priority bump
> > +        * equivalent to a freshly submitted request. This protects it from
> > +        * being gazumped again, but it would be preferable if we didn't
> > +        * let it be gazumped in the first place!
> > +        *
> > +        * See __unwind_incomplete_requests()
> > +        */
> > +       if (i915_request_started(rq))
> > +               prio |= ACTIVE_PRIORITY;
> 
> Hmm, actually we are put to the tail of that priolist so we don't get
> rerun ahead of the preemptee if we end up at the same priority.
> ACTIVE_PRIORITY - 1 would seem to be the right compromise.
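
To spell out the tie case that makes the "- 1" tempting (a hypothetical
worked example; it assumes the preemption check compares the incoming
priority against active_prio() of the running request, as the commit
message describes):

/*
 * Hypothetical tie, with P the running request's current priority:
 *
 *   running request:  rq_prio(last) = P, active_prio(last) = P | NEWCLIENT
 *   incoming request: prio = P | NEWCLIENT (e.g. a fresh client at the
 *                     same user priority)
 *
 * prio is not strictly greater than active_prio(last), so preemption is
 * suppressed.  Yet had we preempted, the unwound request would have been
 * requeued at P | NEWCLIENT at the *tail* of that priolist, i.e. behind
 * the incoming request, so the switch would not have been redundant.
 * Treating the running request as P | (ACTIVE_PRIORITY - 1) in the check
 * lets this tie preempt, which is the compromise suggested above.
 */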

gem_sync/switch-default says ACTIVE_PRIORITY though. Hmm.
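
For reference, the hunk that actually consumes active_prio() is not part
of the quote above; the check described in the commit message amounts to
something like the sketch below. The need_preempt() shape here is an
assumed reconstruction from intel_lrc.c, not text taken from the patch:

static inline bool need_preempt(const struct intel_engine_cs *engine,
				const struct i915_request *last,
				int prio)
{
	/*
	 * Sketch only: compare the incoming queue priority against the
	 * priority the active request would be requeued with after an
	 * unwind (rq_prio() plus the ACTIVE_PRIORITY boost), so that we
	 * do not go through a preempt-to-idle cycle only to pick the
	 * same request again once its boost is applied.
	 */
	return intel_engine_has_preemption(engine) &&
	       prio > active_prio(last) &&	/* previously rq_prio(last) */
	       !i915_request_completed(last);
}
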
-Chris

