[Intel-gfx] [PATCH 02/28] drm/i915: Rename execlists->queue_priority to preempt_priority_hint

Chris Wilson chris at chris-wilson.co.uk
Mon Jan 28 11:04:23 UTC 2019


Quoting Tvrtko Ursulin (2019-01-28 10:56:24)
> 
> On 28/01/2019 01:02, Chris Wilson wrote:
> > After noticing that we trigger preemption events for currently executing
> > requests, as well as requests that complete before the preemption and
> > attempting to suppress those preemption events, it is wise to not
> > consider the queue_priority to be authoritative. As we only track the
> > maximum priority seen between dequeue passes, if the maximum priority
> > request is no longer available for dequeuing (it completed or is even
> > executing on another engine), we have no knowledge of the previous
> > queue_priority as it would require us to keep a full history of enqueued
> > requests -- but we already have that history in the priolists!
> > 
> > Rename the queue_priority to preempt_priority_hint so that we do not
> > confuse it as being the maximum priority in the queue, but merely an
> > indication that we have seen a new maximum priority value and as such we
> > should check whether it should preempt the currently running request.
> > 
> > Signed-off-by: Chris Wilson <chris at chris-wilson.co.uk>
> > Cc: Tvrtko Ursulin <tvrtko.ursulin at intel.com>
> > ---
> >   drivers/gpu/drm/i915/i915_scheduler.c       | 11 +++++------
> >   drivers/gpu/drm/i915/intel_engine_cs.c      |  4 ++--
> >   drivers/gpu/drm/i915/intel_guc_submission.c |  5 +++--
> >   drivers/gpu/drm/i915/intel_lrc.c            | 20 +++++++++++---------
> >   drivers/gpu/drm/i915/intel_ringbuffer.h     |  8 ++++++--
> >   5 files changed, 27 insertions(+), 21 deletions(-)
> > 
> > diff --git a/drivers/gpu/drm/i915/i915_scheduler.c b/drivers/gpu/drm/i915/i915_scheduler.c
> > index 340faea6c08a..0da718ceab43 100644
> > --- a/drivers/gpu/drm/i915/i915_scheduler.c
> > +++ b/drivers/gpu/drm/i915/i915_scheduler.c
> > @@ -127,8 +127,7 @@ static inline struct i915_priolist *to_priolist(struct rb_node *rb)
> >       return rb_entry(rb, struct i915_priolist, node);
> >   }
> >   
> > -static void assert_priolists(struct intel_engine_execlists * const execlists,
> > -                          long queue_priority)
> > +static void assert_priolists(struct intel_engine_execlists * const execlists)
> >   {
> >       struct rb_node *rb;
> >       long last_prio, i;
> > @@ -139,7 +138,7 @@ static void assert_priolists(struct intel_engine_execlists * const execlists,
> >       GEM_BUG_ON(rb_first_cached(&execlists->queue) !=
> >                  rb_first(&execlists->queue.rb_root));
> >   
> > -     last_prio = (queue_priority >> I915_USER_PRIORITY_SHIFT) + 1;
> > +     last_prio = (INT_MAX >> I915_USER_PRIORITY_SHIFT) + 1;
> >       for (rb = rb_first_cached(&execlists->queue); rb; rb = rb_next(rb)) {
> >               const struct i915_priolist *p = to_priolist(rb);
> >   
> > @@ -166,7 +165,7 @@ i915_sched_lookup_priolist(struct intel_engine_cs *engine, int prio)
> >       int idx, i;
> >   
> >       lockdep_assert_held(&engine->timeline.lock);
> > -     assert_priolists(execlists, INT_MAX);
> > +     assert_priolists(execlists);
> >   
> >       /* buckets sorted from highest [in slot 0] to lowest priority */
> >       idx = I915_PRIORITY_COUNT - (prio & I915_PRIORITY_MASK) - 1;
> > @@ -353,7 +352,7 @@ static void __i915_schedule(struct i915_request *rq,
> >                               continue;
> >               }
> >   
> > -             if (prio <= engine->execlists.queue_priority)
> > +             if (prio <= engine->execlists.preempt_priority_hint)
> >                       continue;
> >   
> >               /*
> > @@ -366,7 +365,7 @@ static void __i915_schedule(struct i915_request *rq,
> >                       continue;
> >   
> >               /* Defer (tasklet) submission until after all of our updates. */
> > -             engine->execlists.queue_priority = prio;
> > +             engine->execlists.preempt_priority_hint = prio;
> 
> I am wondering whether stopping tracking the queue priority here, and 
> making it mean one thing only, would simplify things.

No, it doesn't simplify things. We already track queue_priority (it's the
queue!). What we need is a very quick test of whether we even need to
consider preemption.
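
As a rough, self-contained sketch of that quick test (the names below are
placeholders, not the actual i915 identifiers, and the real check also
guards against completed requests and engines without preemption support):

    #include <stdbool.h>

    /* Illustrative stand-in; the real intel_engine_execlists is richer. */
    struct execlists_sketch {
            int preempt_priority_hint; /* max priority seen since last dequeue */
    };

    /*
     * The cheap early-out: only if the hint beats the priority of the
     * request in port0 is it worth running the dequeue/preemption
     * machinery at all. A false positive here is harmless, because
     * dequeue re-checks against the priolists, which remain the source
     * of truth.
     */
    static bool worth_considering_preemption(const struct execlists_sketch *el,
                                             int port0_prio)
    {
            return el->preempt_priority_hint > port0_prio;
    }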
 
> We make queue_priority strictly track the priority of whatever is in 
> port0 only, updated on dequeue and after context switch. Ie. 
> execlists.queue_priority gets the meaning of "top of the hw backend 
> queue priority".

That is already tracked in port0.
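
A minimal sketch of that point, assuming a simplified stand-in for the
execlists ports (in the real code the priority is read from the request
held in port0 via its scheduling attributes, not from a separate copy):

    #include <limits.h>

    struct request_sketch {
            int priority; /* stand-in for rq->sched.attr.priority */
    };

    struct ports_sketch {
            struct request_sketch *port[2]; /* port[0] holds the executing request */
    };

    /* The priority of whatever is executing is already recoverable here. */
    static int port0_prio(const struct ports_sketch *el)
    {
            return el->port[0] ? el->port[0]->priority : INT_MIN;
    }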
 
> For the purpose of kicking the tasklet that should work I think. It 
> wouldn't interrupt the port0 rq, and then on CS, dequeue would inspect 
> the real queue and see it there is need to preempt.

... hence the patches ...
 
> At the end of __i915_schedule we peek at top of the queue and decide 
> whether to kick the tasklet.
> 
> So we end up with two heads of queue priority. The HW backend one, and 
> the scheduling tree. Which seems like a clear separation of duties.
> 
> need_preempt() on dequeue then compares the two priorities only. Just 
> needs additional protection against not preempting the same context.
> 
> I hope I did not miss something, what do you think?

That we really, really need this patch, as it is evidently quite easy to
confuse the purpose of what is being tracked here with what we track for
the HW status and the various queues.
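
To illustrate what the rename is meant to convey, a hedged sketch of the
field and its intended meaning (the comment wording is illustrative, not
quoted from the patch):

    struct intel_engine_execlists_sketch {
            /*
             * Highest priority seen since the last dequeue pass. Only an
             * indication that preemption may be worth checking; it is not
             * the authoritative maximum of the queue (that request may
             * have completed or moved to another engine), so the
             * priolists must still be consulted on dequeue.
             */
            int preempt_priority_hint;
    };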

And that the actual priority value of port0 does not reflect its
post-preemption value! Nor do we accurately know the preemption hint
after applying any suppression or veng.
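
As a sketch of where an accurate value would come from instead, assuming a
simplified stand-in for the priolist buckets (the real queue is an rbtree
of i915_priolist nodes, not a fixed array):

    #include <limits.h>

    #define SKETCH_BUCKETS 4

    struct queue_sketch {
            int count[SKETCH_BUCKETS]; /* pending requests per bucket */
            int prio[SKETCH_BUCKETS];  /* bucket priorities, highest first */
    };

    /*
     * After dequeue, re-derive the hint from the highest-priority
     * non-empty bucket instead of trusting the stale value recorded at
     * enqueue time; the priolists already hold the full history.
     */
    static int refresh_hint(const struct queue_sketch *q)
    {
            int i;

            for (i = 0; i < SKETCH_BUCKETS; i++)
                    if (q->count[i])
                            return q->prio[i];

            return INT_MIN; /* empty queue: nothing to preempt for */
    }
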
-Chris

