[Intel-gfx] [PATCH 01/19] drm/i915/execlists: Always clear ring_pause if we do not submit

Chris Wilson chris at chris-wilson.co.uk
Mon Jun 24 09:09:59 UTC 2019


Quoting Mika Kuoppala (2019-06-24 10:03:48)
> Chris Wilson <chris at chris-wilson.co.uk> writes:
> 
> > In the unlikely case (thank you CI!), we may find ourselves wanting to
> > issue a preemption but having no runnable requests left. In this case,
> > we set the semaphore before computing the preemption and so must unset
> > it before returning (or else we leave the machine busywaiting until the
> > next request comes along, and so likely hang).
> >
> > Signed-off-by: Chris Wilson <chris at chris-wilson.co.uk>
> > ---
> >  drivers/gpu/drm/i915/gt/intel_lrc.c | 9 ++++++++-
> >  1 file changed, 8 insertions(+), 1 deletion(-)
> >
> > diff --git a/drivers/gpu/drm/i915/gt/intel_lrc.c b/drivers/gpu/drm/i915/gt/intel_lrc.c
> > index c8a0c9b32764..efccc31887de 100644
> > --- a/drivers/gpu/drm/i915/gt/intel_lrc.c
> > +++ b/drivers/gpu/drm/i915/gt/intel_lrc.c
> > @@ -233,13 +233,18 @@ static inline u32 intel_hws_preempt_address(struct intel_engine_cs *engine)
> >  static inline void
> >  ring_set_paused(const struct intel_engine_cs *engine, int state)
> >  {
> > +     u32 *sema = &engine->status_page.addr[I915_GEM_HWS_PREEMPT];
> > +
> > +     if (*sema == state)
> > +             return;
> > +
> 
> So you want to avoid a useless wmb(), as I don't see any other
> benefit. This makes it look suspiciously racy, but that seems
> to be just my usual paranoia.

It's always set under the execlists spinlock.
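
For example, the dequeue path reaches ring_set_paused() with the engine
lock held -- roughly (a sketch; the exact lock name here is from memory
of the preempt-to-busy series, so treat it as an assumption):

	/* all writers of the HWS_PREEMPT dword are serialised on this lock */
	spin_lock_irqsave(&engine->active.lock, flags);
	execlists_dequeue(engine); /* may call ring_set_paused() */
	spin_unlock_irqrestore(&engine->active.lock, flags);

so the early-out only elides a redundant write and its wmb(); it never
races another writer.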

> >       /*
> >        * We inspect HWS_PREEMPT with a semaphore inside
> >        * engine->emit_fini_breadcrumb. If the dword is true,
> >        * the ring is paused as the semaphore will busywait
> >        * until the dword is false.
> >        */
> > -     engine->status_page.addr[I915_GEM_HWS_PREEMPT] = state;
> > +     *sema = state;
> >       wmb();
> >  }
> >  
> > @@ -1243,6 +1248,8 @@ static void execlists_dequeue(struct intel_engine_cs *engine)
> >               *port = execlists_schedule_in(last, port - execlists->pending);
> >               memset(port + 1, 0, (last_port - port) * sizeof(*port));
> >               execlists_submit_ports(engine);
> > +     } else {
> > +             ring_set_paused(engine, 0);
> 
> This looks like the right thing to do. But why did we end up
> getting it wrong in need_preempt()?

It's because we didn't find anything else that needed the preemption
after checking what came next in the queue -- it had already been
completed by an earlier submission.
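
Condensed, the dequeue flow in question is (a paraphrase of
execlists_dequeue(), not the verbatim code):

	if (last && need_preempt(engine, last)) {
		/* arm the semaphore: the CS busywaits on HWS_PREEMPT */
		ring_set_paused(engine, 1);
		/* ... prepare the preemption ... */
	}

	/* walk the queue; requests that have already completed are skipped */

	if (submit) {
		execlists_submit_ports(engine);
	} else {
		/* nothing left to run: disarm, or the busywait never ends */
		ring_set_paused(engine, 0);
	}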
 
> One would think that if there were nothing to preempt into,
> we would never set the pause in the first place.

I hear you -- we try very hard to not even look for preemption.
False preemption cycles show up as bad scheduling behaviour for
saturated transcode jobs.
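
For context, the guards at the top of need_preempt() look roughly like
this (paraphrased from memory; treat the exact conditions as
assumptions):

	static bool need_preempt(const struct intel_engine_cs *engine,
				 const struct i915_request *rq)
	{
		const int last_prio = effective_prio(rq);

		if (i915_request_completed(rq))
			return false;

		/* only preempt if the queue can beat the running request */
		if (engine->execlists.queue_priority_hint <= last_prio)
			return false;

		/* ... then inspect what is actually queued next ... */

		return true;
	}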
 
> Also, the preempt-to-idle cycle mentioned in effective_prio()
> seems to be off. Could be that someone forgot to
> point that out when reviewing preempt-to-busy.

Preempt-to-busy still has an effective idle point :-p
-Chris
