[Intel-gfx] [PATCH 9/9] drm/i915/execlists: Report GPU rendering as IO activity to cpufreq.

Chris Wilson chris at chris-wilson.co.uk
Thu Mar 29 01:01:37 UTC 2018


Quoting Francisco Jerez (2018-03-29 01:32:07)
> Chris Wilson <chris at chris-wilson.co.uk> writes:
> 
> > Quoting Chris Wilson (2018-03-28 20:20:19)
> >> Quoting Francisco Jerez (2018-03-28 19:55:12)
> >> > Hi Chris,
> >> > 
> >> [snip]
> >> > That said, it wouldn't hurt to call each of them once per sequence of
> >> > overlapping requests.  Do you want me to call them from
> >> > execlists_set/clear_active() themselves when bit == EXECLISTS_ACTIVE_USER,
> >> > or at each callsite of execlists_set/clear_active()? [possibly protected
> >> > with a check that execlists_is_active(EXECLISTS_ACTIVE_USER) wasn't
> >> > already the expected value]
> >> 
> >> No, I was thinking of adding an execlists_start()/execlists_stop()
> >> [naming suggestions welcome, begin/end?] where we could hook in
> >> additional bookkeeping.
> >
> > Trying to call execlists_begin() once didn't pan out. It's easier to
> > reuse for similar bookkeeping in future patches if execlists_begin()
> > (or whatever name suits best) is called at the start of each context.
> >
> > Something along the lines of:
> >
> > @@ -374,6 +374,19 @@ execlists_context_status_change(struct i915_request *rq, unsigned long status)
> >                                    status, rq);
> >  }
> >  
> > +static inline void
> > +execlists_begin(struct intel_engine_execlists *execlists,
> 
> I had started writing something along the same lines in my working tree
> called execlists_active_user_begin/end -- which name do you prefer?

Hmm, so the reason why the user distinction exists is that we may
momentarily remain active after the last user context is switched out,
if there is a preemption event still pending. Keeping the user tag does
help maintain that distinction.
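
For reference, execlists->active is a bitmask, with the user state
tracked in a bit separate from the pending-preemption state, which is
how the engine can remain active after USER is cleared. The helpers are
roughly along these lines (a sketch from memory; the exact definitions
live in intel_ringbuffer.h):

#define EXECLISTS_ACTIVE_USER 0
#define EXECLISTS_ACTIVE_PREEMPT 1

static inline void
execlists_set_active(struct intel_engine_execlists *execlists,
                     unsigned int bit)
{
        __set_bit(bit, (unsigned long *)&execlists->active);
}

static inline bool
execlists_set_active_once(struct intel_engine_execlists *execlists,
                          unsigned int bit)
{
        /* Reports true only for the first 0 -> 1 transition of the bit. */
        return !__test_and_set_bit(bit, (unsigned long *)&execlists->active);
}

static inline void
execlists_clear_active(struct intel_engine_execlists *execlists,
                       unsigned int bit)
{
        __clear_bit(bit, (unsigned long *)&execlists->active);
}

So repeated calls while a bit is already set are harmless.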

> 
> > +               struct execlist_port *port)
> > +{
> 
> What do you expect the port argument to be useful for?  Is it ever going
> to be anything other than execlists->port?

There are patches afoot to change that, and since I needed to inspect
the port here, it seemed to tie in nicely with the proposed changes to
execlists_port_complete().
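
That is, execlists_port_complete() would shuffle the remaining ports
down and hand back the new head. A sketch of the direction only (the
body follows the current helper from memory; the return value is the
proposed change):

static inline struct execlist_port *
execlists_port_complete(struct intel_engine_execlists * const execlists,
                        struct execlist_port * const port)
{
        const unsigned int m = execlists->port_mask;

        GEM_DEBUG_BUG_ON(port_index(port, execlists) != 0);
        GEM_DEBUG_BUG_ON(!execlists->active);

        /* Shuffle the remaining ports down, clearing the vacated slot. */
        memmove(port, port + 1, m * sizeof(struct execlist_port));
        memset(port + m, 0, sizeof(struct execlist_port));

        /* Hand back the new head so the caller can peek at what's next. */
        return port;
}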

> > +       execlists_set_active_once(execlists, EXECLISTS_ACTIVE_USER);
> > +}
> > +
> > +static inline void
> > +execlists_end(struct intel_engine_execlists *execlists)
> > +{
> > +       execlists_clear_active(execlists, EXECLISTS_ACTIVE_USER);
> > +}
> > +
> >  static inline void
> >  execlists_context_schedule_in(struct i915_request *rq)
> >  {
> > @@ -710,7 +723,7 @@ static void execlists_dequeue(struct intel_engine_cs *engine)
> >         spin_unlock_irq(&engine->timeline->lock);
> >  
> >         if (submit) {
> > -               execlists_set_active(execlists, EXECLISTS_ACTIVE_USER);
> > +               execlists_begin(execlists, execlists->port);
> >                 execlists_submit_ports(engine);
> >         }
> >  
> > @@ -741,7 +754,7 @@ execlists_cancel_port_requests(struct intel_engine_execlists * const execlists)
> >                 port++;
> >         }
> >  
> > -       execlists_clear_active(execlists, EXECLISTS_ACTIVE_USER);
> 
> This line doesn't seem to exist in my working tree.  I guess it was just
> added?

A few days ago, yes.

> > +       execlists_end(execlists);
> >  }
> >  
> >  static void clear_gtiir(struct intel_engine_cs *engine)
> > @@ -872,7 +885,7 @@ static void execlists_submission_tasklet(unsigned long data)
> >  {
> >         struct intel_engine_cs * const engine = (struct intel_engine_cs *)data;
> >         struct intel_engine_execlists * const execlists = &engine->execlists;
> > -       struct execlist_port * const port = execlists->port;
> > +       struct execlist_port *port = execlists->port;
> >         struct drm_i915_private *dev_priv = engine->i915;
> >         bool fw = false;
> >  
> > @@ -1010,9 +1023,19 @@ static void execlists_submission_tasklet(unsigned long data)
> >  
> >                         GEM_BUG_ON(count == 0);
> >                         if (--count == 0) {
> > +                               /*
> > +                                * On the final event corresponding to the
> > +                                * submission of this context, we expect either
> > +                                * an element-switch event or a completion
> > +                                * event (and on completion, the active-idle
> > +                                * marker). No more preemptions, lite-restores
> > +                                * or otherwise.
> > +                                */
> >                                 GEM_BUG_ON(status & GEN8_CTX_STATUS_PREEMPTED);
> >                                 GEM_BUG_ON(port_isset(&port[1]) &&
> >                                            !(status & GEN8_CTX_STATUS_ELEMENT_SWITCH));
> > +                               GEM_BUG_ON(!port_isset(&port[1]) &&
> > +                                          !(status & GEN8_CTX_STATUS_ACTIVE_IDLE));
> >                                 GEM_BUG_ON(!i915_request_completed(rq));
> >                                 execlists_context_schedule_out(rq);
> >                                 trace_i915_request_out(rq);
> > @@ -1021,17 +1044,14 @@ static void execlists_submission_tasklet(unsigned long data)
> >                                 GEM_TRACE("%s completed ctx=%d\n",
> >                                           engine->name, port->context_id);
> >  
> > -                               execlists_port_complete(execlists, port);
> > +                               port = execlists_port_complete(execlists, port);
> > +                               if (port_isset(port))
> > +                                       execlists_begin(execlists, port);
> 
> Isn't this going to call execlists_begin() roughly once per request?
> What's the purpose if we expect it to be a no-op except for the first
> request submitted after execlists_end()?  Isn't the intention to provide
> a hook for bookkeeping that depends on idle to active and active to idle
> transitions of the hardware?

Because on a context switch I need to update the GPU clocks and update
the tracking for preemption, and those are just the two users I have in
my tree that need to land in the next couple of weeks...

Currently I have,

static inline void
execlists_begin(struct intel_engine_execlists *execlists,
                struct execlist_port *port)
{
        struct intel_engine_cs *engine =
                container_of(execlists, typeof(*engine), execlists);
        struct i915_request *rq = port_request(port);

        /* Idempotent; only the first call since idle flips the bit. */
        execlists_set_active_once(execlists, EXECLISTS_ACTIVE_USER);

        /* Refresh the per-context bookkeeping on every context switch. */
        intel_rps_update_engine(engine, rq->ctx);
        execlists->current_priority = rq_prio(rq);
}

static inline void
execlists_end(struct intel_engine_execlists *execlists)
{
        struct intel_engine_cs *engine =
                container_of(execlists, typeof(*engine), execlists);

        /* Tear down the bookkeeping before marking the engine idle. */
        execlists->current_priority = INT_MIN;
        intel_rps_update_engine(engine, NULL);

        execlists_clear_active(execlists, EXECLISTS_ACTIVE_USER);
}
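
i.e. execlists_begin() is indeed called once per context, but that is
the point: the per-context bookkeeping has to be refreshed on every
context switch, while the set-once bit means the idle -> active
transition itself only fires on the first call after execlists_end().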

> > +                               else
> > +                                       execlists_end(execlists);
> >                         } else {
> >                                 port_set(port, port_pack(rq, count));
> >                         }
> >
> 
> Isn't there an "execlists_clear_active(execlists,
> EXECLISTS_ACTIVE_USER);" call in your tree a few lines below that is now
> redundant?

This is only half the patch to show the gist. :)
-Chris

