[Intel-gfx] [PATCH] i915/pmu: Wire GuC backend to per-client busyness

Dixit, Ashutosh ashutosh.dixit at intel.com
Thu Aug 25 06:18:14 UTC 2022


On Wed, 15 Jun 2022 10:42:08 -0700, Umesh Nerlige Ramappa wrote:
>
> >>>> +static void __guc_context_update_clks(struct intel_context *ce)
> >>>> +{
> >>>> +    struct intel_guc *guc = ce_to_guc(ce);
> >>>> +    struct intel_gt *gt = ce->engine->gt;
> >>>> +    u32 *pphwsp, last_switch, engine_id;
> >>>> +    u64 start_gt_clk = 0, active = 0;
> >>>
> >>> No need to init these two.
> >>>
> >>>> +    unsigned long flags;
> >>>> +    ktime_t unused;
> >>>> +
> >>>> +    spin_lock_irqsave(&guc->timestamp.lock, flags);
> >>>> +
> >>>> +    pphwsp = ((void *)ce->lrc_reg_state) - LRC_STATE_OFFSET;
> >>>> +    last_switch = READ_ONCE(pphwsp[PPHWSP_GUC_CONTEXT_USAGE_STAMP_LO]);
> >>>> +    engine_id = READ_ONCE(pphwsp[PPHWSP_GUC_CONTEXT_USAGE_ENGINE_ID]);
> >>>> +
> >>>> +    guc_update_pm_timestamp(guc, &unused);
> >>>> +
> >>>> +    if (engine_id != 0xffffffff && last_switch) {
> >>>> +        start_gt_clk = READ_ONCE(ce->stats.runtime.start_gt_clk);
> >>>> +        __extend_last_switch(guc, &start_gt_clk, last_switch);
> >>>> +        active = intel_gt_clock_interval_to_ns(gt,
> >>>> +                guc->timestamp.gt_stamp - start_gt_clk);
> >>>> +        WRITE_ONCE(ce->stats.runtime.start_gt_clk, start_gt_clk);
> >>>> +        WRITE_ONCE(ce->stats.active, active);
> >>>> +    } else {
> >>>> +        lrc_update_runtime(ce);
> >>>
> >>> Why is this called from here? Presumably it was called already from
> >>> guc_context_unpin if here code things context is not active. Or will be
> >>> called shortly, once context save is done.
> >>
> >> guc_context_unpin is only called in the ce->sched_disable path.
> >> sched_disable is implemented as an H2G message to GuC; only once the
> >> corresponding G2H response is received is the context actually
> >> unpinned, eventually calling guc_context_unpin. Also, the context is
> >> not necessarily disabled after each context exit.
> >
> > So if I understand correctly, lrc runtime is only updated if someone is
> > reading the busyness and not as part of normal context state transitions?
>
> If you mean context_in/out events (like CSB interrupts), only GuC can see
> those events; the KMD has no visibility into them. These three paths call
> lrc_update_runtime:
>
> user query: (engine_id != 0xffffffff && last_switch) indicates that GuC is
> between the context_in and context_out events for this context, so updating
> the runtime outside of that window is one way to report correct busyness.
>
> worker: guc_timestamp_ping() also updates context stats (infrequently) for
> all contexts, primarily to handle timestamp overflows.
>
> context unpin: Existing code calls lrc_update_runtime only when unpinning
> the context, which takes care of accumulating busyness when requests are
> retired.

Will adding lrc_update_runtime() to lrc_unpin() work?

