[Intel-gfx] [RFC PATCH 2/2] Fix per client busyness locking

Dixit, Ashutosh ashutosh.dixit at intel.com
Wed Sep 7 07:26:10 UTC 2022


On Tue, 06 Sep 2022 11:29:15 -0700, Umesh Nerlige Ramappa wrote:
>
> On Thu, Sep 01, 2022 at 04:55:22PM -0700, Dixit, Ashutosh wrote:
> > On Wed, 31 Aug 2022 15:45:49 -0700, Umesh Nerlige Ramappa wrote:
> >>
> >
> > Hi Umesh,
> >
> > I have updated my RFC patch based on your feedback so we can discuss again.
> >
> >> On Wed, Aug 31, 2022 at 12:33:55PM -0700, Ashutosh Dixit wrote:
> >> > 1. Do all ce->stats updates and reads under guc->timestamp.lock
> >>
> >> Other than the question about READ_ONCE/WRITE_ONCE, I am not sure I
> >> understand what's broken in the locking logic here. Can you please add some
> >> description? thanks
> >
> > Basically ce->stats.runtime.total and ce->stats.active are tied together so
> > should be accessed/updated atomically. Also just the update of
> > ce->stats.runtime.total (via lrc_update_runtime()) can have multiple
> > concurrent writers so even that has to be protected (since that update is
> > not atomic).
> >
> > This was the reason for:
> > * Introducing lrc_update_runtime_locked
> > * Returning early from intel_context_get_total_runtime_ns, otherwise we'd
> >  need to introduce the same locking there
> > * Enforcing locking in guc_context_update_stats (which was there in your
> >  original patch too)
> >
> > (I think the execlists code didn't have this issue because there
> > lrc_update_runtime is called from a tasklet (which iirc behaves like a
> > single thread/cpu). I am not sure how 'active' is updated in execlists
> > mode, so there may or may not be a problem in
> > intel_context_get_total_runtime_ns).
> >
> > I have another question: what about that GPU vs. GuC race mentioned in the
> > comment? What is the sequence of events there between GPU updating the
> > timestamp in the context image vs. GuC setting engine_id to -1 (at least
> > what is the sequence in which GPU and GuC are supposed to do these
> > updates, though it may not matter to i915 due to scheduling delays)?
>
> With GPU, KMD and GuC running concurrently, after a context is done, this
> is the sequence.
>
> GPU) updates context image (total_runtime)
> KMD) reads total_runtime.
> KMD) sees engine_id is valid. KMD uses start_time from pphwsp to calculate
> active_time.
> KMD) returns total_runtime + active_time (double accounted busyness!!)
> GuC) gets ctxt switchout event and sets engine_id and start_time to ~0

Thanks for the explanation, so yes I understand the double accounting now.
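
Just to spell out the window in code, here is a minimal pseudo-C sketch of
the query side (illustrative only; query_busyness(), context_looks_active(),
now_ns() and start_time_ns() are made-up names, not the actual driver code):

u64 query_busyness(struct intel_context *ce)
{
	/* GPU has already folded the last runtime into the context image */
	u64 total = ce->stats.runtime.total;
	u64 active = 0;

	/*
	 * Race window: GuC has not yet set engine_id/start_time to ~0, so
	 * the context still looks active and we add the interval the GPU
	 * just accounted for a second time.
	 */
	if (context_looks_active(ce))	/* engine_id still valid */
		active = now_ns() - start_time_ns(ce);

	return total + active;		/* double accounted */
}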

>
> This is not rare when running the IGT tests, so the total runtime is
> updated in the query only if engine_id indicates the context is idle.
> Continued below ...
>
> >
> >>
> >> > 2. Pin context image before reading
> >> > 3. Merge __guc_context_update_clks and guc_context_update_stats into a
> >> >   single function
> >> > 4. Call lrc_update_runtime() unconditionally in guc_context_update_stats
> >> > 5. Seems no need to update ce->stats.active with this approach
> >> >
> >> > Some of the above steps may not be correct or complete but the idea is to
> >> > discuss/improve the original patch.
> >> >
> >> > Cc: Umesh Nerlige Ramappa <umesh.nerlige.ramappa at intel.com>
> >> > Signed-off-by: Ashutosh Dixit <ashutosh.dixit at intel.com>
> >> > ---
> >> > drivers/gpu/drm/i915/gt/intel_context.c       |  2 +-
> >> > drivers/gpu/drm/i915/gt/intel_context_types.h |  2 +-
> >> > .../gpu/drm/i915/gt/uc/intel_guc_submission.c | 41 ++++++++++---------
> >> > 3 files changed, 24 insertions(+), 21 deletions(-)
> >> >
> >> > diff --git a/drivers/gpu/drm/i915/gt/intel_context.c b/drivers/gpu/drm/i915/gt/intel_context.c
> >> > index e2d70a9fdac0..c004f676de27 100644
> >> > --- a/drivers/gpu/drm/i915/gt/intel_context.c
> >> > +++ b/drivers/gpu/drm/i915/gt/intel_context.c
> >> > @@ -581,7 +581,7 @@ u64 intel_context_get_total_runtime_ns(struct intel_context *ce)
> >> >	u64 total, active;
> >> >
> >> >	if (ce->ops->update_stats)
> >> > -		ce->ops->update_stats(ce);
> >> > +		return ce->ops->update_stats(ce);
> >>
> >> This is an improvement that we can add, and eventually we can make this a
> >> GuC-specific vfunc. Doing so may also remove the COPS_RUNTIME_ACTIVE_TOTAL
> >> option that I added to the GuC-specific context ops.
> >
> > I went ahead and did this in v2 of the RFC patch.
> >
> >> >
> >> >	total = ce->stats.runtime.total;
> >> >	if (ce->ops->flags & COPS_RUNTIME_CYCLES)
> >> > diff --git a/drivers/gpu/drm/i915/gt/intel_context_types.h b/drivers/gpu/drm/i915/gt/intel_context_types.h
> >> > index f7ff4c7d81c7..699435bfff99 100644
> >> > --- a/drivers/gpu/drm/i915/gt/intel_context_types.h
> >> > +++ b/drivers/gpu/drm/i915/gt/intel_context_types.h
> >> > @@ -59,7 +59,7 @@ struct intel_context_ops {
> >> >
> >> >	void (*sched_disable)(struct intel_context *ce);
> >> >
> >> > -	void (*update_stats)(struct intel_context *ce);
> >> > +	u64 (*update_stats)(struct intel_context *ce);
> >> >
> >> >	void (*reset)(struct intel_context *ce);
> >> >	void (*destroy)(struct kref *kref);
> >> > diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
> >> > index bee8cf10f749..40d0edaf2e0a 100644
> >> > --- a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
> >> > +++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
> >> > @@ -1376,7 +1376,6 @@ static void __update_guc_busyness_stats(struct intel_guc *guc)
> >> >	spin_unlock_irqrestore(&guc->timestamp.lock, flags);
> >> > }
> >> >
> >> > -static void __guc_context_update_clks(struct intel_context *ce);
> >> > static void guc_timestamp_ping(struct work_struct *wrk)
> >> > {
> >> >	struct intel_guc *guc = container_of(wrk, typeof(*guc),
> >> > @@ -1401,7 +1400,8 @@ static void guc_timestamp_ping(struct work_struct *wrk)
> >> >
> >> >	/* adjust context stats for overflow */
> >> >	xa_for_each(&guc->context_lookup, index, ce)
> >> > -		__guc_context_update_clks(ce);
> >> > +		if (ce->ops->update_stats)
> >> > +			ce->ops->update_stats(ce);
> >>
> >> context pinning logic needs to be separated for this (ping) path vs. the
> >> query path - intel_context_get_total_runtime_ns().
> >
> > Done in v2 of RFC patch.
> >
> >> >
> >> >	intel_gt_reset_unlock(gt, srcu);
> >> >
> >> > @@ -1476,17 +1476,21 @@ void intel_guc_busyness_unpark(struct intel_gt *gt)
> >> >			 guc->timestamp.ping_delay);
> >> > }
> >> >
> >> > -static void __guc_context_update_clks(struct intel_context *ce)
> >> > +static u64 guc_context_update_stats(struct intel_context *ce)
> >> > {
> >> >	struct intel_guc *guc = ce_to_guc(ce);
> >> >	struct intel_gt *gt = ce->engine->gt;
> >> >	u32 *pphwsp, last_switch, engine_id;
> >> > -	u64 start_gt_clk, active;
> >> >	unsigned long flags;
> >> > +	u64 total, active = 0;
> >> >	ktime_t unused;
> >> >
> >> > +	intel_context_pin(ce);
> >>
> >> intel_context_pin can sleep and we are not allowed to sleep in this path
> >> (intel_context_get_total_runtime_ns()). However, we can sleep in the ping
> >> worker path, so ideally we want to separate it out for the two paths.
> >
> > Do we know which intel_context_get_total_runtime_ns() call is not allowed
> > to sleep? E.g. the code path from i915_drm_client_fdinfo() is allowed to
> > sleep. As mentioned, I have done this in v2 of the RFC patch, which I think
> > is sufficient, but a more complicated scheme (which I think we can avoid
> > for now) would be to pin only in the code paths where sleeping is allowed.
> >
>
> Hmm, maybe intel_context_get_total_runtime_ns can sleep; not sure why I was
> assuming that this falls in the perf_pmu path. This is now in the
> drm_fdinfo query path. + Tvrtko.
>
> @Tvrtko, any idea if intel_context_get_total_runtime_ns path can sleep?

intel_context_get_total_runtime_ns is called from multiple places, so it is
possible that some of the code paths can sleep and others cannot.
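
For the callers that cannot sleep, roughly the approach in v2 of the RFC
patch (a sketch, not the actual v2 code; __guc_context_update_stats_locked()
is a hypothetical helper name for the body that runs under the lock) is to
pin only if the context is already active and fall back to the last
computed values otherwise:

static u64 guc_context_update_stats(struct intel_context *ce)
{
	struct intel_guc *guc = ce_to_guc(ce);
	unsigned long flags;
	u64 ret;

	/* intel_context_pin_if_active() never sleeps */
	if (!intel_context_pin_if_active(ce))
		return ce->stats.runtime.total + ce->stats.active;

	spin_lock_irqsave(&guc->timestamp.lock, flags);
	ret = __guc_context_update_stats_locked(ce);
	spin_unlock_irqrestore(&guc->timestamp.lock, flags);

	intel_context_unpin(ce);
	return ret;
}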

> >> >	spin_lock_irqsave(&guc->timestamp.lock, flags);
> >> >
> >> > +	lrc_update_runtime(ce);
> >> > +	total = ce->stats.runtime.total;
> >>
> >> That would get called every second (default intel_gpu_top query interval)
> >> for a long running workload. Multiply that by all active contexts.
> >>
> >> For long running contexts and frequent queries, calling this ^ when a
> >> context is active does not add value. I would just call it in the else
> >> like before.
> >
> > I have done this in v2, but are we certain that when a context is active
> > its ce->stats.runtime.total has been updated? Is it updated on each
> > context save, or only when scheduling is disabled (which would not happen
> > if the context is active)? That was the reason for keeping it out of the
> > else.
>
> continued here ...
>
> You are right, it's not updated until scheduling is disabled or a query
> falls in the else block. We can run into a case where the
> stats.runtime.total has not yet been updated because the context is
> switching in/out too frequently and the query always lands in the active
> block. In this case busyness values will accumulate the time between
> switch-out and switch-in, and eventually drop when ce->stats.runtime.total is
> updated. This is an issue we need to resolve.

I agree that, with this state of things, there is nothing we can do in the
KMD to mitigate this double accounting.

> I think we should track the previous value of total_runtime. On a query, if
> total_runtime has changed, we should just return that (irrespective of
> whether the engine is active or not). Active time should be part of the
> busyness calculations only if total_runtime has not changed (compared to
> the previous sample).

Yes, this will keep busyness values monotonic, though it is still error
prone. Also, maybe we can drop the monotonic requirement to avoid adding
any extra code in the KMD, with the justification that this is a GuC FW bug
(see below).
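
For reference, a sketch of that scheme (ce->stats.runtime.prev_total is a
hypothetical new field; assume we are under guc->timestamp.lock and 'active'
has been computed as before):

	total = ce->stats.runtime.total;
	if (total != ce->stats.runtime.prev_total) {
		/*
		 * The context switched out since the last sample, so total
		 * is fresh and adding active time would double account.
		 */
		ce->stats.runtime.prev_total = total;
		return total;
	}

	/* total unchanged: the context has stayed active in between */
	return total + active;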

I think GuC also needs to provide the 'total' time (in addition to the
'start' time). So when GuC sets engine_id to ~0 it should also write the
'total' time (like the GPU does), preferably as 64 bits; we seem to have
introduced so much complexity by not using the right data types. If updates
are non-atomic, GuC can also add a flag saying 'updates are in progress'. I
think that is all that is needed from the GuC and we can resolve this
double accounting.
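
With something like that in place, the KMD read side could retry,
seqlock-style. Note that a plain flag can still race (an update can start
and finish entirely between two reads of the flag), so the sketch below
assumes a sequence counter that GuC increments before and after each update
(odd while an update is in flight); every field name and offset here is
hypothetical, none of this exists in the current interface:

	u32 seq;

	do {
		/* wait out any in-flight update (odd count) */
		while ((seq = READ_ONCE(pphwsp[PPHWSP_GUC_SEQ])) & 1)
			cpu_relax();
		smp_rmb();
		total = read_pphwsp_u64(pphwsp, PPHWSP_GUC_TOTAL);
		engine_id = READ_ONCE(pphwsp[PPHWSP_GUC_ENGINE_ID]);
		smp_rmb();
	} while (READ_ONCE(pphwsp[PPHWSP_GUC_SEQ]) != seq);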

Thanks.
--
Ashutosh


> >
> >> > +
> >> >	/*
> >> >	 * GPU updates ce->lrc_reg_state[CTX_TIMESTAMP] when context is switched
> >> >	 * out, however GuC updates PPHWSP offsets below. Hence KMD (CPU)
> >> > @@ -1502,27 +1506,26 @@ static void __guc_context_update_clks(struct intel_context *ce)
> >> >	guc_update_pm_timestamp(guc, &unused);
> >> >
> >> >	if (engine_id != 0xffffffff && last_switch) {
> >> > -		start_gt_clk = READ_ONCE(ce->stats.runtime.start_gt_clk);
> >> > -		__extend_last_switch(guc, &start_gt_clk, last_switch);
> >> > -		active = intel_gt_clock_interval_to_ns(gt, guc->timestamp.gt_stamp - start_gt_clk);
> >> > -		WRITE_ONCE(ce->stats.runtime.start_gt_clk, start_gt_clk);
> >> > -		WRITE_ONCE(ce->stats.active, active);
> >> > -	} else {
> >> > -		lrc_update_runtime(ce);
> >> > +		__extend_last_switch(guc, &ce->stats.runtime.start_gt_clk, last_switch);
> >> > +		active = intel_gt_clock_interval_to_ns(gt,
> >> > +			      guc->timestamp.gt_stamp - ce->stats.runtime.start_gt_clk);
> >>
> >> Makes sense. Earlier it was a rmw, now it's just a write to
> >> ce->stats.runtime.start_gt_clk.
> >
> > It is still a rmw, just not as explicit in the code as it was earlier.
> >
> >> >	}
> >> >
> >> >	spin_unlock_irqrestore(&guc->timestamp.lock, flags);
> >> > +	intel_context_unpin(ce);
> >> > +
> >> > +	return total + active;
> >>
> >> return ce->stats.runtime.total + active;
> >>
> >> and then drop the local copy of total by bringing back the else{}.
> >
> > Some changes here in v2; ce->stats.active had to be revived since we need
> > to return the previously computed value when we cannot pin.
>
> Let's wait for Tvrtko's inputs. If this can sleep, I'd just use the
> original changes you had.
>
> >
> >> > }
> >> >
> >> > -static void guc_context_update_stats(struct intel_context *ce)
> >> > +void lrc_update_runtime_locked(struct intel_context *ce)
> >> > {
> >> > -	if (!intel_context_pin_if_active(ce)) {
> >> > -		WRITE_ONCE(ce->stats.runtime.start_gt_clk, 0);
> >> > -		WRITE_ONCE(ce->stats.active, 0);
> >> > -		return;
> >> > -	}
> >> > +	struct intel_guc *guc = ce_to_guc(ce);
> >> > +	unsigned long flags;
> >> >
> >> > -	__guc_context_update_clks(ce);
> >> > +	intel_context_pin(ce);
> >> > +	spin_lock_irqsave(&guc->timestamp.lock, flags);
> >> > +	lrc_update_runtime(ce);
> >> > +	spin_unlock_irqrestore(&guc->timestamp.lock, flags);
> >> >	intel_context_unpin(ce);
> >>
> >> From where lrc_update_runtime_locked is being called, I would assume that
> >> the context is already pinned (until unpin_guc_id is called).
> >
> > Fixed in v2.
> >
> > From the other email:
> >
> >> From your rfc patch, I like
> >> - the idea of not touching ce->stats.active
> >> - having the update_stats return u64
> >> - not doing a rmw for start_gt_clk
> >>
> >> With those changes, we are only accessing total in ce->stats, so we don't
> >> really need a lrc_update_runtime_locked.
> >
> > Strictly speaking (as explained above), because the computation of
> > ce->stats.runtime.total in lrc_update_runtime is not atomic and there are
> > potentially multiple concurrent callers of lrc_update_runtime,
> > lrc_update_runtime_locked is needed and I have retained it in v2.
> >
> > When you say we don't need it, you probably mean that because we are
> > dealing with something like busyness, we can tolerate occasional errors in
> > busyness if we don't have locking (and we can't verify busyness values
> > anyway though I think they are expected to be monotonically
> > increasing). But my question is why not keep it simple and retain the
> > locking?
>
> The locking you suggested earlier is fine. I think I assumed
> ce->stats.runtime.total was being updated atomically, but I see your point.
>
> Thanks,
> Umesh
>
> >
> > Thanks.
> > --
> > Ashutosh
> >
> >> > }
> >> >
> >> > @@ -2780,7 +2783,7 @@ static void guc_context_unpin(struct intel_context *ce)
> >> > {
> >> >	struct intel_guc *guc = ce_to_guc(ce);
> >> >
> >> > -	lrc_update_runtime(ce);
> >> > +	lrc_update_runtime_locked(ce);
> >> >	unpin_guc_id(guc, ce);
> >> >	lrc_unpin(ce);
> >> >
> >> > --
> >> > 2.34.1
> >> >

