[Intel-xe] [PATCH v2 2/2] drm/xe/pmu: Enable PMU interface

Dixit, Ashutosh ashutosh.dixit@intel.com
Fri Jul 7 21:25:17 UTC 2023


On Fri, 07 Jul 2023 03:42:36 -0700, Iddamsetty, Aravind wrote:
>

Hi Aravind,

> On 07-07-2023 11:38, Dixit, Ashutosh wrote:
> > On Thu, 06 Jul 2023 20:53:47 -0700, Iddamsetty, Aravind wrote:
> > I will look at the timing stuff later but one further question about the
> > requirement:
> >
> >>> Also, could you please explain where the requirement to expose these OAG
> >>> group busy/free registers via the PMU is coming from? Since these are OA
> >>> registers presumably they can be collected using the OA subsystem.
> >>
> >> L0 sysman needs this
> >> https://spec.oneapi.io/level-zero/latest/sysman/api.html#zes-engine-properties-t
> >> and xpumanager uses this
> >> https://github.com/intel/xpumanager/blob/master/core/src/device/gpu/gpu_device.cpp
> >
> > So fine, these are UMD requirements, but why do these quantities (everything
> > in this patch) have to be exposed via the PMU? I could just create sysfs or
> > an ioctl to provide these to userland, right?
>
> PMU is an enhanced interface for presenting metrics; it provides
> low-latency reads compared to sysfs

Why lower latency than sysfs? In both cases we have a user-to-kernel
transition and then register reads etc.

> and one can read multiple events in a single shot

Yes, this the PMU can do and sysfs can't, though ioctls can do it too.
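
To be concrete about the "single shot" part: with PERF_FORMAT_GROUP an
entire group of counters comes back from one read() on the group leader
fd. A minimal userspace sketch; the xe PMU type and the config values
below are made-up placeholders (the real type would be read from
/sys/bus/event_source/devices/xe/type), not actual xe uAPI:

#include <inttypes.h>
#include <linux/perf_event.h>
#include <stdio.h>
#include <string.h>
#include <sys/syscall.h>
#include <unistd.h>

static int perf_open(uint32_t type, uint64_t config, int group_fd)
{
    struct perf_event_attr attr;

    memset(&attr, 0, sizeof(attr));
    attr.size = sizeof(attr);
    attr.type = type;                  /* dynamic PMU type from sysfs */
    attr.config = config;
    attr.read_format = PERF_FORMAT_GROUP;

    /* system-wide event on CPU 0, the usual shape for driver PMUs */
    return syscall(__NR_perf_event_open, &attr, -1, 0, group_fd, 0);
}

int main(void)
{
    struct {
        uint64_t nr;                   /* number of counters in the group */
        uint64_t values[2];            /* one value per counter */
    } group;
    int leader = perf_open(42 /* fake type */, 0x01 /* fake config */, -1);
    int member = perf_open(42 /* fake type */, 0x02 /* fake config */, leader);

    if (leader < 0 || member < 0)
        return 1;

    /* both counters are sampled together, in one syscall */
    if (read(leader, &group, sizeof(group)) < 0)
        return 1;

    printf("counter0=%" PRIu64 " counter1=%" PRIu64 "\n",
           group.values[0], group.values[1]);
    return 0;
}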

> and it will give timestamps as well which sysfs cannot provide and which
> is one of the requirements of UMD.

Ioctls can do this too if they implement (counter, timestamp) pairs, but I
agree this may look strange, so the PMU does seem to have an advantage here.
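
Purely as a strawman, the uAPI for that could look something like the
below; hypothetical only, nothing like this exists in xe:

#include <linux/types.h>

/* Hypothetical sketch only -- not actual xe uAPI. */
struct xe_counter_sample {
    __u64 value;       /* e.g. an OAG busy/free counter, converted to ns */
    __u64 timestamp;   /* CLOCK_MONOTONIC ns when the value was sampled */
};

struct drm_xe_query_counters {
    __u32 num_samples; /* in: size of the array below; out: entries filled */
    __u32 pad;
    __u64 samples_ptr; /* userspace pointer to struct xe_counter_sample[] */
};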

But are these timestamps needed? The spec talks about different timestamp
bases, but in this case we have already converted to ns, and I am wondering
if the UMD can use its own timestamps (maybe the average of the call into
the ioctl and the return from it) if it needs timestamps.
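
If the midpoint is good enough, that part is trivial on the UMD side,
something like this sketch (xe_read_counter() is just a stand-in for
whatever call actually does the read):

#include <stdint.h>
#include <time.h>

extern uint64_t xe_read_counter(void);  /* stand-in for the actual query */

static uint64_t now_ns(void)
{
    struct timespec ts;

    clock_gettime(CLOCK_MONOTONIC, &ts);
    return (uint64_t)ts.tv_sec * 1000000000ull + ts.tv_nsec;
}

/* approximate the sample time as the midpoint of call and return */
static uint64_t sample_counter(uint64_t *value)
{
    uint64_t t0 = now_ns();

    *value = xe_read_counter();

    return t0 + (now_ns() - t0) / 2;
}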

> Also, UMDs/observability tools do not want to have any open handles to
> get this info, so ioctl is ruled out.

Why? This I also don't follow. Also, the UMD has a perf PMU fd open. See
e.g. igt@perf_pmu@module-unload, which tests that module unload should fail
while the perf PMU fd is open (the fd takes a ref count on the module).
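
That refcounting works because the driver points the PMU at itself when
registering, roughly along these lines; a sketch rather than the actual
xe code, with the .event_init etc. callbacks elided:

#include <linux/module.h>
#include <linux/perf_event.h>

static struct pmu xe_pmu;  /* callbacks (.event_init etc.) elided */

static int xe_pmu_register(void)
{
    /*
     * perf_event_open() on one of our events takes a reference on
     * this module, so unload fails while the fd is open.
     */
    xe_pmu.module = THIS_MODULE;
    xe_pmu.task_ctx_nr = perf_invalid_context;  /* system-wide only */

    return perf_pmu_register(&xe_pmu, "xe", -1);
}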

> the other motivation to use PMU in xe is that existing tools like
> intel_gpu_top will work with just a minor change.

Not too concerned about userspace tools. They can be changed to use a
different interface.

So I am still not convinced xe needs to expose a PMU interface for these
sorts of "software events/counters". My question remains: why can't we
just have an ioctl to expose these things, why the PMU?

Incidentally, if you look at amdgpu_pmu.c, they seem to be exposing
hardware sorts of events through the PMU, not our kind of software stuff.

Another interesting thing is that if we have ftrace tracepoints, they seem
to be exposed through perf automatically
(https://perf.wiki.kernel.org/index.php/Tutorial), e.g.:

  i915:i915_request_add                              [Tracepoint event]
  i915:i915_request_queue                            [Tracepoint event]
  i915:i915_request_retire                           [Tracepoint event]
  i915:i915_request_wait_begin                       [Tracepoint event]
  i915:i915_request_wait_end                         [Tracepoint event]

So I am wondering if this might be an option?
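
For reference, such tracepoint events are consumed through the same
perf_event_open() interface. A sketch using the i915_request_add
tracepoint from the list above (needs root or CAP_PERFMON; on older
kernels the id lives under /sys/kernel/debug/tracing instead):

#include <linux/perf_event.h>
#include <stdio.h>
#include <string.h>
#include <sys/syscall.h>
#include <unistd.h>

int open_tracepoint_counter(void)
{
    struct perf_event_attr attr;
    int id = -1;
    FILE *f;

    f = fopen("/sys/kernel/tracing/events/i915/i915_request_add/id", "r");
    if (!f)
        return -1;
    if (fscanf(f, "%d", &id) != 1)
        id = -1;
    fclose(f);
    if (id < 0)
        return -1;

    memset(&attr, 0, sizeof(attr));
    attr.size = sizeof(attr);
    attr.type = PERF_TYPE_TRACEPOINT;
    attr.config = id;   /* tracepoint id from tracefs */

    /* count tracepoint hits system-wide on CPU 0 */
    return syscall(__NR_perf_event_open, &attr, -1, 0, -1, 0);
}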

So anyway, let's try to understand the need for the PMU interface a bit
more before deciding on this. Once we introduce the interface, (a) people
will willy-nilly start exposing random stuff through it, and (b) the same
stuff will get exposed via multiple interfaces (e.g. frequency and rc6
residency in i915), etc. I am speaking on the basis of what I saw in i915.

Let's see if Tvrtko responds, otherwise I will try to get him on IRC or
something. It would be good to have some input about this from one of the
architects too.

Thanks.
--
Ashutosh

> > I had this same question about the i915 PMU which was never answered. The
> > i915 PMU IMO does truly strange things, like sampling frequencies every
> > 5 ms and providing software averages, which I thought userspace could
> > easily do.
>
> that is a different thing, nothing to do with the PMU interface
>
> Thanks,
> Aravind.
> >
> > I don't think it's the timestamps, maybe there is some convention related
> > to the CPU PMU (which I am not familiar with).
> >
> > Let's see, maybe Tvrtko can also answer why these things were exposed via
> > i915 PMU.
> >
> > Thanks.
> > --
> > Ashutosh
> >
> >
> >>>
> >>> The i915 PMU I believe deduces busyness by sampling the RING_CTL register
> >>> using a timer. So these registers look better since you can get these
> >>> busyness values directly. On the other hand you can only get busyness for
> >>> an engine group and things like compute seem to be missing?
> >>
> >> The per-engine busyness is a different thing, we still need that, and it
> >> has a different implementation with GuC enabled; I believe Umesh is
> >> looking into that.
> >>
> >> The compute group will still be accounted in XE_OAG_RENDER_BUSY_FREE and
> >> also under XE_OAG_RC0_ANY_ENGINE_BUSY_FREE.
> >>>
> >>> Also, would you know about plans to expose other kinds of busyness-es? I
> >>> think we may be exposing per-VF and also per-client busyness via PMU. Not
> >>> sure what else GuC can expose. Knowing all this we can better understand
> >>> how these particular busyness values will be used.
> >>
> >> Yes, that shall be coming next, probably from Umesh, but per-client
> >> busyness is exposed through fdinfo.

