[Intel-xe] [PATCH v2 2/2] drm/xe/pmu: Enable PMU interface

Iddamsetty, Aravind aravind.iddamsetty at intel.com
Mon Jul 10 06:05:05 UTC 2023



On 08-07-2023 02:55, Dixit, Ashutosh wrote:
> On Fri, 07 Jul 2023 03:42:36 -0700, Iddamsetty, Aravind wrote:
>>
> 
> Hi Aravind,
> 
>> On 07-07-2023 11:38, Dixit, Ashutosh wrote:
>>> On Thu, 06 Jul 2023 20:53:47 -0700, Iddamsetty, Aravind wrote:
>>> I will look at the timing stuff later but one further question about the
>>> requirement:
>>>
>>>>> Also, could you please explain where the requirement to expose these OAG
>>>>> group busy/free registers via the PMU is coming from? Since these are OA
>>>>> registers presumably they can be collected using the OA subsystem.
>>>>
>>>> L0 sysman needs this
>>>> https://spec.oneapi.io/level-zero/latest/sysman/api.html#zes-engine-properties-t
>>>> and xpumanager uses this
>>>> https://github.com/intel/xpumanager/blob/master/core/src/device/gpu/gpu_device.cpp
>>>
>>> So fine, these are UMD requirements, but why do these quantities (everything
>>> in this patch) have to be exposed via PMU? I could just create sysfs or an
>>> ioctl to provide these to userland, right?
>>
>> PMU is an enhanced interface for presenting metrics; it provides
>> lower-latency reads compared to sysfs
> 
> Why lower latency compared to sysfs? In both cases we have user to kernel
> transitions and then register reads etc.

A sysfs read has to go through the filesystem, which adds latency, but I
think the most important aspect here is the requirement for read
timestamps.

> 
>> and one can read multiple events in a single shot
> 
> Yes, this PMU can do and sysfs can't, though ioctl's can do this.
> 
>> and it will give timestamps as well which sysfs cannot provide and which
>> is one of the requirements of UMD.
> 
> Ioctls can do this if we implement (counter, timestamp) pairs, but I agree
> this may look strange, so PMU does seem to have an advantage here.
> 
> But are these timestamps needed? The spec talks about different timestamp
> bases, but in this case we have already converted to ns and I am wondering
> if the UMD can use its own timestamps (maybe the average of the ioctl call
> and return times) if UMD needs timestamps.

Here I'm talking about the timestamp of the read, not the counter itself,
and when we already have an interface (PMU) which can give these details,
why duplicate the effort in an ioctl?
> 
>> Also, UMDs / observability tools do not want to hold any open handles to
>> get this info, so ioctl is ruled out.
> 
> Why? This also I don't follow. And the UMD has a perf pmu fd open. See
> igt at perf_pmu@module-unload e.g., which tests that module unload should fail
> if the perf pmu fd is open (which takes a ref count on the module).

Here I'm referring to the drm fd: one need not open a drm fd to read via
the PMU, and typically UMDs do not want to open a drm fd as it takes a
device reference and might toggle the device state (e.g. wake the device)
when we are only trying to read some stats.

> 
>> the other motivation to use PMU in xe is the existing tools like
>> intel_gpu_top will work with just a minor change.
> 
> Not too concerned about userspace tools. They can be changed to use a
> different interface.
> 
> So I am still not convinced xe needs to expose a PMU interface with these
> sort of "software events/counters". So my question is why can't we just
> have an ioctl to expose these things, why PMU?

Firstly, the PMU satisfies all the requirements of the UMDs: read
timestamps and multi-event reads. As we already have a time-tested
interface in the kernel, why should we try to duplicate it? Secondly,
with an ioctl one has to open a drm fd, which UMDs do not want.
> 
> Incidentally, if you look at amdgpu_pmu.c, they seem to be exposing some
> hardware sort of events through the PMU, not our kind of software stuff.

The counters I'm exposing in this series are themselves hardware counters.

> 
> Another interesting thing is if we have ftrace statements they seem to
> automatically be exposed by PMU
> (https://perf.wiki.kernel.org/index.php/Tutorial), e.g.:
> 
>   i915:i915_request_add                              [Tracepoint event]
>   i915:i915_request_queue                            [Tracepoint event]
>   i915:i915_request_retire                           [Tracepoint event]
>   i915:i915_request_wait_begin                       [Tracepoint event]
>   i915:i915_request_wait_end                         [Tracepoint event]
> 
> So I am wondering if this might be an option?

I'm a little confused here: how would ftrace expose any counters, since it
is mostly for profiling?

Thanks,
Aravind.
> 
> So anyway let's try to understand the need for the PMU interface a bit more
> before deciding on this. Once we introduce the interface, (a) people will
> willy-nilly start exposing random stuff through that interface, and (b) the
> same stuff will get exposed via multiple interfaces (e.g. frequency and rc6
> residency in i915). I am speaking on the basis of what I saw in i915.
> 
> Let's see if Tvrtko responds, otherwise I will try to get him on irc or
> something. It will be good to have some input from maybe one of the
> architects too about this.
> 
> Thanks.
> --
> Ashutosh
> 
>>> I had this same question about i915 PMU, which was never answered. i915 PMU
>>> IMO does truly strange things, like sampling freqs every 5 ms and providing
>>> software averages which I thought userspace could easily compute.
>>
>> that is a different thing, nothing to do with the PMU interface
>>
>> Thanks,
>> Aravind.
>>>
>>> I don't think it's the timestamps, maybe there is some convention related
>>> to the cpu pmu (which I am not familiar with).
>>>
>>> Let's see, maybe Tvrtko can also answer why these things were exposed via
>>> i915 PMU.
>>>
>>> Thanks.
>>> --
>>> Ashutosh
>>>
>>>
>>>>>
>>>>> The i915 PMU I believe deduces busyness by sampling the RING_CTL register
>>>>> using a timer. So these registers look better since you can get these
>>>>> busyness values directly. On the other hand you can only get busyness for
>>>>> an engine group and things like compute seem to be missing?
>>>>
>>>> Per-engine busyness is a different thing; we still need that, and it
>>>> has a different implementation with GuC enabled. I believe Umesh is
>>>> looking into that.
>>>>
>>>> compute group will still be accounted in XE_OAG_RENDER_BUSY_FREE and
>>>> also under XE_OAG_RC0_ANY_ENGINE_BUSY_FREE.
>>>>>
>>>>> Also, would you know about plans to expose other kinds of busyness-es? I
>>>>> think we may be exposing per-VF and also per-client busyness via PMU. Not
>>>>> sure what else GuC can expose. Knowing all this we can better understand
>>>>> how these particular busyness values will be used.
>>>>
>>>> Ya, that shall be coming next, probably from Umesh, but per-client
>>>> busyness is through fdinfo.

