[Intel-xe] [PATCH v3 2/2] drm/xe/pmu: Enable PMU interface
Dixit, Ashutosh
ashutosh.dixit at intel.com
Fri Aug 11 03:38:45 UTC 2023
On Thu, 10 Aug 2023 14:55:41 -0700, Rodrigo Vivi wrote:
Hi Rodrigo/Aravind,
> On Thu, Aug 10, 2023 at 01:40:16PM +0530, Iddamsetty, Aravind wrote:
> > On 10-08-2023 08:10, Dixit, Ashutosh wrote:
> >
> > > On Wed, 09 Aug 2023 06:11:48 -0700, Iddamsetty, Aravind wrote:
> > >
> > >> On 09-08-2023 17:27, Iddamsetty, Aravind wrote:
> > >>> On 09-08-2023 15:25, Iddamsetty, Aravind wrote:
> > >>>> On 09-08-2023 12:58, Dixit, Ashutosh wrote:
> > >>>>> On Tue, 08 Aug 2023 04:54:36 -0700, Aravind Iddamsetty wrote:
> > >>>>>
> > >>>>> Spotted a few remaining things. See if it's possible to fix these up and
> > >>>>> send another version.
> > >>>>>
> > >>>>>> diff --git a/drivers/gpu/drm/xe/xe_pmu.c b/drivers/gpu/drm/xe/xe_pmu.c
> > >>>>>> new file mode 100644
> > >>>>>> index 000000000000..9637f8283641
> > >>>>>> --- /dev/null
> > >>>>>> +++ b/drivers/gpu/drm/xe/xe_pmu.c
> > >>>>>> @@ -0,0 +1,673 @@
> > >>>>
> > >>>> <snip>
> > >>>>>> +static u64 __engine_group_busyness_read(struct xe_gt *gt, int sample_type)
> > >>>>>> +{
> > >>>>>> + u64 val = 0;
> > >>>>>> +
> > >>>>>
> > >>>>> What is the forcewake domain for these registers? Don't we need to get
> > >>>>> forcewake before reading these? Something like:
> > >>>>>
> > >>>>> XE_WARN_ON(xe_force_wake_get(gt_to_fw(gt), XE_FORCEWAKE_ALL));
> > >>>>
> > >>>> Based on BSPEC:67609 these belong to the GT power domain, so acquiring
> > >>>> that should be sufficient.
> > >>>
> > >>> But if I understand correctly, taking forcewake is not allowed here as
> > >>> this is an atomic context and forcewake can sleep, which is what I'm
> > >>> seeing as well; that might also be the reason why i915 didn't do it either.
> > >>>
> > >>> [ 899.114316] BUG: sleeping function called from invalid context at
> > >>> kernel/locking/mutex.c:580
> > >>> [ 899.115768] in_atomic(): 1, irqs_disabled(): 1, non_block: 0, pid:
> > >>> 290, name: kworker/27:1
> > >>
> > >> That is the reason why in i915 we were doing a similar thing of storing
> > >> the counter as we enter RC6; not sure how we do that in xe.
> > >
> > > Just to check, which code path(s) is/are atomic contexts:
> > >
> > > a. xe_pm_suspend
> > > b. xe_pm_runtime_suspend
> > > c. xe_pmu_event_read
> >
> > pmu_event_read and runtime_suspend are atomic contexts.
>
> what about doing this at xe_pci_runtime_idle() ?
>
> This will run after the autosuspend time elapses,
> but before calling any suspend. Also, there's no requirement for
> that function to be in atomic context. So you could forcewake_get/put
> and stash your registers before we go into runtime_suspend.
Thanks for the suggestion. Anshuman was saying that the rpm_suspend callback
itself is not called in atomic context, though, so Aravind seems to have made
a mistake above. Aravind, could you please confirm?
In any case, there seems to be a way out here: we should work on saving off
the registers in either the idle or suspend callbacks before we go into suspend.
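Something like the below is what I have in mind for saving these off (just a
rough sketch; the engine_group_busyness_store() helper, the pmu->sample[][]
layout and the PMU being embedded in xe_device are my assumptions, not
something already in the patch):

static void engine_group_busyness_store(struct xe_gt *gt)
{
        struct xe_pmu *pmu = &gt->tile->xe->pmu;        /* assumed PMU location */
        unsigned int gt_id = gt->info.id;
        int i;

        /* GT domain should be enough here per Bspec 67609, as Aravind said */
        XE_WARN_ON(xe_force_wake_get(gt_to_fw(gt), XE_FW_GT));

        /*
         * Stash the OAG group-busyness counters so the atomic paths
         * (xe_pmu_event_read(), runtime suspend) never have to touch MMIO.
         * Assumes the group-busy sample types are consecutive in the enum.
         */
        for (i = __XE_SAMPLE_RENDER_GROUP_BUSY; i <= __XE_SAMPLE_ANY_ENGINE_GROUP_BUSY; i++)
                pmu->sample[gt_id][i] = __engine_group_busyness_read(gt, i);

        XE_WARN_ON(xe_force_wake_put(gt_to_fw(gt), XE_FW_GT));
}

xe_pci_runtime_idle() (or the suspend callback) would call this for each GT
before we actually enter runtime suspend, and xe_pmu_event_read() would
return the stashed value while the device is suspended.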
> > > Now I am wondering if GuC should provide these counters too along with
> > > the other busyness values it provides, since GuC is what controls RC6
> > > entry/exit. But let's try to understand the issue some more first.
> >
> > Do you mean GuC reading these registers and presenting them to us in some
> > way? Will need to think over how that fits in the PMU.
I think it is better to leave GuC out since it's a long process to modify the
GuC API. So let's drop that idea.
About xe_pmu_event_read() being an atomic context: since the registers might
be getting updated while xe_pmu_event_read() calls are happening, the only way
out I am seeing is to run a kthread (specifically a delayed work item) while
the PMU is active. The work item would run every 10 ms or so and save off the
registers (since we cannot take forcewake in xe_pmu_event_read() and read the
registers there). This way we should be able to report register values which
are at most 10 ms old.
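Roughly something like this (again just a sketch; xe_pmu_sample_work(), the
pmu->sample_work member and the start/stop of the work from event
init/destroy are assumed here, and the stashed samples would still need
protection with something like the pmu->lock spinlock):

static void xe_pmu_sample_work(struct work_struct *work)
{
        struct xe_pmu *pmu = container_of(work, struct xe_pmu, sample_work.work);
        struct xe_device *xe = container_of(pmu, struct xe_device, pmu);
        struct xe_gt *gt;
        u8 id;

        /* Process context, so taking forcewake is fine here */
        for_each_gt(gt, xe, id)
                engine_group_busyness_store(gt);

        /*
         * Re-arm for the next ~10 ms sample; event destroy would stop this
         * with cancel_delayed_work_sync() once no events are active.
         */
        schedule_delayed_work(&pmu->sample_work, msecs_to_jiffies(10));
}

xe_pmu_event_read() then only reads pmu->sample[][] and never touches the
hardware, so its atomic context is no longer a problem.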
Any other ideas here?
Thanks.
--
Ashutosh
> > >>>>>
> > >>>>>> + switch (sample_type) {
> > >>>>>> + case __XE_SAMPLE_RENDER_GROUP_BUSY:
> > >>>>>> + val = xe_mmio_read32(gt, XE_OAG_RENDER_BUSY_FREE);
> > >>>>>> + break;
> > >>>>>> + case __XE_SAMPLE_COPY_GROUP_BUSY:
> > >>>>>> + val = xe_mmio_read32(gt, XE_OAG_BLT_BUSY_FREE);
> > >>>>>> + break;
> > >>>>>> + case __XE_SAMPLE_MEDIA_GROUP_BUSY:
> > >>>>>> + val = xe_mmio_read32(gt, XE_OAG_ANY_MEDIA_FF_BUSY_FREE);
> > >>>>>> + break;
> > >>>>>> + case __XE_SAMPLE_ANY_ENGINE_GROUP_BUSY:
> > >>>>>> + val = xe_mmio_read32(gt, XE_OAG_RC0_ANY_ENGINE_BUSY_FREE);
> > >>>>>> + break;
> > >>>>>> + default:
> > >>>>>> + drm_warn(&gt->tile->xe->drm, "unknown pmu event\n");
> > >>>>>> + }
> > >>>>>
> > >>>>> And similarly here:
> > >>>>>
> > >>>>> XE_WARN_ON(xe_force_wake_put(gt_to_fw(gt), XE_FORCEWAKE_ALL));