[Intel-xe] [PATCH v3 2/2] drm/xe/pmu: Enable PMU interface
Dixit, Ashutosh
ashutosh.dixit at intel.com
Mon Aug 14 02:02:10 UTC 2023
On Fri, 11 Aug 2023 12:17:17 -0700, Dixit, Ashutosh wrote:
>
> On Thu, 10 Aug 2023 23:17:47 -0700, Iddamsetty, Aravind wrote:
> >
Hi Aravind,
> > On 11-08-2023 09:08, Dixit, Ashutosh wrote:
> >
> > > On Thu, 10 Aug 2023 14:55:41 -0700, Rodrigo Vivi wrote:
> > >
> > >> On Thu, Aug 10, 2023 at 01:40:16PM +0530, Iddamsetty, Aravind wrote:
> > >>> On 10-08-2023 08:10, Dixit, Ashutosh wrote:
> > >>>
> > >>>> On Wed, 09 Aug 2023 06:11:48 -0700, Iddamsetty, Aravind wrote:
> > >>>>
> > >>>>> On 09-08-2023 17:27, Iddamsetty, Aravind wrote:
> > >>>>>> On 09-08-2023 15:25, Iddamsetty, Aravind wrote:
> > >>>>>>> On 09-08-2023 12:58, Dixit, Ashutosh wrote:
> > >>>>>>>> On Tue, 08 Aug 2023 04:54:36 -0700, Aravind Iddamsetty wrote:
> > >>>>>>>>
> > >>>>>>>> Spotted a few remaining things. See if it's possible to fix these up and
> > >>>>>>>> send another version.
> > >>>>>>>>
> > >>>>>>>>> diff --git a/drivers/gpu/drm/xe/xe_pmu.c b/drivers/gpu/drm/xe/xe_pmu.c
> > >>>>>>>>> new file mode 100644
> > >>>>>>>>> index 000000000000..9637f8283641
> > >>>>>>>>> --- /dev/null
> > >>>>>>>>> +++ b/drivers/gpu/drm/xe/xe_pmu.c
> > >>>>>>>>> @@ -0,0 +1,673 @@
> > >>>>>>>
> > >>>>>>> <snip>
> > >>>>>>>>> +static u64 __engine_group_busyness_read(struct xe_gt *gt, int sample_type)
> > >>>>>>>>> +{
> > >>>>>>>>> +	u64 val = 0;
> > >>>>>>>>> +
> > >>>>>>>>
> > >>>>>>>> What is the forcewake domain for these registers? Don't we need to get
> > >>>>>>>> forcewake before reading them? Something like:
> > >>>>>>>>
> > >>>>>>>> XE_WARN_ON(xe_force_wake_get(gt_to_fw(gt), XE_FORCEWAKE_ALL));
> > >>>>>>>
> > >>>>>>> Based on BSPEC:67609 these belong to the GT power domain, so acquiring
> > >>>>>>> that should be sufficient.
> > >>>>>>
> > >>>>>> But if I understand correctly, taking forcewake is not allowed here since
> > >>>>>> this is an atomic context and forcewake can sleep, which is what I'm
> > >>>>>> seeing as well. That might also be the reason why i915 didn't do it either.
> > >>>>>>
> > >>>>>> [ 899.114316] BUG: sleeping function called from invalid context at
> > >>>>>> kernel/locking/mutex.c:580
> > >>>>>> [ 899.115768] in_atomic(): 1, irqs_disabled(): 1, non_block: 0, pid:
> > >>>>>> 290, name: kworker/27:1
> > >>>>>
> > >>>>> That is the reason why in i915 we were doing a similar thing of storing
> > >>>>> the counter as we enter rc6; not sure how we do that in xe.
> > >>>>
> > >>>> Just to check, which code path(s) is/are atomic context:
> > >>>>
> > >>>> a. xe_pm_suspend
> > >>>> b. xe_pm_runtime_suspend
> > >>>> c. xe_pmu_event_read
> > >>>
> > >>> pmu_event_read and runtime_suspend are atomic contexts.
> > >>
> > >> What about doing this in xe_pci_runtime_idle()?
> > >>
> > >> This will run after the autosuspend time elapses,
> > >> but before calling any suspend. Also, there's no requirement for
> > >> that function to be in atomic context, so you could forcewake get/put
> > >> and stash your registers before we go into runtime_suspend.
> > >
> > > Thanks for the suggestion. Though Anshuman was saying that the rpm_suspend
> > > callback itself is not called in atomic context, so Aravind seems to have
> > > made a mistake. Aravind, could you please confirm?
> >
> > Yup, I made a mistake: runtime_suspend is not an atomic context. We are
> > actually taking the forcewake in gt_suspend, which calls xe_pmu_suspend.
>
> OK, great, thanks for confirming.
>
> > >
> > > In any case, there seems to be a way out here: we should work to save off
> > > the registers while suspending, in either the idle or suspend callbacks.
> > >
> > >>>> Now I am wondering if GuC should provide these counters too along with
> > >>>> the other busyness values it provides, since GuC is what controls RC6
> > >>>> entry/exit. But let's try to understand the issue some more first.
> > >>>
> > >>> Do you mean GuC reading these registers and presenting them to us in some
> > >>> way? I will need to think over how that fits into the PMU.
> > >
> > > I think it's better to leave GuC out since it's a long process to modify
> > > the GuC API. So let's drop that idea.
> > >
> > > About xe_pmu_event_read being an atomic context: since the registers might
> > > be getting updated while xe_pmu_event_read calls are happening, the only
> > > way out I am seeing is to run a kthread (specifically a delayed work item)
> > > while the PMU is active. The work item would run every 10 ms or so and save
> > > off the registers (since we cannot take forcewake in xe_pmu_event_read and
> > > read the registers there). This way we should be able to report register
> > > values which are at most 10 ms old.
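To make the work item idea concrete, below is a rough, untested sketch of
what I have in mind. The sample_work/n_active fields, the pmu_to_xe() helper
and xe_pmu_sample_gt() (sketched further down in this mail) are placeholder
names on my part, not something from the patch; I am also assuming the usual
for_each_gt() iterator:

static void xe_pmu_sample_work(struct work_struct *work)
{
	struct xe_pmu *pmu = container_of(work, struct xe_pmu, sample_work.work);
	struct xe_device *xe = pmu_to_xe(pmu);	/* placeholder helper */
	struct xe_gt *gt;
	u8 id;

	/* Sample in process context, where forcewake is allowed */
	for_each_gt(gt, xe, id)
		xe_pmu_sample_gt(gt);	/* see the guarded-read sketch below */

	/* Re-arm while the PMU still has active events, ~10 ms period */
	if (READ_ONCE(pmu->n_active))
		schedule_delayed_work(&pmu->sample_work, msecs_to_jiffies(10));
}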
> >
> > there are two cases here:
>
> Correct.
>
> > 1. Similar to suspend, we should capture the register state before the GT
> > goes into rc6, and that would cover the suspend case as well.
> >
> > 2. In xe_pmu_event_read, we should check that the device is not in rc6
> > (rather than just awake) and read the registers, but might have to take forcewake.
> >
> > For the first, I do not think we have a way in Xe to know if the GT is
> > entering rc6 (like gt_park in i915).
>
> But we can take forcewake here to either prevent the GT from entering RC6 or
> bring it out of RC6 to read the registers.
>
> > even in the kthread/timer we should take forcewake only when the GT is out of
>
> A timer runs in interrupt/atomic context so it cannot be used; it will have
> to be a kthread/workqueue.
>
> > rc6, but there is a problem: our devices' default autosuspend is 1 sec, so
> > if we have a timer or kthread at millisecond granularity the user would also
> > have to adjust the autosuspend down to milliseconds, or else the device will
> > never go to suspend since we take the pm reference in mem_access_get.
>
> The kthread will only try to get forcewake (to read the registers) if the
> device is awake, so it will use xe_device_mem_access_get_if_ongoing. If the
> device is not awake, it will use the previously stored values.
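Roughly, the guarded read in the worker could then be (untested; I am
assuming xe_device_mem_access_get_if_ongoing() reports whether the reference
was actually taken, and engine_group_busyness_store() stands in for whatever
helper ends up caching the OAG registers in struct xe_pmu):

static void xe_pmu_sample_gt(struct xe_gt *gt)
{
	struct xe_device *xe = gt_to_xe(gt);

	/* Only touch the HW if the device is already awake */
	if (!xe_device_mem_access_get_if_ongoing(xe))
		return;	/* asleep: the previously stored values stay valid */

	XE_WARN_ON(xe_force_wake_get(gt_to_fw(gt), XE_FORCEWAKE_ALL));
	engine_group_busyness_store(gt);	/* placeholder: read + cache regs */
	XE_WARN_ON(xe_force_wake_put(gt_to_fw(gt), XE_FORCEWAKE_ALL));

	xe_device_mem_access_put(xe);
}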
We should fix up the suspend code path (get forcewake in that code path)
for this patch but can defer the kernel thread to a future patch (if
needed). This will at least give UMDs something semi-functional to begin
integration with.
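For the suspend path I am thinking of something along these lines (untested
sketch; xe_pmu_suspend() and engine_group_busyness_store() are whatever the
patch ends up naming them):

void xe_pmu_suspend(struct xe_gt *gt)
{
	/* Snapshot the group-busyness counters with forcewake held before
	 * the GT goes down, so later PMU reads can fall back to the stored
	 * values.
	 */
	XE_WARN_ON(xe_force_wake_get(gt_to_fw(gt), XE_FORCEWAKE_ALL));
	engine_group_busyness_store(gt);
	XE_WARN_ON(xe_force_wake_put(gt_to_fw(gt), XE_FORCEWAKE_ALL));
}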
Ashutosh
> > > Any other ideas here?
> > >
> > > Thanks.
> > > --
> > > Ashutosh
> > >
> > >>>>>>>>
> > >>>>>>>>> +	switch (sample_type) {
> > >>>>>>>>> +	case __XE_SAMPLE_RENDER_GROUP_BUSY:
> > >>>>>>>>> +		val = xe_mmio_read32(gt, XE_OAG_RENDER_BUSY_FREE);
> > >>>>>>>>> +		break;
> > >>>>>>>>> +	case __XE_SAMPLE_COPY_GROUP_BUSY:
> > >>>>>>>>> +		val = xe_mmio_read32(gt, XE_OAG_BLT_BUSY_FREE);
> > >>>>>>>>> +		break;
> > >>>>>>>>> +	case __XE_SAMPLE_MEDIA_GROUP_BUSY:
> > >>>>>>>>> +		val = xe_mmio_read32(gt, XE_OAG_ANY_MEDIA_FF_BUSY_FREE);
> > >>>>>>>>> +		break;
> > >>>>>>>>> +	case __XE_SAMPLE_ANY_ENGINE_GROUP_BUSY:
> > >>>>>>>>> +		val = xe_mmio_read32(gt, XE_OAG_RC0_ANY_ENGINE_BUSY_FREE);
> > >>>>>>>>> +		break;
> > >>>>>>>>> +	default:
> > >>>>>>>>> +		drm_warn(&gt->tile->xe->drm, "unknown pmu event\n");
> > >>>>>>>>> +	}
> > >>>>>>>>
> > >>>>>>>> And similarly here:
> > >>>>>>>>
> > >>>>>>>> XE_WARN_ON(xe_force_wake_put(gt_to_fw(gt), XE_FORCEWAKE_ALL));