[PATCH v2 7/8] drm/panfrost: Rework page table flushing and runtime PM interaction

Rob Herring robh at kernel.org
Fri Aug 23 15:57:20 UTC 2019


On Fri, Aug 23, 2019 at 10:44 AM Robin Murphy <robin.murphy at arm.com> wrote:
>
> On 23/08/2019 16:05, Steven Price wrote:
> > On 23/08/2019 12:11, Robin Murphy wrote:
> >> On 23/08/2019 03:12, Rob Herring wrote:
> >>> There is no point in resuming the h/w just to do flush operations,
> >>> and doing so takes several locks, which causes lockdep issues with
> >>> the shrinker.
> >>> Rework the flush operations to only happen when the h/w is already
> >>> awake.
> >>> This avoids taking any locks associated with resuming.
> >>>
> >>> Cc: Tomeu Vizoso <tomeu.vizoso at collabora.com>
> >>> Cc: Steven Price <steven.price at arm.com>
> >>> Cc: Alyssa Rosenzweig <alyssa.rosenzweig at collabora.com>
> >>> Cc: David Airlie <airlied at linux.ie>
> >>> Cc: Daniel Vetter <daniel at ffwll.ch>
> >>> Signed-off-by: Rob Herring <robh at kernel.org>
> >>> ---
> >>> v2: new patch
> >>>
> >>>   drivers/gpu/drm/panfrost/panfrost_mmu.c | 41 ++++++++++++-------------
> >>>   1 file changed, 20 insertions(+), 21 deletions(-)
> >>>
> >>> diff --git a/drivers/gpu/drm/panfrost/panfrost_mmu.c b/drivers/gpu/drm/panfrost/panfrost_mmu.c
> >>> index 842bdd7cf6be..ccf671a9c3fb 100644
> >>> --- a/drivers/gpu/drm/panfrost/panfrost_mmu.c
> >>> +++ b/drivers/gpu/drm/panfrost/panfrost_mmu.c
> >>> @@ -220,6 +220,23 @@ static size_t get_pgsize(u64 addr, size_t size)
> >>>       return SZ_2M;
> >>>   }
> >>> +void panfrost_mmu_flush_range(struct panfrost_device *pfdev,
> >>> +                  struct panfrost_mmu *mmu,
> >>> +                  u64 iova, size_t size)
> >>> +{
> >>> +    if (mmu->as < 0)
> >>> +        return;
> >>> +
> >>> +    /* Flush the PTs only if we're already awake */
> >>> +    if (!pm_runtime_get_if_in_use(pfdev->dev))
> >>> +        return;
> >>
> >> Is the MMU guaranteed to be reset on resume such that the TLBs will
> >> always come up clean? Otherwise there are potentially corners where
> >> stale entries that we skip here might hang around if the hardware lies
> >> about powering things down.
> >
> > Assuming runtime PM has actually committed to the power off, then on
> > power on panfrost_device_resume() will be called which performs a reset
> > of the GPU which will clear the L2/TLBs so there shouldn't be any
> > problem there.
>
> OK, if panfrost_gpu_soft_reset() is sufficient to guarantee clean TLBs
> then this looks equivalent to what we did for arm-smmu, so I've no
> complaints in that regard.
>
> However on second look I've now noticed the panfrost_mmu_flush_range()
> calls being moved outside of mmu->lock protection. Forgive me if there's
> basic DRM knowledge I'm missing here, but is there any possibility for
> multiple threads to create/import/free objects simultaneously on the
> same FD such that two mmu_hw_do_operation() calls could race and
> interfere with each other in terms of the
> AS_LOCKADDR/AS_COMMAND/AS_STATUS dance?

Yes, we could have multiple threads. There's not really any good reason
it's moved outside the mmu->lock other than to avoid nesting locks
(though the nesting seemed fine in testing). The newly added as_lock
will serialize mmu_hw_do_operation(), so mmu->lock now only serializes
page table writes.

Rob

