[PATCH v2 7/8] drm/panfrost: Rework page table flushing and runtime PM interaction

Rob Herring robh at kernel.org
Fri Aug 23 15:49:15 UTC 2019


On Fri, Aug 23, 2019 at 10:09 AM Steven Price <steven.price at arm.com> wrote:
>
> On 23/08/2019 03:12, Rob Herring wrote:
> > There is no point in resuming the h/w just to do flush operations and
> > doing so takes several locks which cause lockdep issues with the shrinker.
> > Rework the flush operations to only happen when the h/w is already awake.
> > This avoids taking any locks associated with resuming.
> >
> > Cc: Tomeu Vizoso <tomeu.vizoso at collabora.com>
> > Cc: Steven Price <steven.price at arm.com>
> > Cc: Alyssa Rosenzweig <alyssa.rosenzweig at collabora.com>
> > Cc: David Airlie <airlied at linux.ie>
> > Cc: Daniel Vetter <daniel at ffwll.ch>
> > Signed-off-by: Rob Herring <robh at kernel.org>
>
> Reviewed-by: Steven Price <steven.price at arm.com>
>
> But one comment below...
>
> > ---
> > v2: new patch
> >
> >   drivers/gpu/drm/panfrost/panfrost_mmu.c | 41 ++++++++++++-------------
> >   1 file changed, 20 insertions(+), 21 deletions(-)
> >
> > diff --git a/drivers/gpu/drm/panfrost/panfrost_mmu.c b/drivers/gpu/drm/panfrost/panfrost_mmu.c
> > index 842bdd7cf6be..ccf671a9c3fb 100644
> > --- a/drivers/gpu/drm/panfrost/panfrost_mmu.c
> > +++ b/drivers/gpu/drm/panfrost/panfrost_mmu.c
> > @@ -220,6 +220,23 @@ static size_t get_pgsize(u64 addr, size_t size)
> >       return SZ_2M;
> >   }
> >
> > +void panfrost_mmu_flush_range(struct panfrost_device *pfdev,
> > +                           struct panfrost_mmu *mmu,
> > +                           u64 iova, size_t size)
> > +{
> > +     if (mmu->as < 0)
> > +             return;
> > +
> > +     /* Flush the PTs only if we're already awake */
> > +     if (!pm_runtime_get_if_in_use(pfdev->dev))
> > +             return;
> > +
> > +     mmu_hw_do_operation(pfdev, mmu, iova, size, AS_COMMAND_FLUSH_PT);
> > +
> > +     pm_runtime_mark_last_busy(pfdev->dev);
>
> This isn't really a change, but: I'm not sure why we want to signal we
> were busy just because we had to do some cache maintenance? We might
> actually be better off leaving the GPU off, since then there's no need
> to do any extra flushes on the GPU at all?

I don't know, good question. Powering up and down has its cost too, and
we only get here if we were already active. A flush on a map is
probably going to be followed by a job; an unmap may or may not be the
end of activity.
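
For reference, the flow being discussed is roughly this (an untested
sketch, not the exact patch; the put side is my assumption about how
the hunk continues, in line with the autosuspend setup elsewhere in
the driver):

        /*
         * Flush-only-if-awake: pm_runtime_get_if_in_use() takes a
         * reference only when the device is already RPM_ACTIVE. It
         * returns 1 on success, 0 if the device is suspended (caches
         * are off, nothing to flush), and a negative errno if runtime
         * PM is disabled.
         */
        if (pm_runtime_get_if_in_use(pfdev->dev) <= 0)
                return;

        mmu_hw_do_operation(pfdev, mmu, iova, size, AS_COMMAND_FLUSH_PT);

        pm_runtime_mark_last_busy(pfdev->dev);  /* restart autosuspend timer */
        pm_runtime_put_autosuspend(pfdev->dev); /* drop ref; suspend after delay */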

If we don't call pm_runtime_mark_last_busy(), then maybe we also want
to do a pm_runtime_put_sync_suspend() (i.e. suspend immediately)
instead. That only matters if we hold the last reference, which could
only happen during this get/put pair. I'm not sure what happens if a
suspend is requested and is then followed by an autosuspend.
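
Concretely, the alternative would look something like this (again an
untested sketch; pm_runtime_put_sync_suspend() is the "suspend now,
skip the autosuspend delay" variant, assuming I have the helper name
right):

        if (pm_runtime_get_if_in_use(pfdev->dev) <= 0)
                return;

        mmu_hw_do_operation(pfdev, mmu, iova, size, AS_COMMAND_FLUSH_PT);

        /*
         * No pm_runtime_mark_last_busy(): don't count cache
         * maintenance as activity. Drop the reference and, if it was
         * the last one, suspend immediately rather than waiting for
         * the autosuspend timeout.
         */
        pm_runtime_put_sync_suspend(pfdev->dev);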

Rob

