[PATCH 3/3] drm/panfrost: Stay in the threaded MMU IRQ handler until we've handled all IRQs
Boris Brezillon
boris.brezillon at collabora.com
Mon Feb 1 12:59:02 UTC 2021
On Mon, 1 Feb 2021 12:13:49 +0000
Steven Price <steven.price at arm.com> wrote:
> On 01/02/2021 08:21, Boris Brezillon wrote:
> > Doing a hw-irq -> threaded-irq round-trip is counter-productive; stay
> > in the threaded IRQ handler as long as we can.
> >
> > Signed-off-by: Boris Brezillon <boris.brezillon at collabora.com>
>
> Looks fine to me, but I'm interested to know if you actually saw a
> performance improvement. Back-to-back MMU faults should (hopefully) be
> fairly uncommon.
I didn't actually measure the performance improvement or count the
back-to-back MMU faults, but
dEQP-GLES31.functional.draw_indirect.compute_interop.large.drawelements_combined_grid_1000x1000_drawcount_5000
seemed to generate a few of them, so I thought it was worth optimizing
that case given how trivial the change is.
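
To clarify why the re-read helps: the hard-IRQ half masks all MMU
interrupts before waking the thread, so a fault that lands while the
thread is busy only latches in RAWSTAT and won't re-fire the line until
we unmask again. Roughly (paraphrased from panfrost_mmu.c, not a
verbatim quote):

static irqreturn_t panfrost_mmu_irq_handler(int irq, void *data)
{
	struct panfrost_device *pfdev = data;

	if (!mmu_read(pfdev, MMU_INT_STAT))
		return IRQ_NONE;

	/* Mask everything; the threaded handler unmasks once it's done. */
	mmu_write(pfdev, MMU_INT_MASK, 0);
	return IRQ_WAKE_THREAD;
}

Without the RAWSTAT re-read, any fault raised while the thread is
running has to wait for the unmask and then take a fresh
hw-irq -> threaded-irq round trip.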
>
> Regardless:
>
> Reviewed-by: Steven Price <steven.price at arm.com>
>
> > ---
> > drivers/gpu/drm/panfrost/panfrost_mmu.c | 7 +++++++
> > 1 file changed, 7 insertions(+)
> >
> > diff --git a/drivers/gpu/drm/panfrost/panfrost_mmu.c b/drivers/gpu/drm/panfrost/panfrost_mmu.c
> > index 21e552d1ac71..65bc20628c4e 100644
> > --- a/drivers/gpu/drm/panfrost/panfrost_mmu.c
> > +++ b/drivers/gpu/drm/panfrost/panfrost_mmu.c
> > @@ -580,6 +580,8 @@ static irqreturn_t panfrost_mmu_irq_handler_thread(int irq, void *data)
> >  	u32 status = mmu_read(pfdev, MMU_INT_RAWSTAT);
> >  	int i, ret;
> >  
> > +again:
> > +
> >  	for (i = 0; status; i++) {
> >  		u32 mask = BIT(i) | BIT(i + 16);
> >  		u64 addr;
> > @@ -628,6 +630,11 @@ static irqreturn_t panfrost_mmu_irq_handler_thread(int irq, void *data)
> >  		status &= ~mask;
> >  	}
> >  
> > +	/* If we received new MMU interrupts, process them before returning. */
> > +	status = mmu_read(pfdev, MMU_INT_RAWSTAT);
> > +	if (status)
> > +		goto again;
> > +
> >  	mmu_write(pfdev, MMU_INT_MASK, ~0);
> >  	return IRQ_HANDLED;
> >  };
> >
>
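
Side note: if the goto ever bothers anyone, the same drain can be
written as a plain loop. Untested sketch, where handle_fault() is just
a placeholder for the body of the for loop above (it handles and clears
the bits for one address space and returns the remaining status):

	u32 status = mmu_read(pfdev, MMU_INT_RAWSTAT);

	while (status) {
		/* Handle one pending fault from the current snapshot. */
		status = handle_fault(pfdev, status);

		/* Pick up faults that arrived while we were busy. */
		if (!status)
			status = mmu_read(pfdev, MMU_INT_RAWSTAT);
	}

	mmu_write(pfdev, MMU_INT_MASK, ~0);
	return IRQ_HANDLED;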