[PATCH 3/3] drm/panfrost: Stay in the threaded MMU IRQ handler until we've handled all IRQs

Steven Price steven.price at arm.com
Mon Feb 1 12:13:49 UTC 2021


On 01/02/2021 08:21, Boris Brezillon wrote:
> Doing a hw-irq -> threaded-irq round-trip is counter-productive; stay
> in the threaded irq handler as long as we can.
> 
> Signed-off-by: Boris Brezillon <boris.brezillon at collabora.com>

Looks fine to me, but I'm interested to know if you actually saw a 
performance improvement. Back-to-back MMU faults should (hopefully) be 
fairly uncommon.
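
For anyone following along, the cost being avoided here is the usual 
mask-and-wake dance between the top half and the thread. From memory 
(so treat this as a sketch rather than a verbatim quote of the driver), 
the hard IRQ handler looks roughly like:

  static irqreturn_t panfrost_mmu_irq_handler(int irq, void *data)
  {
  	struct panfrost_device *pfdev = data;

  	/* Nothing pending: not our interrupt. */
  	if (!mmu_read(pfdev, MMU_INT_STAT))
  		return IRQ_NONE;

  	/* Mask MMU IRQs; the thread unmasks once it's done. */
  	mmu_write(pfdev, MMU_INT_MASK, 0);
  	return IRQ_WAKE_THREAD;
  }

So without the recheck, a fault arriving while the thread is running is 
only noticed after the thread unmasks, which costs another hw IRQ and 
another thread wakeup just to get back to where we already were.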

Regardless:

Reviewed-by: Steven Price <steven.price at arm.com>

> ---
>   drivers/gpu/drm/panfrost/panfrost_mmu.c | 7 +++++++
>   1 file changed, 7 insertions(+)
> 
> diff --git a/drivers/gpu/drm/panfrost/panfrost_mmu.c b/drivers/gpu/drm/panfrost/panfrost_mmu.c
> index 21e552d1ac71..65bc20628c4e 100644
> --- a/drivers/gpu/drm/panfrost/panfrost_mmu.c
> +++ b/drivers/gpu/drm/panfrost/panfrost_mmu.c
> @@ -580,6 +580,8 @@ static irqreturn_t panfrost_mmu_irq_handler_thread(int irq, void *data)
>   	u32 status = mmu_read(pfdev, MMU_INT_RAWSTAT);
>   	int i, ret;
>   
> +again:
> +
>   	for (i = 0; status; i++) {
>   		u32 mask = BIT(i) | BIT(i + 16);
>   		u64 addr;
> @@ -628,6 +630,11 @@ static irqreturn_t panfrost_mmu_irq_handler_thread(int irq, void *data)
>   		status &= ~mask;
>   	}
>   
> +	/* If we received new MMU interrupts, process them before returning. */
> +	status = mmu_read(pfdev, MMU_INT_RAWSTAT);
> +	if (status)
> +		goto again;
> +
>   	mmu_write(pfdev, MMU_INT_MASK, ~0);
>   	return IRQ_HANDLED;
>   };
> 
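One non-blocking observation: the same flow could be expressed without 
the goto, along these lines (untested sketch, with the existing 
per-fault handling from the loop body compressed into a comment):

  	do {
  		for (i = 0; status; i++) {
  			u32 mask = BIT(i) | BIT(i + 16);

  			/* ... existing per-fault handling ... */

  			status &= ~mask;
  		}

  		/* Pick up anything that arrived while we were busy. */
  		status = mmu_read(pfdev, MMU_INT_RAWSTAT);
  	} while (status);

  	mmu_write(pfdev, MMU_INT_MASK, ~0);
  	return IRQ_HANDLED;

Either way the important property holds: RAWSTAT is re-read before the 
mask is restored, so a fault that sneaks in after the final read still 
latches in RAWSTAT and re-raises the hw IRQ as soon as MMU_INT_MASK is 
written back.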
