[PATCH] drm/msm/a6xx: Fix excessive stack usage
Akhil P Oommen
quic_akhilpo at quicinc.com
Mon Oct 28 09:52:35 UTC 2024
On 10/28/2024 12:13 AM, Arnd Bergmann wrote:
> On Sun, Oct 27, 2024, at 18:05, Akhil P Oommen wrote:
>> Clang-19 and above sometimes end up with multiple copies of the large
>> a6xx_hfi_msg_bw_table structure on the stack. The problem is that
>> a6xx_hfi_send_bw_table() calls a number of device-specific functions to
>> fill the structure, but each of these creates another copy of the
>> structure on the stack, which is then copied into the first.
>>
>> If these functions get inlined, the combined frame busts the warning limit:
>>
>> drivers/gpu/drm/msm/adreno/a6xx_hfi.c:631:12: error: stack frame size
>> (1032) exceeds limit (1024) in 'a6xx_hfi_send_bw_table'
>> [-Werror,-Wframe-larger-than]
>>
>> Fix this by allocating struct a6xx_hfi_msg_bw_table dynamically instead
>> of on the stack. Also, use this opportunity to skip re-initializing the
>> table on every call, which improves GPU wakeup latency.
>>
>> Cc: Arnd Bergmann <arnd at kernel.org>
>
> Please change this to "Reported-by:"
Sure.
>
> The patch looks correct to me, just one idea for improvement.
>
>> b/drivers/gpu/drm/msm/adreno/a6xx_gmu.h
>> index 94b6c5cab6f4..b4a79f88ccf4 100644
>> --- a/drivers/gpu/drm/msm/adreno/a6xx_gmu.h
>> +++ b/drivers/gpu/drm/msm/adreno/a6xx_gmu.h
>> @@ -99,6 +99,7 @@ struct a6xx_gmu {
>> struct completion pd_gate;
>>
>> struct qmp *qmp;
>> + struct a6xx_hfi_msg_bw_table *bw_table;
>> };
>
> I think the bw_table is better just embedded
> in here rather than referenced as a pointer:
>
There are some low-tier chipsets with relatively little RAM that don't
require this table. So, dynamically allocating it here helps to save
640 bytes on those parts (minus the overhead of tracking the allocation).
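
To illustrate the trade-off, embedding the table as suggested would look
roughly like this (a sketch, not proposed code; the "bw_table_valid"
flag is hypothetical):

struct a6xx_gmu {
	/* ... */
	struct qmp *qmp;

	/*
	 * Embedded variant: nothing to allocate or track, but every
	 * GMU instance pays sizeof(struct a6xx_hfi_msg_bw_table)
	 * (~640 bytes), including chipsets that never send a bw table.
	 */
	struct a6xx_hfi_msg_bw_table bw_table;
	bool bw_table_valid;	/* "already built" flag */
};

With the pointer, the NULL check doubles as that flag and the memory is
only allocated on chips that actually need the table.
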
-Akhil
>> + if (gmu->bw_table)
>> + goto send;
>> +
>> + msg = devm_kzalloc(gmu->dev, sizeof(*msg), GFP_KERNEL);
>> + if (!msg)
>> + return -ENOMEM;
>
> It looked like it's always allocated here when the device
> is up, so you can avoid the extra overhead of keeping
> track of the allocation.
>
> Arnd
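
For reference, the allocate-once flow from the patch looks roughly like
this (a simplified sketch: the chip-specific fill helpers are elided,
and the a6xx_hfi_send_msg() call is abbreviated from the driver, so the
exact arguments may differ):

static int a6xx_hfi_send_bw_table(struct a6xx_gmu *gmu)
{
	struct a6xx_hfi_msg_bw_table *msg;

	if (gmu->bw_table)
		goto send;

	/*
	 * devm-managed: freed automatically when the GMU device is
	 * torn down, so no explicit kfree() or tracking is needed.
	 */
	msg = devm_kzalloc(gmu->dev, sizeof(*msg), GFP_KERNEL);
	if (!msg)
		return -ENOMEM;

	/*
	 * Chip-specific helpers fill *msg in place here, instead of
	 * each building a second on-stack copy that is then copied
	 * into the caller's struct.
	 */

	gmu->bw_table = msg;

send:
	return a6xx_hfi_send_msg(gmu, HFI_H2F_MSG_BW_TABLE,
				 gmu->bw_table, sizeof(*gmu->bw_table),
				 NULL, 0);
}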