[PATCH v1 2/5] misc: fastrpc: Move all remote heap allocations to a new list
Ekansh Gupta
ekansh.gupta at oss.qualcomm.com
Thu May 22 05:09:36 UTC 2025
On 5/19/2025 5:05 PM, Srinivas Kandagatla wrote:
> On 5/13/25 05:28, Ekansh Gupta wrote:
>> Remote heap allocations are not organized in a maintainable manner,
>> leading to potential issues with memory management. As the remote
>> heap allocations are maintained in fl mmaps list, the allocations
>> will go away if the audio daemon process is killed but there are
>> chances that audio PD might still be using the memory. Move all
>> remote heap allocations to a dedicated list where the entries are
>> cleaned only for user requests and subsystem shutdown.
>>
>> Fixes: 0871561055e66 ("misc: fastrpc: Add support for audiopd")
>> Cc: stable at kernel.org
>> Signed-off-by: Ekansh Gupta <ekansh.gupta at oss.qualcomm.com>
>> ---
>> drivers/misc/fastrpc.c | 93 ++++++++++++++++++++++++++++++++----------
>> 1 file changed, 72 insertions(+), 21 deletions(-)
>>
>> diff --git a/drivers/misc/fastrpc.c b/drivers/misc/fastrpc.c
>> index ca3721365ddc..d4e38b5e5e6c 100644
>> --- a/drivers/misc/fastrpc.c
>> +++ b/drivers/misc/fastrpc.c
>> @@ -273,10 +273,12 @@ struct fastrpc_channel_ctx {
>> struct kref refcount;
>> /* Flag if dsp attributes are cached */
>> bool valid_attributes;
>> + /* Flag if audio PD init mem was allocated */
>> + bool audio_init_mem;
>> u32 dsp_attributes[FASTRPC_MAX_DSP_ATTRIBUTES];
>> struct fastrpc_device *secure_fdevice;
>> struct fastrpc_device *fdevice;
>> - struct fastrpc_buf *remote_heap;
>> + struct list_head rhmaps;
> Other than Audiopd, do you see any other remote heaps getting added to
> this list?
With the current implementation it is possible, but that is not
correct. I will include a patch to restrict remote heap map and unmap
requests to the audio daemon only.
>
>> struct list_head invoke_interrupted_mmaps;
>> bool secure;
>> bool unsigned_support;
>> @@ -1237,12 +1239,47 @@ static bool is_session_rejected(struct fastrpc_user *fl, bool unsigned_pd_reques
>> return false;
>> }
>>
>> +static void fastrpc_cleanup_rhmaps(struct fastrpc_channel_ctx *cctx)
>> +{
>> + struct fastrpc_buf *buf, *b;
>> + struct list_head temp_list;
>> + int err;
>> + unsigned long flags;
>> +
>> + INIT_LIST_HEAD(&temp_list);
>> +
>> + spin_lock_irqsave(&cctx->lock, flags);
>> + list_splice_init(&cctx->rhmaps, &temp_list);
>> + spin_unlock_irqrestore(&cctx->lock, flags);
> Can you explain why do we need to do this?
To avoid locking issues: during clean-up, I replace the rhmaps list
with an empty one under the lock, then walk and free the detached
list without holding the lock.
>
>> +
>> + list_for_each_entry_safe(buf, b, &temp_list, node) {
>> + if (cctx->vmcount) {
>> + u64 src_perms = 0;
>> + struct qcom_scm_vmperm dst_perms;
>> + u32 i;
>> +
>> + for (i = 0; i < cctx->vmcount; i++)
>> + src_perms |= BIT(cctx->vmperms[i].vmid);
>> +
>> + dst_perms.vmid = QCOM_SCM_VMID_HLOS;
>> + dst_perms.perm = QCOM_SCM_PERM_RWX;
>> + err = qcom_scm_assign_mem(buf->phys, (u64)buf->size,
>> + &src_perms, &dst_perms, 1);
>> + if (err)
>> + continue;
> Memory leak here.
I don't see a better way to handle the failure here: the clean-up is
called when the channel is going down, and if the qcom_scm call fails
there is no way left to safely free this memory. Do you see a better
way to address this, or should I add a comment highlighting this
limitation?
>
>> + }
>> + fastrpc_buf_free(buf);
>> + }
>> +}
>> +
> --srini