[PATCH v1] misc: fastrpc: Add support for multiple PD from one process
Ekansh Gupta
quic_ekangupt at quicinc.com
Thu Jul 4 06:17:22 UTC 2024
On 7/3/2024 4:09 PM, Greg KH wrote:
> On Wed, Jul 03, 2024 at 12:22:00PM +0530, Ekansh Gupta wrote:
>> @@ -268,6 +272,7 @@ struct fastrpc_channel_ctx {
>> struct fastrpc_session_ctx session[FASTRPC_MAX_SESSIONS];
>> spinlock_t lock;
>> struct idr ctx_idr;
>> + struct ida dsp_pgid_ida;
> You have an idr and an ida? Why two different types for the same
> driver?
I'm using an ida here because I only need to allocate and manage unique IDs
without any associated data, so it is more space-efficient than an idr.
Should I keep it uniform across the driver?
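For context, the split described above maps onto the two kernel APIs roughly like
this (a sketch of the kernel calls, not a standalone program; the idr line assumes
the driver's existing ctx allocation, which is not shown in the quoted hunks):

```c
/* idr: maps an allocated ID to a pointer (here a ctx), so later
 * lookups by ID return the object. Used for cctx->ctx_idr. */
id = idr_alloc_cyclic(&cctx->ctx_idr, ctx, 1, FASTRPC_CTX_MAX, GFP_ATOMIC);

/* ida: hands out a bare unique integer with no payload, which is
 * all a DSP process-group ID needs; cheaper than a full idr. */
fl->dsp_pgid = ida_alloc_range(&cctx->dsp_pgid_ida, MIN_FRPC_PGID,
			       MAX_FRPC_PGID, GFP_ATOMIC);

/* and the matching release when the file is closed */
ida_free(&cctx->dsp_pgid_ida, fl->dsp_pgid);
```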
>
>> struct list_head users;
>> struct kref refcount;
>> /* Flag if dsp attributes are cached */
>> @@ -299,6 +304,7 @@ struct fastrpc_user {
>> struct fastrpc_buf *init_mem;
>>
>> int tgid;
>> + int dsp_pgid;
> Are you sure this fits in an int?
I think an int should be fine for IDs in the range of 1000-1064.
>
>> +static int fastrpc_pgid_alloc(struct fastrpc_channel_ctx *cctx)
>> +{
>> + int ret = -1;
> No need to initialize this.
I'll update this.
>
>> +
>> + /* allocate unique id between MIN_FRPC_PGID and MAX_FRPC_PGID */
>> + ret = ida_alloc_range(&cctx->dsp_pgid_ida, MIN_FRPC_PGID,
>> + MAX_FRPC_PGID, GFP_ATOMIC);
>> + if (ret < 0)
>> + return -1;
> Why is -1 a specific value here? Return a real error please.
> Or return 0 if that's not allowed.
Sure, will fix this in next spin.
>
>> +
>> + return ret;
>> +}
>> +
>> static int fastrpc_device_open(struct inode *inode, struct file *filp)
>> {
>> struct fastrpc_channel_ctx *cctx;
>> @@ -1582,6 +1605,12 @@ static int fastrpc_device_open(struct inode *inode, struct file *filp)
>> fl->cctx = cctx;
>> fl->is_secure_dev = fdevice->secure;
>>
>> + fl->dsp_pgid = fastrpc_pgid_alloc(cctx);
>> + if (fl->dsp_pgid == -1) {
>> + dev_dbg(&cctx->rpdev->dev, "too many fastrpc clients, max %u allowed\n", MAX_DSP_PD);
>> + return -EUSERS;
> Why -EUSERS?
This should be -EBUSY, I'll correct this.
>
> And you obviously did not test this as you just leaked memory :(
My bad, I ran the basic fastrpc tests and verified that this use case works, but
missed the error path. Sorry for the miss.
--Ekansh
>
> thanks,
>
> greg k-h
More information about the dri-devel mailing list