[PATCH 6/8] drm/panthor: Implement XGS queues
Simona Vetter
simona.vetter at ffwll.ch
Tue Sep 3 19:17:13 UTC 2024
On Wed, Aug 28, 2024 at 06:26:02PM +0100, Mihail Atanassov wrote:
> +int panthor_xgs_queue_create(struct panthor_file *pfile, u32 vm_id,
> + int eventfd_sync_update, u32 *handle)
> +{
> + struct panthor_device *ptdev = pfile->ptdev;
> + struct panthor_xgs_queue_pool *xgs_queue_pool = pfile->xgs_queues;
> + struct panthor_xgs_queue *queue;
> + struct drm_gpu_scheduler *drm_sched;
> + int ret;
> + int qid;
> +
> + queue = kzalloc(sizeof(*queue), GFP_KERNEL);
> + if (!queue)
> + return -ENOMEM;
> +
> + kref_init(&queue->refcount);
> + INIT_LIST_HEAD(&queue->fence_ctx.in_flight_jobs);
> + INIT_WORK(&queue->release_work, xgs_queue_release_work);
> + queue->ptdev = ptdev;
> +
> + ret = drmm_mutex_init(&ptdev->base, &queue->lock);
This is guaranteed buggy: queue is kzalloc'ed with its own refcount, but
drmm_mutex_init() ties the mutex cleanup to the entirely different
lifetime of the drm_device. Use plain mutex_init() here and call
mutex_destroy() from the queue's release path instead.
Just spotted this while reading around.
-Sima
--
Simona Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch