[PATCH 1/3] dma-buf/dma-fence_array: use kvzalloc

Tvrtko Ursulin tursulin at ursulin.net
Fri Oct 25 08:59:28 UTC 2024


On 24/10/2024 13:41, Christian König wrote:
> Reports indicate that some userspace applications try to merge more than
> 80k fences into a single dma_fence_array, leading to a warning from
> kzalloc() that the requested size becomes too big.
> 
> While that is clearly a userspace bug, we should probably handle that case
> gracefully in the kernel.
> 
> So we can either reject requests to merge more than a reasonable number of
> fences (64k maybe?) or we can start to use kvzalloc() instead of kzalloc().
> This patch does the latter.

Rejecting would potentially be safer, otherwise there is a path for 
userspace to trigger a warning in kvmalloc_node() (see 0829b5bcdd3b 
("drm/i915: 2 GiB of relocations ought to be enough for anybody*")) and 
spam dmesg at will.

Question is what limit to set...
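
Something along these lines, as an untested sketch, is roughly what I 
have in mind (DMA_FENCE_ARRAY_MAX_FENCES is a made-up name and 64k an 
arbitrary value, only there to illustrate the rejection path):

	/* Arbitrary cap so a single merge cannot request a huge allocation. */
	#define DMA_FENCE_ARRAY_MAX_FENCES	(64 * 1024)

	struct dma_fence_array *dma_fence_array_alloc(int num_fences)
	{
		struct dma_fence_array *array;

		/* Reject unreasonable merge requests instead of allocating. */
		if (num_fences > DMA_FENCE_ARRAY_MAX_FENCES)
			return NULL;

		return kzalloc(struct_size(array, callbacks, num_fences),
			       GFP_KERNEL);
	}

With something like that, an abusive userspace gets an error back 
instead of a route to allocation warnings in dmesg.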

Regards,

Tvrtko

> Signed-off-by: Christian König <christian.koenig at amd.com>
> CC: stable at vger.kernel.org
> ---
>   drivers/dma-buf/dma-fence-array.c | 6 +++---
>   1 file changed, 3 insertions(+), 3 deletions(-)
> 
> diff --git a/drivers/dma-buf/dma-fence-array.c b/drivers/dma-buf/dma-fence-array.c
> index 8a08ffde31e7..46ac42bcfac0 100644
> --- a/drivers/dma-buf/dma-fence-array.c
> +++ b/drivers/dma-buf/dma-fence-array.c
> @@ -119,8 +119,8 @@ static void dma_fence_array_release(struct dma_fence *fence)
>   	for (i = 0; i < array->num_fences; ++i)
>   		dma_fence_put(array->fences[i]);
>   
> -	kfree(array->fences);
> -	dma_fence_free(fence);
> +	kvfree(array->fences);
> +	kvfree_rcu(fence, rcu);
>   }
>   
>   static void dma_fence_array_set_deadline(struct dma_fence *fence,
> @@ -153,7 +153,7 @@ struct dma_fence_array *dma_fence_array_alloc(int num_fences)
>   {
>   	struct dma_fence_array *array;
>   
> -	return kzalloc(struct_size(array, callbacks, num_fences), GFP_KERNEL);
> +	return kvzalloc(struct_size(array, callbacks, num_fences), GFP_KERNEL);
>   }
>   EXPORT_SYMBOL(dma_fence_array_alloc);
>   

