[PATCH] drm/xe: Use vmalloc for array of bind allocation in bind IOCTL

Matthew Brost matthew.brost at intel.com
Mon Feb 26 15:07:43 UTC 2024


On Mon, Feb 26, 2024 at 10:18:06AM +0100, Thomas Hellström wrote:
> Hi, Matt
> On Fri, 2024-02-23 at 11:37 -0800, Matthew Brost wrote:
> > Use vmalloc in an effort to allow a user to pass in a large number
> > of binds in an IOCTL (Mesa use case). Also use array allocation
> > functions rather than open coding the size calculation.
> 
> I still think allowing a large number of binds like this is a bit
> dangerous, because to avoid out-of-memory DoSes we will need to
> restrict the number of binds in some way. And if the UMDs can't
> gracefully handle the OOMs and retry with a smaller number of binds,
> they will break. And we're not allowed to break them...
> 

See below, with [1] and this change we match Nouveau.

> Did we end up keeping a max number of binds that we guarantee for the
> foreseeable future?
> 

Based on Paulo's feedback, I think we need to get [1] into 6.8, as
without it the array of binds is fairly useless. Also, Nouveau has
similar uAPI with no such limits, and it likewise uses vmalloc for
these types of allocations. With that, I believe this patch should be
in 6.8 too. If this proves to be a problem we can fix it in future
releases, but if we impose some limit now, it can't really be changed
later, as it then becomes uAPI.
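
For reference, the allocation pattern this patch switches to is roughly
the following. This is a minimal sketch with hypothetical names (struct
bind_op, copy_bind_ops()), not the actual xe code: kvmalloc_array()
does an overflow-checked size calculation, tries kmalloc() first, falls
back to vmalloc() when the request is too large for the slab allocator,
and kvfree() frees memory from either path.

#include <linux/err.h>
#include <linux/slab.h>
#include <linux/types.h>
#include <linux/uaccess.h>

struct bind_op {		/* stand-in for struct drm_xe_vm_bind_op */
	u64 addr;
	u64 range;
};

static struct bind_op *copy_bind_ops(const void __user *uptr, u32 num)
{
	struct bind_op *ops;

	/* Overflow-checked num * sizeof(*ops); NULL if it would wrap. */
	ops = kvmalloc_array(num, sizeof(*ops), GFP_KERNEL);
	if (!ops)
		return ERR_PTR(-ENOMEM);

	if (copy_from_user(ops, uptr, sizeof(*ops) * num)) {
		kvfree(ops);	/* frees kmalloc or vmalloc memory */
		return ERR_PTR(-EFAULT);
	}

	return ops;
}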

Matt

[1] https://patchwork.freedesktop.org/series/129923/

> /Thomas
> 
> 
> > 
> > Fixes: dd08ebf6c352 ("drm/xe: Introduce a new DRM driver for Intel GPUs")
> > Signed-off-by: Matthew Brost <matthew.brost at intel.com>
> > ---
> >  drivers/gpu/drm/xe/xe_vm.c | 23 ++++++++++++-----------
> >  1 file changed, 12 insertions(+), 11 deletions(-)
> > 
> > diff --git a/drivers/gpu/drm/xe/xe_vm.c b/drivers/gpu/drm/xe/xe_vm.c
> > index e3bde897f6e8..45a12207ebf5 100644
> > --- a/drivers/gpu/drm/xe/xe_vm.c
> > +++ b/drivers/gpu/drm/xe/xe_vm.c
> > @@ -2778,8 +2778,9 @@ static int vm_bind_ioctl_check_args(struct xe_device *xe,
> >  		u64 __user *bind_user =
> >  			u64_to_user_ptr(args->vector_of_binds);
> >  
> > -		*bind_ops = kmalloc(sizeof(struct drm_xe_vm_bind_op) *
> > -				    args->num_binds, GFP_KERNEL);
> > +		*bind_ops = kvmalloc_array(args->num_binds,
> > +					   sizeof(struct drm_xe_vm_bind_op),
> > +					   GFP_KERNEL);
> >  		if (!*bind_ops)
> >  			return -ENOMEM;
> >  
> > @@ -2869,7 +2870,7 @@ static int vm_bind_ioctl_check_args(struct xe_device *xe,
> >  
> >  free_bind_ops:
> >  	if (args->num_binds > 1)
> > -		kfree(*bind_ops);
> > +		kvfree(*bind_ops);
> >  	return err;
> >  }
> >  
> > @@ -2957,13 +2958,13 @@ int xe_vm_bind_ioctl(struct drm_device *dev, void *data, struct drm_file *file)
> >  	}
> >  
> >  	if (args->num_binds) {
> > -		bos = kcalloc(args->num_binds, sizeof(*bos), GFP_KERNEL);
> > +		bos = kvcalloc(args->num_binds, sizeof(*bos), GFP_KERNEL);
> >  		if (!bos) {
> >  			err = -ENOMEM;
> >  			goto release_vm_lock;
> >  		}
> >  
> > -		ops = kcalloc(args->num_binds, sizeof(*ops), GFP_KERNEL);
> > +		ops = kvcalloc(args->num_binds, sizeof(*ops), GFP_KERNEL);
> >  		if (!ops) {
> >  			err = -ENOMEM;
> >  			goto release_vm_lock;
> > @@ -3104,10 +3105,10 @@ int xe_vm_bind_ioctl(struct drm_device *dev, void *data, struct drm_file *file)
> >  	for (i = 0; bos && i < args->num_binds; ++i)
> >  		xe_bo_put(bos[i]);
> >  
> > -	kfree(bos);
> > -	kfree(ops);
> > +	kvfree(bos);
> > +	kvfree(ops);
> >  	if (args->num_binds > 1)
> > -		kfree(bind_ops);
> > +		kvfree(bind_ops);
> >  
> >  	return err;
> >  
> > @@ -3131,10 +3132,10 @@ int xe_vm_bind_ioctl(struct drm_device *dev, void *data, struct drm_file *file)
> >  	if (q)
> >  		xe_exec_queue_put(q);
> >  free_objs:
> > -	kfree(bos);
> > -	kfree(ops);
> > +	kvfree(bos);
> > +	kvfree(ops);
> >  	if (args->num_binds > 1)
> > -		kfree(bind_ops);
> > +		kvfree(bind_ops);
> >  	return err;
> >  }
> >  
> 
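
One more note on the kcalloc() -> kvcalloc() hunks above: kvcalloc() is
simply the zero-initializing variant of kvmalloc_array(), so the bos/ops
arrays get the same overflow checking and large-allocation fallback. A
minimal sketch of that shape, with a hypothetical alloc_tables() helper
that is not in the patch:

#include <linux/slab.h>

/*
 * Zero-initialized, overflow-checked array allocations that may fall
 * back to vmalloc; kvfree() works regardless of which path was taken.
 */
static int alloc_tables(void ***bos, void ***ops, unsigned int num)
{
	*bos = kvcalloc(num, sizeof(**bos), GFP_KERNEL);
	if (!*bos)
		return -ENOMEM;

	*ops = kvcalloc(num, sizeof(**ops), GFP_KERNEL);
	if (!*ops) {
		kvfree(*bos);
		*bos = NULL;
		return -ENOMEM;
	}

	return 0;
}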

