[PATCH i-g-t] tests/xe_vm: Report OOM for vm_bind ioctl under memory pressure

Dandamudi, Priyanka priyanka.dandamudi at intel.com
Wed Jul 30 15:03:53 UTC 2025



> -----Original Message-----
> From: Hellstrom, Thomas <thomas.hellstrom at intel.com>
> Sent: 30 July 2025 03:53 PM
> To: igt-dev at lists.freedesktop.org; Dandamudi, Priyanka
> <priyanka.dandamudi at intel.com>
> Subject: Re: [PATCH i-g-t] tests/xe_vm: Report OOM for vm_bind ioctl under
> memory pressure
> 
> On Tue, 2025-07-29 at 10:46 +0530, priyanka.dandamudi at intel.com wrote:
> > From: Priyanka Dandamudi <priyanka.dandamudi at intel.com>
> >
> > Add a test which creates buffer objects on an LR vm and vm_binds them
> > in a loop until it reaches OOM.
> > This is to check that buffer objects on a single vm do not get evicted
> > and that the ioctl instead reports ENOMEM or ENOSPC in non-fault mode.
> >
> > v2: vm_bind may fail with OOM even before the memory allocation goes
> > beyond the visible vram size, for example when VRAM is smaller than
> > expected due to leaked VRAM memory. The test should still pass in that
> > case, with a warning. The VM bind error for OOM could be ENOMEM or
> > ENOSPC. VM bind should succeed at least once for sanity check purposes.
> > Modified the code to handle these changes.
> >
> > Cc: Thomas Hellstrom <thomas.hellstrom at intel.com>
> > Signed-off-by: Priyanka Dandamudi <priyanka.dandamudi at intel.com>
> 
> Hi, Priyanka,
> 
> A couple of style comments below:
> 
> 
> > ---
> >  tests/intel/xe_vm.c | 89 +++++++++++++++++++++++++++++++++++++++++++++
> >  1 file changed, 89 insertions(+)
> >
> > diff --git a/tests/intel/xe_vm.c b/tests/intel/xe_vm.c
> > index c1abb08bb..001134387 100644
> > --- a/tests/intel/xe_vm.c
> > +++ b/tests/intel/xe_vm.c
> > @@ -2368,6 +2368,89 @@ static void invalid_vm_id(int fd)
> >  	do_ioctl_err(fd, DRM_IOCTL_XE_VM_DESTROY, &destroy, ENOENT);
> >  }
> >
> > +/**
> > + * SUBTEST: out-of-memory
> > + * Description: Test if the vm_bind ioctl results in oom when creating and
> > + *	vm_binding buffer objects on an LR vm beyond the available visible vram size.
> > + * Functionality: oom
> > + * Test category: functionality test
> > + */
> > +static void test_oom(int fd)
> > +{
> > +#define USER_FENCE_VALUE 0xdeadbeefdeadbeefull
> > +#define BO_SIZE xe_bb_size(fd, SZ_512M)
> > +#define MAX_BUFS (int)(xe_visible_vram_size(fd, 0) / BO_SIZE)
> > +	uint64_t addr = 0x1a0000;
> > +	uint64_t vm_sync;
> > +	uint32_t bo[MAX_BUFS + 1];
> > +	uint32_t *data[MAX_BUFS + 1];
> > +	uint32_t vm;
> > +	struct drm_xe_sync sync[1] = {
> > +		{ .type = DRM_XE_SYNC_TYPE_USER_FENCE, .flags = DRM_XE_SYNC_FLAG_SIGNAL,
> > +		  .timeline_value = USER_FENCE_VALUE },
> > +	};
> > +	size_t bo_size = BO_SIZE;
> > +	int total_bufs = MAX_BUFS;
> > +	int bind_vm = 0;
> > +	bool oom = false;
> > +
> > +	vm = xe_vm_create(fd, DRM_XE_VM_CREATE_FLAG_LR_MODE, 0);
> > +	for (int iter = 0; iter <= total_bufs; iter++) {
> > +		int err = 0;
> > +		bo[iter] = xe_bo_create(fd, 0, bo_size,
> > +					vram_if_possible(fd, 0),
> > +					DRM_XE_GEM_CREATE_FLAG_DEFER_BACKING |
> > +					DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM);
> > +
> > +		sync[0].addr = to_user_pointer(&vm_sync);
> > +		err = __xe_vm_bind(fd, vm, 0, bo[iter], 0,
> > +				   addr + bo_size * iter, bo_size,
> > +				   DRM_XE_VM_BIND_OP_MAP, 0, sync,
> > +				   1, 0, DEFAULT_PAT_INDEX, 0);
> > +
> > +		if (err) {
> > +			if (err == -ENOMEM || err == -ENOSPC) {
> > +				oom = true;
> > +				break;
> > +			}
> > +			else
> > +				igt_assert_f(!err, "Unexpected error %d for vm bind\n",
> > +					     err);
> > +		}
> > +		else
> > +			bind_vm = bind_vm + 1;
> 
> Not sure how strict igt coding style is, but if it's the same as kernel
> coding style, then the else should be on the same line as the closing
> brace, and the else path should also be enclosed in braces.
> 
> 
> > +
> > +		xe_wait_ufence(fd, &vm_sync, USER_FENCE_VALUE, 0, NSEC_PER_SEC);
> > +		vm_sync = 0;
> > +		data[iter] = xe_bo_map(fd, bo[iter], bo_size);
> > +		memset(data[iter], 0, bo_size);
> > +	}
> > +
> > +	igt_assert_f(oom, "OOM scenario is not working as expected\n");
> > +
> > +	if (bind_vm < total_bufs) {
> > +		for (int iter = 0; iter < bind_vm; iter++) {
> > +			sync[0].addr = to_user_pointer(&vm_sync);
> > +			xe_vm_unbind_async(fd, vm, 0, 0, addr + bo_size * iter, bo_size,
> > +					   sync, 1);
> > +			xe_wait_ufence(fd, &vm_sync, USER_FENCE_VALUE, 0, NSEC_PER_SEC);
> > +			munmap(data[iter], bo_size);
> > +			gem_close(fd, bo[iter]);
> > +		}
> > +		igt_warn("VRAM was smaller than estimated, may be due to leaked VRAM memory\n");
> 
> Couldn't this be re-arranged like
> 	if (bind_vm < total_bufs)
> 		igt_warn("VRAM was smaller than estimated, may be due to leaked VRAM memory\n");
> 	for (int iter = 0; iter < bind_vm; iter++) {...
> 
> 
> 
> > +	}
> > +	else {
> 
> 	So that this code-path is not needed:
> 
> Thanks,
> Thomas
Addressed all the comments in the next version.
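For completeness, the re-arranged cleanup would look roughly like this (a
sketch only, reusing the helpers already used in the patch; a single loop up
to bind_vm covers both cases since bind_vm counts the successful binds):

	if (bind_vm < total_bufs)
		igt_warn("VRAM was smaller than estimated, may be due to leaked VRAM memory\n");

	/* Unbind, unmap and close only the buffer objects that were actually bound. */
	for (int iter = 0; iter < bind_vm; iter++) {
		sync[0].addr = to_user_pointer(&vm_sync);
		xe_vm_unbind_async(fd, vm, 0, 0, addr + bo_size * iter, bo_size,
				   sync, 1);
		xe_wait_ufence(fd, &vm_sync, USER_FENCE_VALUE, 0, NSEC_PER_SEC);
		munmap(data[iter], bo_size);
		gem_close(fd, bo[iter]);
	}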
> 
> 
> > +		for (int iter = 0; iter < total_bufs; iter++) {
> > +			sync[0].addr = to_user_pointer(&vm_sync);
> > +			xe_vm_unbind_async(fd, vm, 0, 0, addr + bo_size * iter, bo_size,
> > +					   sync, 1);
> > +			xe_wait_ufence(fd, &vm_sync, USER_FENCE_VALUE, 0, NSEC_PER_SEC);
> > +			munmap(data[iter], bo_size);
> > +			gem_close(fd, bo[iter]);
> > +		}
> > +	}
> > +}
> > +
> >  igt_main
> >  {
> >  	struct drm_xe_engine_class_instance *hwe, *hwe_non_copy = NULL;
> > @@ -2759,6 +2842,12 @@ igt_main
> >  	igt_subtest("invalid-vm-id")
> >  		invalid_vm_id(fd);
> >
> > +	igt_subtest("out-of-memory") {
> > +		igt_require(xe_has_vram(fd));
> > +		igt_assert(xe_visible_vram_size(fd, 0));
> > +		test_oom(fd);
> > +	}
> > +
> >  	igt_fixture
> >  		drm_close_driver(fd);
> >  }


