[PATCH i-g-t v3 05/10] tests/intel/xe_svm: add huge page access test for SVM

Zeng, Oak oak.zeng at intel.com
Sat May 18 02:01:06 UTC 2024



> -----Original Message-----
> From: Bommu, Krishnaiah <krishnaiah.bommu at intel.com>
> Sent: Friday, May 17, 2024 7:47 AM
> To: igt-dev at lists.freedesktop.org
> Cc: Bommu, Krishnaiah <krishnaiah.bommu at intel.com>; Zeng, Oak
> <oak.zeng at intel.com>; Ghimiray, Himal Prasad
> <himal.prasad.ghimiray at intel.com>
> Subject: [PATCH i-g-t v3 05/10] tests/intel/xe_svm: add huge page access test
> for SVM
> 
> svm-huge-page verifies the Shared Virtual Memory (SVM) functionality
> by using huge page access. The test allocates 2MB of aligned memory
> to utilize huge pages and verifies GPU access to this memory.
> 
> Subtest:
> - svm-huge-page: Verifies that the GPU can correctly access memory allocated
>   as huge pages (2MB aligned).
> 
> Signed-off-by: Bommu Krishnaiah <krishnaiah.bommu at intel.com>
> Cc: Oak Zeng <oak.zeng at intel.com>
> Cc: Himal Prasad Ghimiray <himal.prasad.ghimiray at intel.com>
> ---
>  tests/intel/xe_svm.c | 50 ++++++++++++++++++++++++++++++++++++++++++++
>  1 file changed, 50 insertions(+)
> 
> diff --git a/tests/intel/xe_svm.c b/tests/intel/xe_svm.c
> index 73a232c47..d9629246c 100644
> --- a/tests/intel/xe_svm.c
> +++ b/tests/intel/xe_svm.c
> @@ -30,6 +30,9 @@
>   *
>   * SUBTEST: svm-random-access
>   * Description: Verifies that the GPU can randomly access and correctly store
> values in malloc'ed memory.
> + *
> + * SUBTEST: svm-huge-page
> + * Description: Verifies that the GPU can correctly access memory allocated as
> + * huge pages (2MB aligned).
>   */
> 
>  #include <fcntl.h>
> @@ -156,6 +159,49 @@ static void svm_random_access(int fd, uint32_t vm,
> struct drm_xe_engine_class_in
>  	free(dst);
>  }
> 
> +/**
> + * Test the behavior of transparent huge pages.
> + * Allocate 2MB of aligned memory so a huge page
> + * is used. What happens if the driver only migrates
> + * one 4k page of that buffer to the GPU?

On i915, we only migrate a single 4k page on a GPU page fault. On xekmd, we changed this behavior: for now we migrate the whole CPU VMA. Once the migration granularity work Himal is doing lands, we will migrate at the configured migration granularity. The default granularity is 2MiB.

> + *
> + * Test results show that the huge page is split
> + * into small pages and only one 4k page is
> + * migrated.
> + */
> +static void svm_thp(int fd, uint32_t vm, struct drm_xe_engine_class_instance
> *eci)
> +{
> +	uint64_t gpu_va = 0x1a0000;
> +	size_t bo_size = xe_bb_size(fd, PAGE_ALIGN_UFENCE);
> +	uint32_t size = 1 << 21; /* 2MB */
> +	uint32_t *dst;
> +	int ret;
> +
> +	struct xe_buffer cmd_buf = {
> +		.fd = fd,
> +		.gpu_addr = (void *)(uintptr_t)gpu_va,
> +		.vm = vm,
> +		.size = bo_size,
> +		.placement = vram_if_possible(fd, eci->gt_id),
> +		.flag = DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM,
> +	};
> +
> +	ret = posix_memalign((void **)&dst, size, size);
> +	igt_assert_eq(ret, 0);
> +	/*
> +	 * TODO: huge page advice causes the system to hang
> +	 * when the process exits... very possibly an hmm bug.
> +	 */

Not sure whether we still have this issue or not. We will have to test it to see.

Oak
> +	memset(dst, 0xbe, size);
> +
> +	xe_create_cmdbuf(&cmd_buf, insert_store, (uint64_t)dst, 0xc0ffee, eci);
> +	xe_submit_cmd(&cmd_buf);
> +
> +	igt_assert_eq(*dst, 0xc0ffee);
> +
> +	xe_destroy_cmdbuf(&cmd_buf);
> +	free(dst);
> +}
> +
>  igt_main
>  {
>  	int fd;
> @@ -184,6 +230,10 @@ igt_main
>  		xe_for_each_engine(fd, hwe)
>  			svm_random_access(fd, vm, hwe);
> 
> +	igt_subtest_f("svm-huge-page")
> +		xe_for_each_engine(fd, hwe)
> +			svm_thp(fd, vm, hwe);
> +
>  	igt_fixture {
>  		xe_vm_destroy(fd, vm);
>  		drm_close_driver(fd);
> --
> 2.25.1


