<!DOCTYPE html><html><head>
<meta http-equiv="Content-Type" content="text/html; charset=utf-8">
</head>
<body>
<p><br>
</p>
<div class="moz-cite-prefix">On 7/18/2024 6:27 PM, Matthew Auld
wrote:<br>
</div>
<blockquote type="cite" cite="mid:005ede82-0d72-4ad5-87a4-9a8bfdb81d2a@intel.com">On
04/07/2024 09:18, Nirmoy Das wrote:
<br>
<blockquote type="cite">On LNL, because of flat CCS, the driver
creates a migrate job to clear
<br>
CCS metadata. Extend that to also clear system pages using the GPU.
<br>
Inform TTM to allocate pages without __GFP_ZERO, avoiding double
<br>
page clearing, by clearing the TTM_TT_FLAG_ZERO_ALLOC flag, and set
<br>
TTM_TT_FLAG_CLEARED_ON_FREE while freeing to skip the ttm pool's
<br>
clear-on-free, as Xe now takes care of clearing pages. If a bo
<br>
is in system placement and has a CPU mapping, then GPU clear is
<br>
avoided for it, as such a BO has no DMA mapping.
<br>
<br>
To test the patch, I created a small test that submits a job
<br>
after binding buffers of various sizes, which shows good gains for
<br>
larger buffers. For smaller buffer sizes the results are not very
<br>
reliable, as they vary a lot.
<br>
<br>
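<p>The flag handoff described above can be modelled in isolation. A
standalone sketch follows; the flag values and function names are
illustrative stand-ins only, not the real TTM/Xe definitions (the real
flags live in the TTM headers):</p>

```c
/* Standalone sketch of the page-flag handoff; values are illustrative,
 * not the real TTM definitions. */
enum {
	FLAG_ZERO_ALLOC      = 0x1, /* stand-in for TTM_TT_FLAG_ZERO_ALLOC */
	FLAG_CLEARED_ON_FREE = 0x2, /* stand-in for TTM_TT_FLAG_CLEARED_ON_FREE */
};

/* tt creation: if the device can GPU-clear system pages, mark the tt so
 * the pool later skips its own clear-on-free (Xe clears instead).
 * Zeroed pages only matter for ttm_bo_type_device BOs, since those are
 * the ones visible to userspace. */
unsigned int tt_create_flags(int is_device_bo, int gpu_page_clear)
{
	unsigned int flags = FLAG_ZERO_ALLOC; /* TTM default: zero on alloc */

	if (is_device_bo && gpu_page_clear)
		flags |= FLAG_CLEARED_ON_FREE;
	return flags;
}

/* tt populate: when the GPU owns clearing, drop ZERO_ALLOC so the page
 * allocator does not also zero the pages (no double clear). */
unsigned int tt_populate_flags(unsigned int flags)
{
	if (flags & FLAG_CLEARED_ON_FREE)
		flags &= ~FLAG_ZERO_ALLOC;
	return flags;
}
```

<p>The point of the model is only that once the GPU owns clearing,
ZERO_ALLOC is dropped at populate time while CLEARED_ON_FREE stays set
so the pool skips its clear on free.</p>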
Without the patch
<br>
sudo ~/igt-gpu-tools/build/tests/xe_exec_store --run
<br>
basic-store-benchmark
<br>
IGT-Version: 1.28-g2ed908c0b (x86_64) (Linux: 6.10.0-rc2-xe+
x86_64)
<br>
Using IGT_SRANDOM=1719237905 for randomisation
<br>
Opened device: /dev/dri/card0
<br>
Starting subtest: basic-store-benchmark
<br>
Starting dynamic subtest: WC
<br>
Dynamic subtest WC: SUCCESS (0.000s)
<br>
Time taken for size SZ_4K: 9493 us
<br>
Time taken for size SZ_2M: 5503 us
<br>
Time taken for size SZ_64M: 13016 us
<br>
Time taken for size SZ_128M: 29464 us
<br>
Time taken for size SZ_256M: 38408 us
<br>
Time taken for size SZ_1G: 148758 us
<br>
Starting dynamic subtest: WB
<br>
Dynamic subtest WB: SUCCESS (0.000s)
<br>
Time taken for size SZ_4K: 3889 us
<br>
Time taken for size SZ_2M: 6091 us
<br>
Time taken for size SZ_64M: 20920 us
<br>
Time taken for size SZ_128M: 32394 us
<br>
Time taken for size SZ_256M: 61710 us
<br>
Time taken for size SZ_1G: 215437 us
<br>
Subtest basic-store-benchmark: SUCCESS (0.589s)
<br>
<br>
With the patch:
<br>
sudo ~/igt-gpu-tools/build/tests/xe_exec_store --run
<br>
basic-store-benchmark
<br>
IGT-Version: 1.28-g2ed908c0b (x86_64) (Linux: 6.10.0-rc2-xe+
x86_64)
<br>
Using IGT_SRANDOM=1719238062 for randomisation
<br>
Opened device: /dev/dri/card0
<br>
Starting subtest: basic-store-benchmark
<br>
Starting dynamic subtest: WC
<br>
Dynamic subtest WC: SUCCESS (0.000s)
<br>
Time taken for size SZ_4K: 11803 us
<br>
Time taken for size SZ_2M: 4237 us
<br>
Time taken for size SZ_64M: 8649 us
<br>
Time taken for size SZ_128M: 14682 us
<br>
Time taken for size SZ_256M: 22156 us
<br>
Time taken for size SZ_1G: 74457 us
<br>
Starting dynamic subtest: WB
<br>
Dynamic subtest WB: SUCCESS (0.000s)
<br>
Time taken for size SZ_4K: 5129 us
<br>
Time taken for size SZ_2M: 12563 us
<br>
Time taken for size SZ_64M: 14860 us
<br>
Time taken for size SZ_128M: 26064 us
<br>
Time taken for size SZ_256M: 47167 us
<br>
Time taken for size SZ_1G: 170304 us
<br>
Subtest basic-store-benchmark: SUCCESS (0.417s)
<br>
<br>
With the patch and init_on_alloc=0
<br>
sudo ~/igt-gpu-tools/build/tests/xe_exec_store --run
<br>
basic-store-benchmark
<br>
IGT-Version: 1.28-g2ed908c0b (x86_64) (Linux: 6.10.0-rc2-xe+
x86_64)
<br>
Using IGT_SRANDOM=1719238219 for randomisation
<br>
Opened device: /dev/dri/card0
<br>
Starting subtest: basic-store-benchmark
<br>
Starting dynamic subtest: WC
<br>
Dynamic subtest WC: SUCCESS (0.000s)
<br>
Time taken for size SZ_4K: 4803 us
<br>
Time taken for size SZ_2M: 9212 us
<br>
Time taken for size SZ_64M: 9643 us
<br>
Time taken for size SZ_128M: 13479 us
<br>
Time taken for size SZ_256M: 22429 us
<br>
Time taken for size SZ_1G: 83110 us
<br>
Starting dynamic subtest: WB
<br>
Dynamic subtest WB: SUCCESS (0.000s)
<br>
Time taken for size SZ_4K: 4003 us
<br>
Time taken for size SZ_2M: 4443 us
<br>
Time taken for size SZ_64M: 12960 us
<br>
Time taken for size SZ_128M: 13741 us
<br>
Time taken for size SZ_256M: 26841 us
<br>
Time taken for size SZ_1G: 84746 us
<br>
Subtest basic-store-benchmark: SUCCESS (0.290s)
<br>
<br>
v2: Handle regression on dgfx (Himal)
<br>
Update commit message as no ttm API changes needed.
<br>
v3: Fix KUnit test.
<br>
v4: Handle data leak on cpu mmap (Thomas)
<br>
<br>
Cc: Himal Prasad Ghimiray
<a class="moz-txt-link-rfc2396E" href="mailto:himal.prasad.ghimiray@intel.com"><himal.prasad.ghimiray@intel.com></a>
<br>
Cc: Matthew Auld <a class="moz-txt-link-rfc2396E" href="mailto:matthew.auld@intel.com"><matthew.auld@intel.com></a>
<br>
Cc: "Thomas Hellström" <a class="moz-txt-link-rfc2396E" href="mailto:thomas.hellstrom@linux.intel.com"><thomas.hellstrom@linux.intel.com></a>
<br>
Signed-off-by: Nirmoy Das <a class="moz-txt-link-rfc2396E" href="mailto:nirmoy.das@intel.com"><nirmoy.das@intel.com></a>
<br>
---
<br>
drivers/gpu/drm/xe/xe_bo.c | 25
++++++++++++++++++++++++-
<br>
drivers/gpu/drm/xe/xe_device.c | 7 +++++++
<br>
drivers/gpu/drm/xe/xe_device_types.h | 2 ++
<br>
3 files changed, 33 insertions(+), 1 deletion(-)
<br>
<br>
diff --git a/drivers/gpu/drm/xe/xe_bo.c
b/drivers/gpu/drm/xe/xe_bo.c
<br>
index 4d6315d2ae9a..b76a44fcf3b1 100644
<br>
--- a/drivers/gpu/drm/xe/xe_bo.c
<br>
+++ b/drivers/gpu/drm/xe/xe_bo.c
<br>
@@ -387,6 +387,13 @@ static struct ttm_tt
*xe_ttm_tt_create(struct ttm_buffer_object *ttm_bo,
<br>
caching = ttm_uncached;
<br>
}
<br>
+ /* If the device can support gpu clear pages then set
proper ttm
<br>
</blockquote>
<br>
Nit: for multi-line block comment style is usually:
<br>
<br>
/*
<br>
* If the device
<br>
<br>
<blockquote type="cite">+ * flag. Zeroed pages are only
required for ttm_bo_type_device so
<br>
+ * unwanted data is leaked to userspace.
<br>
</blockquote>
<br>
not leaked?
<br>
</blockquote>
<p><br>
</p>
<p>Yes, should be not leaked.<br>
</p>
<blockquote type="cite" cite="mid:005ede82-0d72-4ad5-87a4-9a8bfdb81d2a@intel.com">
<br>
<blockquote type="cite">+ */
<br>
+ if (ttm_bo->type == ttm_bo_type_device &&
xe->mem.gpu_page_clear)
<br>
+ page_flags |= TTM_TT_FLAG_CLEARED_ON_FREE;
<br>
+
<br>
err = ttm_tt_init(&tt->ttm, &bo->ttm,
page_flags, caching, extra_pages);
<br>
if (err) {
<br>
kfree(tt);
<br>
@@ -408,6 +415,10 @@ static int xe_ttm_tt_populate(struct
ttm_device *ttm_dev, struct ttm_tt *tt,
<br>
if (tt->page_flags & TTM_TT_FLAG_EXTERNAL)
<br>
return 0;
<br>
+ /* Clear TTM_TT_FLAG_ZERO_ALLOC when GPU is set to clear
pages */
<br>
+ if (tt->page_flags & TTM_TT_FLAG_CLEARED_ON_FREE)
<br>
+ tt->page_flags &= ~TTM_TT_FLAG_ZERO_ALLOC;
<br>
+
<br>
err = ttm_pool_alloc(&ttm_dev->pool, tt, ctx);
<br>
if (err)
<br>
return err;
<br>
@@ -653,6 +664,14 @@ static int xe_bo_move(struct
ttm_buffer_object *ttm_bo, bool evict,
<br>
int ret = 0;
<br>
+ /*
<br>
+ * Clear TTM_TT_FLAG_CLEARED_ON_FREE on bo creation path
when
<br>
+ * moving to system, as the bo doesn't have a dma mapping.
<br>
+ */
<br>
</blockquote>
<br>
Ok, so I guess with deferred allocation you always get CPU clear,
</blockquote>
<p>Yes</p>
<p><br>
</p>
<blockquote type="cite" cite="mid:005ede82-0d72-4ad5-87a4-9a8bfdb81d2a@intel.com">but
there is no funny business in the fault handler.
<br>
</blockquote>
<p>There is sort of one, so we have 3 cases: <br>
</p>
<p>1. DRM_XE_GEM_CREATE_FLAG_DEFER_BACKING + CPU mmap --> CPU clear
always</p>
<p>2. Not DRM_XE_GEM_CREATE_FLAG_DEFER_BACKING + CPU mmap --> GPU
clear</p>
<p>I wasn't sure if this was correct, but Thomas confirmed that this is
fine as "the cpu fault handler already waits for the kernel fence that
xe_migrate_clear() attaches".</p>
<p>3. Not DRM_XE_GEM_CREATE_FLAG_DEFER_BACKING + exec_buf/GPU access
--> GPU clear</p>
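<p>Reduced to a helper, the three cases collapse to one question: who
owns the clear. This is an illustrative summary, not driver code:</p>

```c
enum clear_path { CPU_CLEAR, GPU_CLEAR }; /* illustrative, not driver code */

/* Deferred backing allocates pages at CPU fault time, so the kernel's
 * fault path zeroes them (case 1). Otherwise the migrate engine clears
 * the pages, and a later CPU fault simply waits on the fence that
 * xe_migrate_clear() attached (cases 2 and 3). */
enum clear_path who_clears(int defer_backing)
{
	return defer_backing ? CPU_CLEAR : GPU_CLEAR;
}
```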
<p><br>
</p>
<blockquote type="cite" cite="mid:005ede82-0d72-4ad5-87a4-9a8bfdb81d2a@intel.com">
<br>
<blockquote type="cite">+ if (!old_mem && ttm
&& !ttm_tt_is_populated(ttm)) {
<br>
+ ttm->page_flags &= ~TTM_TT_FLAG_CLEARED_ON_FREE;
<br>
+ }
<br>
</blockquote>
<br>
Nit: Can drop the {}
<br>
</blockquote>
<p>+1</p>
<p><br>
</p>
<blockquote type="cite" cite="mid:005ede82-0d72-4ad5-87a4-9a8bfdb81d2a@intel.com">
<br>
<blockquote type="cite">+
<br>
/* Bo creation path, moving to system or TT. */
<br>
if ((!old_mem && ttm) &&
!handle_system_ccs) {
<br>
if (new_mem->mem_type == XE_PL_TT)
<br>
@@ -676,7 +695,8 @@ static int xe_bo_move(struct
ttm_buffer_object *ttm_bo, bool evict,
<br>
(!mem_type_is_vram(old_mem_type)
&& !tt_has_data);
<br>
needs_clear = (ttm && ttm->page_flags &
TTM_TT_FLAG_ZERO_ALLOC) ||
<br>
- (!ttm && ttm_bo->type ==
ttm_bo_type_device);
<br>
+ (!ttm && ttm_bo->type == ttm_bo_type_device)
||
<br>
+ (ttm && ttm->page_flags &
TTM_TT_FLAG_CLEARED_ON_FREE);
<br>
if (new_mem->mem_type == XE_PL_TT) {
<br>
ret = xe_tt_map_sg(ttm);
<br>
@@ -790,6 +810,9 @@ static int xe_bo_move(struct
ttm_buffer_object *ttm_bo, bool evict,
<br>
handle_system_ccs;
<br>
bool clear_bo_data =
mem_type_is_vram(new_mem->mem_type);
<br>
+ if (ttm && (ttm->page_flags &
TTM_TT_FLAG_CLEARED_ON_FREE))
<br>
+ clear_bo_data |= true;
<br>
+
<br>
fence = xe_migrate_clear(migrate, bo, new_mem,
<br>
clear_bo_data, clear_ccs);
<br>
}
<br>
diff --git a/drivers/gpu/drm/xe/xe_device.c
b/drivers/gpu/drm/xe/xe_device.c
<br>
index 03492fbcb8fb..7c682a53f06e 100644
<br>
--- a/drivers/gpu/drm/xe/xe_device.c
<br>
+++ b/drivers/gpu/drm/xe/xe_device.c
<br>
@@ -636,6 +636,13 @@ int xe_device_probe(struct xe_device *xe)
<br>
if (err)
<br>
goto err;
<br>
+ /**
<br>
</blockquote>
<br>
Nit: can drop the extra * since this is not actual kernel-doc.
<br>
</blockquote>
<p>+1</p>
<p><br>
</p>
<blockquote type="cite" cite="mid:005ede82-0d72-4ad5-87a4-9a8bfdb81d2a@intel.com">
<br>
<blockquote type="cite">+ * On iGFX device with flat CCS, we
clear CCS metadata, let's extend that
<br>
+ * and use GPU to clear pages as well.
<br>
+ */
<br>
+ if (xe_device_has_flat_ccs(xe) && !IS_DGFX(xe))
<br>
+ xe->mem.gpu_page_clear = true;
<br>
</blockquote>
<br>
Could potentially move this into xe_ttm_sys_mgr_init() ?
<br>
</blockquote>
<p><br>
</p>
<p>Yes, good idea.</p>
<blockquote type="cite" cite="mid:005ede82-0d72-4ad5-87a4-9a8bfdb81d2a@intel.com">
<br>
<blockquote type="cite">+
<br>
err = xe_vram_probe(xe);
<br>
if (err)
<br>
goto err;
<br>
diff --git a/drivers/gpu/drm/xe/xe_device_types.h
b/drivers/gpu/drm/xe/xe_device_types.h
<br>
index 3bca6d344744..28eaf2ab1f25 100644
<br>
--- a/drivers/gpu/drm/xe/xe_device_types.h
<br>
+++ b/drivers/gpu/drm/xe/xe_device_types.h
<br>
@@ -325,6 +325,8 @@ struct xe_device {
<br>
struct xe_mem_region vram;
<br>
/** @mem.sys_mgr: system TTM manager */
<br>
struct ttm_resource_manager sys_mgr;
<br>
+ /** @gpu_page_clear: clear pages offloaded to GPU */
<br>
</blockquote>
<br>
@mem.gpu_page_clear?
<br>
<br>
Should we maybe also rename this to gpu_page_clear_sys or similar?
Since we technically already "clear pages offloaded to GPU" for
vram.
<br>
</blockquote>
<p><br>
</p>
<p>Yes, gpu_page_clear_sys should make it more clear.</p>
<p><br>
</p>
<p>Thanks,</p>
<p>Nirmoy<br>
</p>
<blockquote type="cite" cite="mid:005ede82-0d72-4ad5-87a4-9a8bfdb81d2a@intel.com">
<br>
<blockquote type="cite">+ bool gpu_page_clear;
<br>
} mem;
<br>
/** @sriov: device level virtualization data */
<br>
</blockquote>
</blockquote>
</body>
</html>