<html><head>
<meta http-equiv="Content-Type" content="text/html; charset=utf-8">
</head>
<body>
<div class="moz-cite-prefix">On 2022-07-25 14:02, Felix Kuehling
wrote:<br>
</div>
<blockquote type="cite" cite="mid:3d31300f-4f32-1aa2-8c50-c2a4f79a6ca2@amd.com">
<br>
Am 2022-07-25 um 13:19 schrieb Philip Yang:
<br>
<blockquote type="cite">This will be used to split giant svm range
into smaller ranges, to
<br>
support VRAM overcommitment by giant range and improve GPU retry
fault
<br>
recover on giant range.
<br>
<br>
Signed-off-by: Philip Yang <a class="moz-txt-link-rfc2396E" href="mailto:Philip.Yang@amd.com"><Philip.Yang@amd.com></a>
<br>
---
<br>
drivers/gpu/drm/amd/amdkfd/kfd_migrate.c | 2 ++
<br>
drivers/gpu/drm/amd/amdkfd/kfd_svm.c | 19
+++++++++++++++++++
<br>
drivers/gpu/drm/amd/amdkfd/kfd_svm.h | 3 +++
<br>
3 files changed, 24 insertions(+)
<br>
<br>
diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_migrate.c
b/drivers/gpu/drm/amd/amdkfd/kfd_migrate.c
<br>
index 9667015a6cbc..b1f87aa6138b 100644
<br>
--- a/drivers/gpu/drm/amd/amdkfd/kfd_migrate.c
<br>
+++ b/drivers/gpu/drm/amd/amdkfd/kfd_migrate.c
<br>
@@ -1019,6 +1019,8 @@ int svm_migrate_init(struct amdgpu_device
*adev)
<br>
amdgpu_amdkfd_reserve_system_mem(SVM_HMM_PAGE_STRUCT_SIZE(size));
<br>
+ svm_range_set_max_pages(adev);
<br>
+
<br>
pr_info("HMM registered %ldMB device memory\n", size
>> 20);
<br>
return 0;
<br>
diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_svm.c
b/drivers/gpu/drm/amd/amdkfd/kfd_svm.c
<br>
index b592aee6d9d6..098060147de6 100644
<br>
--- a/drivers/gpu/drm/amd/amdkfd/kfd_svm.c
<br>
+++ b/drivers/gpu/drm/amd/amdkfd/kfd_svm.c
<br>
@@ -46,6 +46,11 @@
<br>
*/
<br>
#define AMDGPU_SVM_RANGE_RETRY_FAULT_PENDING (2UL *
NSEC_PER_MSEC)
<br>
+/* Giant svm range split into smaller ranges based on this,
it is decided using
<br>
+ * minimum of all dGPU/APU 1/32 VRAM size, between 2MB to 1GB
and align to 2MB.
<br>
+ */
<br>
+uint64_t max_svm_range_pages;
<br>
+
<br>
struct criu_svm_metadata {
<br>
struct list_head list;
<br>
struct kfd_criu_svm_range_priv_data data;
<br>
@@ -1870,6 +1875,20 @@ static struct svm_range
*svm_range_clone(struct svm_range *old)
<br>
return new;
<br>
}
<br>
+void svm_range_set_max_pages(struct amdgpu_device *adev)
<br>
+{
<br>
+ uint64_t max_pages;
<br>
+ uint64_t pages;
<br>
+
<br>
+ /* 1/32 VRAM size in pages */
<br>
+ pages = adev->gmc.real_vram_size >> 17;
<br>
+ pages = clamp(pages, 1ULL << 9, 1ULL << 18);
<br>
+ max_pages = READ_ONCE(max_svm_range_pages);
<br>
+ max_pages = min_not_zero(max_pages, pages);
<br>
+ max_pages = ALIGN(max_pages, 1ULL << 9);
<br>
</blockquote>
<br>
In the next patch you use max_svm_range_pages as the alignment
parameter in an ALIGN_DOWN macro. The ALIGN... macros assume that
the alignment is a power of two, so just aligning it to 2MB is not
enough.
<br>
<br>
</blockquote>
You are right; a test with 6MB failed. I will change the alignment
to use rounddown_pow_of_two. <br>
<blockquote type="cite" cite="mid:3d31300f-4f32-1aa2-8c50-c2a4f79a6ca2@amd.com">I also
don't understand why you do the alignment after taking the
min_not_zero. If you assume that max_svm_range_pages was correctly
aligned before, you can just do the alignment to a power of two
before the min_not_zero call.
<br>
<br>
The READ_ONCE ... WRITE_ONCE sequence is not atomic. It should work
as long as this function can't be called from multiple threads
concurrently. That is, it should work as long as GPU initialization
or hotplug is somehow serialized. I'm not sure whether that's the
case. If that is not assured, the proper way to do this is either
with a global or static spinlock or with a cmpxchg in a retry loop.
<br>
</blockquote>
<p>It looks like pci_probe is serialized on mGPUs, but you are right
that this is not clear from the calling path, since it runs in a
scheduled work. I will change it to use cmpxchg in the next
version.</p>
<p>Regards,</p>
<p>Philip<br>
</p>
<blockquote type="cite" cite="mid:3d31300f-4f32-1aa2-8c50-c2a4f79a6ca2@amd.com">
<br>
Regards,
<br>
Felix
<br>
<br>
<br>
<blockquote type="cite">+ WRITE_ONCE(max_svm_range_pages,
max_pages);
<br>
+}
<br>
+
<br>
/**
<br>
* svm_range_add - add svm range and handle overlap
<br>
* @p: the range add to this process svms
<br>
diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_svm.h
b/drivers/gpu/drm/amd/amdkfd/kfd_svm.h
<br>
index eab7f6d3b13c..9156b041ef17 100644
<br>
--- a/drivers/gpu/drm/amd/amdkfd/kfd_svm.h
<br>
+++ b/drivers/gpu/drm/amd/amdkfd/kfd_svm.h
<br>
@@ -204,6 +204,9 @@ void
svm_range_list_lock_and_flush_work(struct svm_range_list *svms,
struct mm_s
<br>
#define KFD_IS_SVM_API_SUPPORTED(dev) ((dev)->pgmap.type !=
0)
<br>
void svm_range_bo_unref_async(struct svm_range_bo *svm_bo);
<br>
+
<br>
+void svm_range_set_max_pages(struct amdgpu_device *adev);
<br>
+
<br>
#else
<br>
struct kfd_process;
<br>
</blockquote>
</blockquote>
</body>
</html>