[PATCH 1/2] mm/hmm.c: support automatic NUMA balancing

Kuehling, Felix Felix.Kuehling at amd.com
Tue May 28 22:44:35 UTC 2019


From: Philip Yang <Philip.Yang at amd.com>

While a page is being migrated by automatic NUMA balancing, HMM fails to
detect this condition and still returns the old page.  The application
then uses the newly migrated page, but the driver passes the old page's
physical address to the GPU, which crashes the application later.

Use pte_protnone(pte) to detect this condition and return no valid flags,
so that hmm_vma_do_fault() will allocate a new page.

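For reference, the extra pte_protnone() test is needed because on
architectures such as x86 a PROT_NONE'd NUMA-hinting PTE still counts as
present, so the existing !pte_present() check does not catch a page that
is in the middle of NUMA migration.  The user-space sketch below only
illustrates that idea; the pte_t layout, bit names and helpers are
simplified stand-ins, not the real <asm/pgtable.h> or HMM interfaces.

/* Illustrative sketch only -- not kernel code. */
#include <assert.h>
#include <stdint.h>

#define PTE_PRESENT	(1u << 0)
#define PTE_WRITE	(1u << 1)
#define PTE_PROTNONE	(1u << 2)	/* PROT_NONE'd for a NUMA hinting fault */

#define HMM_PFN_VALID	(1u << 0)
#define HMM_PFN_WRITE	(1u << 1)

typedef struct { uint32_t flags; } pte_t;

static int pte_none(pte_t pte)     { return pte.flags == 0; }
static int pte_protnone(pte_t pte) { return pte.flags & PTE_PROTNONE; }

/* Like x86: a PROT_NONE'd NUMA-hinting PTE still reads as "present". */
static int pte_present(pte_t pte)
{
	return pte.flags & (PTE_PRESENT | PTE_PROTNONE);
}

/*
 * Returning 0 tells the caller that the page cannot be handed to the
 * device right now, so it falls back to the fault path instead of
 * passing on a stale physical address.
 */
static uint32_t sketch_pte_to_pfn_flags(pte_t pte)
{
	if (pte_none(pte) || !pte_present(pte) || pte_protnone(pte))
		return 0;
	return pte.flags & PTE_WRITE ? HMM_PFN_VALID | HMM_PFN_WRITE
				     : HMM_PFN_VALID;
}

int main(void)
{
	pte_t migrating = { PTE_PROTNONE };		/* NUMA balancing in flight */
	pte_t mapped_rw = { PTE_PRESENT | PTE_WRITE };

	assert(sketch_pte_to_pfn_flags(migrating) == 0);
	assert(sketch_pte_to_pfn_flags(mapped_rw) ==
	       (HMM_PFN_VALID | HMM_PFN_WRITE));
	return 0;
}
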
Link: http://lkml.kernel.org/r/20190510195258.9930-2-Felix.Kuehling@amd.com
Signed-off-by: Philip Yang <Philip.Yang at amd.com>
Signed-off-by: Felix Kuehling <Felix.Kuehling at amd.com>
Reviewed-by: Jérôme Glisse <jglisse at redhat.com>
Cc: Alex Deucher <alex.deucher at amd.com>
Cc: Dave Airlie <airlied at gmail.com>
Signed-off-by: Andrew Morton <akpm at linux-foundation.org>
---
 mm/hmm.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mm/hmm.c b/mm/hmm.c
index 0db8491090b8..599d8e82db67 100644
--- a/mm/hmm.c
+++ b/mm/hmm.c
@@ -559,7 +559,7 @@ static int hmm_vma_handle_pmd(struct mm_walk *walk,
 
 static inline uint64_t pte_to_hmm_pfn_flags(struct hmm_range *range, pte_t pte)
 {
-	if (pte_none(pte) || !pte_present(pte))
+	if (pte_none(pte) || !pte_present(pte) || pte_protnone(pte))
 		return 0;
 	return pte_write(pte) ? range->flags[HMM_PFN_VALID] |
 				range->flags[HMM_PFN_WRITE] :
-- 
2.17.1


