[PATCH v2 hmm 7/9] mm/hmm: do not unconditionally set pfns when returning EBUSY
Jason Gunthorpe
jgg at ziepe.ca
Tue Mar 24 01:14:55 UTC 2020
From: Jason Gunthorpe <jgg at mellanox.com>
In hmm_vma_handle_pte() and hmm_vma_walk_hugetlb_entry(), if a fault is
required then -EBUSY will be returned, but by that point the input flags
in pfns will already have been destroyed. Since the caller retries the
walk after -EBUSY, the fault request encoded in those flags is lost on
the retry.

For hmm_vma_handle_pte(), set HMM_PFN_NONE only on the success returns
that don't otherwise store to pfns.

For hmm_vma_walk_hugetlb_entry(), all exit paths already set pfns, so
remove the redundant store.
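As an illustration, the two shapes look roughly like the minimal sketch
below. This is not the kernel source: handle_entry_old(),
handle_entry_new(), need_fault(), FLAG_FAULT and VALUE_NONE are
hypothetical stand-ins for the hmm helpers and values.

#include <errno.h>
#include <stdint.h>

#define VALUE_NONE 0ULL			/* stand-in for range->values[HMM_PFN_NONE] */
#define FLAG_FAULT (1ULL << 0)		/* stand-in for a caller fault-request flag */

/* Stand-in for hmm_pte_need_fault(): did the caller request a fault? */
static int need_fault(uint64_t pfn_flags)
{
	return (pfn_flags & FLAG_FAULT) != 0;
}

/* Old shape: the input flags are clobbered before the fault decision,
 * so an -EBUSY return hands a destroyed *pfn back for the retry. */
static int handle_entry_old(uint64_t *pfn)
{
	uint64_t orig_pfn = *pfn;

	*pfn = VALUE_NONE;		/* input flags destroyed here */
	if (need_fault(orig_pfn))
		return -EBUSY;		/* the retry will now read VALUE_NONE */
	return 0;
}

/* New shape: VALUE_NONE is stored only on the success return, so *pfn
 * still carries the caller's flags whenever -EBUSY is returned. */
static int handle_entry_new(uint64_t *pfn)
{
	uint64_t orig_pfn = *pfn;

	if (need_fault(orig_pfn))
		return -EBUSY;		/* *pfn is left untouched */
	*pfn = VALUE_NONE;
	return 0;
}

The patch below applies the second shape to both walkers.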
Fixes: 2aee09d8c116 ("mm/hmm: change hmm_vma_fault() to allow write fault on page basis")
Signed-off-by: Jason Gunthorpe <jgg at mellanox.com>
---
mm/hmm.c | 7 ++++---
1 file changed, 4 insertions(+), 3 deletions(-)
diff --git a/mm/hmm.c b/mm/hmm.c
index e114110ad498a2..bf77b852f12d3a 100644
--- a/mm/hmm.c
+++ b/mm/hmm.c
@@ -249,11 +249,11 @@ static int hmm_vma_handle_pte(struct mm_walk *walk, unsigned long addr,
 	pte_t pte = *ptep;
 	uint64_t orig_pfn = *pfn;
 
-	*pfn = range->values[HMM_PFN_NONE];
 	if (pte_none(pte)) {
 		required_fault = hmm_pte_need_fault(hmm_vma_walk, orig_pfn, 0);
 		if (required_fault)
 			goto fault;
+		*pfn = range->values[HMM_PFN_NONE];
 		return 0;
 	}
 
@@ -274,8 +274,10 @@ static int hmm_vma_handle_pte(struct mm_walk *walk, unsigned long addr,
 	}
 
 	required_fault = hmm_pte_need_fault(hmm_vma_walk, orig_pfn, 0);
-	if (!required_fault)
+	if (!required_fault) {
+		*pfn = range->values[HMM_PFN_NONE];
 		return 0;
+	}
 
 	if (!non_swap_entry(entry))
 		goto fault;
@@ -493,7 +495,6 @@ static int hmm_vma_walk_hugetlb_entry(pte_t *pte, unsigned long hmask,
 
 	i = (start - range->start) >> PAGE_SHIFT;
 	orig_pfn = range->pfns[i];
-	range->pfns[i] = range->values[HMM_PFN_NONE];
 	cpu_flags = pte_to_hmm_pfn_flags(range, entry);
 	required_fault = hmm_pte_need_fault(hmm_vma_walk, orig_pfn, cpu_flags);
 	if (required_fault) {
--
2.25.2
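For background, the caller-side retry pattern that depends on the input
flags surviving looks roughly like the sketch below. This is hedged: the
names follow the v5.6-era API (hmm_range_fault() still took a flags
argument, and mmap_sem has since been renamed to mmap_lock), and the
setup of range is elided.

	long ret;

	do {
		range->notifier_seq = mmu_interval_read_begin(range->notifier);
		down_read(&mm->mmap_sem);
		ret = hmm_range_fault(range, 0);
		up_read(&mm->mmap_sem);
		/*
		 * On -EBUSY the whole walk is retried with the same
		 * range->pfns array, so the fault-request flags the
		 * caller stored there must survive the failed attempt.
		 */
	} while (ret == -EBUSY);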