[PATCH hmm 5/8] mm/hmm: add missing call to hmm_range_need_fault() before returning EFAULT

Ralph Campbell <rcampbell at nvidia.com>
Thu Mar 12 01:34:24 UTC 2020


On 3/11/20 11:35 AM, Jason Gunthorpe wrote:
> From: Jason Gunthorpe <jgg at mellanox.com>
> 
> All return paths that return -EFAULT must first call
> hmm_range_need_fault() to determine whether the user requires this
> page to be valid.
> 
> If the page cannot be made valid when the user later requires it,
> due to the vma flags in this case, then the result should be
> HMM_PFN_ERROR.
> 
> Fixes: a3e0d41c2b1f ("mm/hmm: improve driver API to work and wait over a range")
> Signed-off-by: Jason Gunthorpe <jgg at mellanox.com>

Reviewed-by: Ralph Campbell <rcampbell at nvidia.com>
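
For reference, a condensed sketch of how hmm_vma_walk_test() reads with
this patch applied (comments are mine and abbreviated; the
hmm_range_need_fault() argument list follows the existing call in
mm/hmm.c):

static int hmm_vma_walk_test(unsigned long start, unsigned long end,
                             struct mm_walk *walk)
{
        struct hmm_vma_walk *hmm_vma_walk = walk->private;
        struct hmm_range *range = hmm_vma_walk->range;
        struct vm_area_struct *vma = walk->vma;

        /* vmas without struct page backing, or unreadable, in one test */
        if ((vma->vm_flags & (VM_IO | VM_PFNMAP | VM_MIXEDMAP)) ||
            !(vma->vm_flags & VM_READ)) {
                bool fault, write_fault;

                /* Does the caller actually need any page in this range? */
                hmm_range_need_fault(hmm_vma_walk, range->pfns +
                                        ((start - range->start) >> PAGE_SHIFT),
                                        (end - start) >> PAGE_SHIFT,
                                        0, &fault, &write_fault);
                if (fault || write_fault)
                        return -EFAULT;

                /* Not needed now and never valid later: mark in error. */
                hmm_pfns_fill(start, end, range, HMM_PFN_ERROR);
                hmm_vma_walk->last = end;

                /* Skip this vma and continue processing the next vma. */
                return 1;
        }

        return 0;
}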

> ---
>   mm/hmm.c | 19 ++++++++-----------
>   1 file changed, 8 insertions(+), 11 deletions(-)
> 
> diff --git a/mm/hmm.c b/mm/hmm.c
> index 5f5ccf13dd1e85..e10cd0adba7b37 100644
> --- a/mm/hmm.c
> +++ b/mm/hmm.c
> @@ -582,18 +582,15 @@ static int hmm_vma_walk_test(unsigned long start, unsigned long end,
>   	struct vm_area_struct *vma = walk->vma;
>   
>   	/*
> -	 * Skip vma ranges that don't have struct page backing them or
> -	 * map I/O devices directly.
> -	 */
> -	if (vma->vm_flags & (VM_IO | VM_PFNMAP | VM_MIXEDMAP))
> -		return -EFAULT;
> -
> -	/*
> +	 * Skip vma ranges that don't have struct page backing them or map I/O
> +	 * devices directly.
> +	 *
>   	 * If the vma does not allow read access, then assume that it does not
> -	 * allow write access either. HMM does not support architectures
> -	 * that allow write without read.
> +	 * allow write access either. HMM does not support architectures that
> +	 * allow write without read.
>   	 */
> -	if (!(vma->vm_flags & VM_READ)) {
> +	if ((vma->vm_flags & (VM_IO | VM_PFNMAP | VM_MIXEDMAP)) ||
> +	    !(vma->vm_flags & VM_READ)) {
>   		bool fault, write_fault;
>   
>   		/*
> @@ -607,7 +604,7 @@ static int hmm_vma_walk_test(unsigned long start, unsigned long end,
>   		if (fault || write_fault)
>   			return -EFAULT;
>   
> -		hmm_pfns_fill(start, end, range, HMM_PFN_NONE);
> +		hmm_pfns_fill(start, end, range, HMM_PFN_ERROR);
>   		hmm_vma_walk->last = end;
>   
>   		/* Skip this vma and continue processing the next vma. */
> 
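
One caller-visible effect worth noting: when no fault was requested,
such a vma no longer fails the whole hmm_range_fault() call; the
affected entries come back as the error value instead. A minimal,
illustrative driver-side check (using the pfns/values arrays of the
hmm_range API from this era; the loop body is hypothetical):

        unsigned long i, npages = (range->end - range->start) >> PAGE_SHIFT;

        for (i = 0; i < npages; i++) {
                if (range->pfns[i] == range->values[HMM_PFN_ERROR]) {
                        /* This page can never be made valid; skip it. */
                        continue;
                }
                /* ... otherwise translate range->pfns[i] for the device ... */
        }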

