[PATCH] drm/xe: Drop xe_mark_range_accessed in HMM layer

Thomas Hellström thomas.hellstrom at linux.intel.com
Wed Oct 9 07:36:49 UTC 2024


On Mon, 2024-09-09 at 11:21 -0700, Matthew Brost wrote:
> Not needed as hmm_range_fault does this, and also because pages
> returned from hmm_range_fault could move while the mmap lock is
> dropped and the notifier lock is not held. Page corruption showed up
> in similar code paths in SVM work.
> 
> Fixes: 81e058a3e7fd ("drm/xe: Introduce helper to populate userptr")
> Suggested-by: Simona Vetter <simona.vetter at ffwll.ch>
> Signed-off-by: Matthew Brost <matthew.brost at intel.com>

I wonder whether you can add something like "Write-enabled
hmm_range_fault() always ensures CPU PTEs are marked dirty for the
pages."
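
For reference, a write-enabled call roughly follows the pattern
documented for hmm_range_fault(); this is only a sketch with
illustrative variable names, not the xe code:

	struct hmm_range range = {
		.notifier      = &notifier,
		.notifier_seq  = mmu_interval_read_begin(&notifier),
		.start         = start,
		.end           = end,
		.hmm_pfns      = pfns,
		.default_flags = HMM_PFN_REQ_FAULT |
				 (write ? HMM_PFN_REQ_WRITE : 0),
	};

	mmap_read_lock(mm);
	ret = hmm_range_fault(&range);	/* write fault leaves the CPU PTE dirty */
	mmap_read_unlock(mm);

With HMM_PFN_REQ_WRITE set, the core mm resolves each page with a write
fault, so the pages are already marked dirty and accessed and the
driver-side set_page_dirty_lock()/mark_page_accessed() calls are
redundant.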

Reviewed-by: Thomas Hellström <thomas.hellstrom at linux.intel.com>


> ---
>  drivers/gpu/drm/xe/xe_hmm.c | 25 -------------------------
>  1 file changed, 25 deletions(-)
> 
> diff --git a/drivers/gpu/drm/xe/xe_hmm.c b/drivers/gpu/drm/xe/xe_hmm.c
> index 2c32dc46f7d4..dde80a66c9aa 100644
> --- a/drivers/gpu/drm/xe/xe_hmm.c
> +++ b/drivers/gpu/drm/xe/xe_hmm.c
> @@ -19,30 +19,6 @@ static u64 xe_npages_in_range(unsigned long start, unsigned long end)
>  	return (end - start) >> PAGE_SHIFT;
>  }
>  
> -/*
> - * xe_mark_range_accessed() - mark a range is accessed, so core mm
> - * have such information for memory eviction or write back to
> - * hard disk
> - *
> - * @range: the range to mark
> - * @write: if write to this range, we mark pages in this range
> - * as dirty
> - */
> -static void xe_mark_range_accessed(struct hmm_range *range, bool write)
> -{
> -	struct page *page;
> -	u64 i, npages;
> -
> -	npages = xe_npages_in_range(range->start, range->end);
> -	for (i = 0; i < npages; i++) {
> -		page = hmm_pfn_to_page(range->hmm_pfns[i]);
> -		if (write)
> -			set_page_dirty_lock(page);
> -
> -		mark_page_accessed(page);
> -	}
> -}
> -
>  /*
>   * xe_build_sg() - build a scatter gather table for all the physical pages/pfn
>   * in a hmm_range. dma-map pages if necessary. dma-address is save in sg table
> @@ -242,7 +218,6 @@ int xe_hmm_userptr_populate_range(struct xe_userptr_vma *uvma,
>  	if (ret)
>  		goto free_pfns;
>  
> -	xe_mark_range_accessed(&hmm_range, write);
>  	userptr->sg = &userptr->sgt;
>  	userptr->notifier_seq = hmm_range.notifier_seq;
>  
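
As an aside on the race mentioned in the commit message: the returned
hmm_pfns are only stable while the notifier sequence is still valid, so
the documented pattern revalidates under the driver's notifier lock
before the pages are consumed. A minimal sketch, with an illustrative
lock name:

	mutex_lock(&notifier_lock);
	if (mmu_interval_read_retry(range.notifier, range.notifier_seq)) {
		/* Invalidation collided; drop the lock and fault again. */
		mutex_unlock(&notifier_lock);
		goto again;
	}
	/* Safe to consume range.hmm_pfns / build the sg table here. */
	mutex_unlock(&notifier_lock);

Walking the pages outside that window, as the removed
xe_mark_range_accessed() did after the mmap lock was dropped, can race
with invalidation or migration, which matches the corruption seen in
the SVM work.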


