[PATCH v1 13/15] mm: handling Non-LRU pages returned by vm_normal_pages

Jason Gunthorpe jgg at nvidia.com
Wed May 11 18:50:12 UTC 2022


On Thu, May 05, 2022 at 04:34:36PM -0500, Alex Sierra wrote:

> diff --git a/mm/memory.c b/mm/memory.c
> index 76e3af9639d9..892c4cc54dc2 100644
> +++ b/mm/memory.c
> @@ -621,6 +621,13 @@ struct page *vm_normal_page(struct vm_area_struct *vma, unsigned long addr,
>  		if (is_zero_pfn(pfn))
>  			return NULL;
>  		if (pte_devmap(pte))
> +/*
> + * NOTE: Technically this should goto the check_pfn label. However,
> + * page->_mapcount is never incremented for device pages that are mmapped
> + * through the DAX mechanism, using the pmem driver with an ext4 filesystem.
> + * When these pages are unmapped, zap_pte_range() is called and
> + * vm_normal_page() returns a valid page with page_mapcount() == 0, before
> + * page_remove_rmap() is called.
> + */
>  			return NULL;

Where does this series cause device coherent pages to be returned?

Wasn't the plan to not set pte_devmap()?

Jason


More information about the amd-gfx mailing list