[PATCH v2 2/3] mm/gup.c: Migrate device coherent pages when pinning instead of failing

John Hubbard jhubbard at nvidia.com
Sat Feb 12 02:10:29 UTC 2022


On 2/6/22 20:26, Alistair Popple wrote:
> Currently any attempts to pin a device coherent page will fail. This is
> because device coherent pages need to be managed by a device driver, and
> pinning them would prevent a driver from migrating them off the device.
> 
> However this is no reason to fail pinning of these pages. These are
> coherent and accessible from the CPU so can be migrated just like
> pinning ZONE_MOVABLE pages. So instead of failing all attempts to pin
> them first try migrating them out of ZONE_DEVICE.
> 

Hi Alistair and all,

Here's a possible issue (below) that I really should have spotted the
first time around; sorry for the late-breaking review. Then again, it
may just be a misunderstanding on my part.


> Signed-off-by: Alistair Popple <apopple at nvidia.com>
> Acked-by: Felix Kuehling <Felix.Kuehling at amd.com>
> ---
> 
> Changes for v2:
> 
>   - Added Felix's Acked-by
>   - Fixed missing check for dpage == NULL
> 
>   mm/gup.c | 105 ++++++++++++++++++++++++++++++++++++++++++++++++++------
>   1 file changed, 95 insertions(+), 10 deletions(-)
> 
> diff --git a/mm/gup.c b/mm/gup.c
> index 56d9577..5e826db 100644
> --- a/mm/gup.c
> +++ b/mm/gup.c
> @@ -1861,6 +1861,60 @@ struct page *get_dump_page(unsigned long addr)
>   
>   #ifdef CONFIG_MIGRATION
>   /*
> + * Migrates a device coherent page back to normal memory. Caller should have a
> + * reference on page which will be copied to the new page if migration is
> + * successful or dropped on failure.
> + */
> +static struct page *migrate_device_page(struct page *page,
> +					unsigned int gup_flags)
> +{
> +	struct page *dpage;
> +	struct migrate_vma args;
> +	unsigned long src_pfn, dst_pfn = 0;
> +
> +	lock_page(page);
> +	src_pfn = migrate_pfn(page_to_pfn(page)) | MIGRATE_PFN_MIGRATE;
> +	args.src = &src_pfn;
> +	args.dst = &dst_pfn;
> +	args.cpages = 1;
> +	args.npages = 1;
> +	args.vma = NULL;
> +	migrate_vma_setup(&args);
> +	if (!(src_pfn & MIGRATE_PFN_MIGRATE))
> +		return NULL;
> +
> +	dpage = alloc_pages(GFP_USER | __GFP_NOWARN, 0);
> +
> +	/*
> +	 * get/pin the new page now so we don't have to retry gup after
> +	 * migrating. We already have a reference so this should never fail.
> +	 */
> +	if (dpage && WARN_ON_ONCE(!try_grab_page(dpage, gup_flags))) {
> +		__free_pages(dpage, 0);
> +		dpage = NULL;
> +	}
> +
> +	if (dpage) {
> +		lock_page(dpage);
> +		dst_pfn = migrate_pfn(page_to_pfn(dpage));
> +	}
> +
> +	migrate_vma_pages(&args);
> +	if (src_pfn & MIGRATE_PFN_MIGRATE)
> +		copy_highpage(dpage, page);
> +	migrate_vma_finalize(&args);
> +	if (dpage && !(src_pfn & MIGRATE_PFN_MIGRATE)) {
> +		if (gup_flags & FOLL_PIN)
> +			unpin_user_page(dpage);
> +		else
> +			put_page(dpage);
> +		dpage = NULL;
> +	}
> +
> +	return dpage;
> +}
> +
> +/*
>    * Check whether all pages are pinnable, if so return number of pages.  If some
>    * pages are not pinnable, migrate them, and unpin all pages. Return zero if
>    * pages were migrated, or if some pages were not successfully isolated.
> @@ -1888,15 +1942,40 @@ static long check_and_migrate_movable_pages(unsigned long nr_pages,
>   			continue;
>   		prev_head = head;
>   		/*
> -		 * If we get a movable page, since we are going to be pinning
> -		 * these entries, try to move them out if possible.
> +		 * Device coherent pages are managed by a driver and should not
> +		 * be pinned indefinitely as it prevents the driver moving the
> +		 * page. So when trying to pin with FOLL_LONGTERM instead try
> +		 * migrating page out of device memory.
>   		 */
>   		if (is_dev_private_or_coherent_page(head)) {
> +			/*
> +			 * device private pages will get faulted in during gup
> +			 * so it shouldn't be possible to see one here.
> +			 */
>   			WARN_ON_ONCE(is_device_private_page(head));
> -			ret = -EFAULT;
> -			goto unpin_pages;
> +			WARN_ON_ONCE(PageCompound(head));
> +
> +			/*
> +			 * migration will fail if the page is pinned, so convert
> +			 * the pin on the source page to a normal reference.
> +			 */
> +			if (gup_flags & FOLL_PIN) {
> +				get_page(head);
> +				unpin_user_page(head);

OK...but now gup_flags can no longer be used as a guide for how to
release these pages, right? In other words, up until this point,
FOLL_PIN meant "call unpin_user_page() in order to release". However,
now this page must be released via put_page().

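To spell out the pairing rule I'm relying on (just a sketch, not code
from this patch):

	get_page(head);		/* take a plain page reference */
	unpin_user_page(head);	/* drop the FOLL_PIN reference */
	/*
	 * From this point on, releasing head requires put_page(), even
	 * though gup_flags still has FOLL_PIN set:
	 */
	put_page(head);		/* NOT unpin_user_page(head) */
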
See below...

> +			}
> +
> +			pages[i] = migrate_device_page(head, gup_flags);
> +			if (!pages[i]) {
> +				ret = -EBUSY;
> +				break;
> +			}
> +			continue;
>   		}
>   
> +		/*
> +		 * If we get a movable page, since we are going to be pinning
> +		 * these entries, try to move them out if possible.
> +		 */
>   		if (!is_pinnable_page(head)) {
>   			if (PageHuge(head)) {
>   				if (!isolate_huge_page(head, &movable_page_list))
> @@ -1924,16 +2003,22 @@ static long check_and_migrate_movable_pages(unsigned long nr_pages,
>   	 * If list is empty, and no isolation errors, means that all pages are
>   	 * in the correct zone.
>   	 */
> -	if (list_empty(&movable_page_list) && !isolation_error_count)
> +	if (!ret && list_empty(&movable_page_list) && !isolation_error_count)
>   		return nr_pages;
>   
> -unpin_pages:
> -	if (gup_flags & FOLL_PIN) {
> -		unpin_user_pages(pages, nr_pages);
> -	} else {
> -		for (i = 0; i < nr_pages; i++)
> +	for (i = 0; i < nr_pages; i++)
> +		if (!pages[i])
> +			continue;
> +		else if (gup_flags & FOLL_PIN)
> +			unpin_user_page(pages[i]);

...and here, for example, we are still trusting gup_flags to decide how
to release the pages.

This reminds me: of the many things one can monitor, the FOLL_PIN counts
in /proc/vmstat are especially helpful whenever making changes to code
in this area:

	nr_foll_pin_acquired
	nr_foll_pin_released

...and those should normally be equal to each other when "at rest".
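
For example, a quick check before and after a test run looks something
like this (the numbers here are purely illustrative):

	$ grep foll_pin /proc/vmstat
	nr_foll_pin_acquired 1500000
	nr_foll_pin_released 1500000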


thanks,
-- 
John Hubbard
NVIDIA

> +		else
>   			put_page(pages[i]);
> +
> +	if (ret && !list_empty(&movable_page_list)) {
> +		putback_movable_pages(&movable_page_list);
> +		return ret;
>   	}
> +
>   	if (!list_empty(&movable_page_list)) {
>   		ret = migrate_pages(&movable_page_list, alloc_migration_target,
>   				    NULL, (unsigned long)&mtc, MIGRATE_SYNC,
