[PATCH v7 3/8] mm/rmap: Split try_to_munlock from try_to_unmap

Jason Gunthorpe jgg at nvidia.com
Tue Mar 30 18:49:03 UTC 2021


On Fri, Mar 26, 2021 at 11:08:00AM +1100, Alistair Popple wrote:

> +static bool try_to_munlock_one(struct page *page, struct vm_area_struct *vma,
> +		     unsigned long address, void *arg)
> +{

Is this function name right?

> +	struct page_vma_mapped_walk pvmw = {
> +		.page = page,
> +		.vma = vma,
> +		.address = address,
> +	};
> +
> +	/* munlock has nothing to gain from examining un-locked vmas */
> +	if (!(vma->vm_flags & VM_LOCKED))
> +		return true;
> +
> +	while (page_vma_mapped_walk(&pvmw)) {
> +		/* PTE-mapped THP are never mlocked */
> +		if (!PageTransCompound(page)) {
> +			/*
> +			 * Holding pte lock, we do *not* need
> +			 * mmap_lock here
> +			 */
> +			mlock_vma_page(page);

Because the only action this function seems to take is to call
*mlock*_vma_page() (see the sketch below the quoted hunk).

> +		}
> +		page_vma_mapped_walk_done(&pvmw);
> +
> +		/* found a mlocked page, no point continuing munlock check */
> +		return false;
> +	}
> +
> +	return true;
> +}
> +
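
For reference, mlock_vma_page() goes the other way - it *sets*
PG_mlocked. Abbreviated from mm/mlock.c around this series (a sketch,
not the exact source):

/* mm/mlock.c (abbreviated): this *marks* the page mlocked */
void mlock_vma_page(struct page *page)
{
	/* Serialize with page migration */
	BUG_ON(!PageLocked(page));

	if (!TestSetPageMlocked(page)) {
		mod_zone_page_state(page_zone(page), NR_MLOCK,
				    thp_nr_pages(page));
		count_vm_event(UNEVICTABLE_PGMLOCKED);
		if (!isolate_lru_page(page))
			putback_lru_page(page);
	}
}

So a helper whose only effect is to mlock the page is a strange thing
to call *munlock*_one.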
>  /**
>   * try_to_munlock - try to munlock a page
>   * @page: the page to be munlocked
> @@ -1796,8 +1821,7 @@ bool try_to_unmap(struct page *page, enum ttu_flags flags)
>  void try_to_munlock(struct page *page)
>  {

But this is also called try_to_munlock ??

/**
 * try_to_munlock - try to munlock a page
 * @page: the page to be munlocked
 *
 * Called from munlock code.  Checks all of the VMAs mapping the page
 * to make sure nobody else has this page mlocked. The page will be
 * returned with PG_mlocked cleared if no other vmas have it mlocked.
 */

So what clears PG_mlocked on this call path?
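
As far as I can tell nothing on this path does - the clearing already
happened in the caller, before the rmap walk even started. Roughly,
abbreviated from mm/mlock.c (a sketch, not the exact source):

/* mm/mlock.c (abbreviated sketch) */
unsigned int munlock_vma_page(struct page *page)
{
	/* PG_mlocked is cleared here, before the rmap walk */
	if (!TestClearPageMlocked(page))
		return 0;

	/* ... page is isolated from the LRU, then
	 * __munlock_isolated_page() calls try_to_munlock(page)
	 * to re-check the remaining vmas ...
	 */
}

So try_to_munlock()'s real job seems to be to *re-mlock* the page if
some other VM_LOCKED vma still maps it, which makes both the name and
the kerneldoc above misleading.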

Something needs attention here...
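
If that reading is right, one way out is to simply rename the pair so
the names match what the code does (hypothetical names, illustrative
only - not a claim about what this series should use):

/* hypothetical renames - illustrative only */
static bool page_mlock_one(struct page *page, struct vm_area_struct *vma,
			   unsigned long address, void *arg);
void page_mlock(struct page *page);	/* was: try_to_munlock() */

Plus a kerneldoc that says the page is mlocked again if any other vma
still has it locked, rather than "returned with PG_mlocked cleared".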

Jason

