[PATCH hmm 2/8] mm/hmm: don't free the cached pgmap while scanning

Christoph Hellwig <hch@lst.de>
Mon Mar 16 09:02:50 UTC 2020


On Wed, Mar 11, 2020 at 03:35:00PM -0300, Jason Gunthorpe wrote:
> @@ -694,6 +672,15 @@ long hmm_range_fault(struct hmm_range *range, unsigned int flags)
>  			return -EBUSY;
>  		ret = walk_page_range(mm, hmm_vma_walk.last, range->end,
>  				      &hmm_walk_ops, &hmm_vma_walk);
> +		/*
> +		 * A pgmap is kept cached in the hmm_vma_walk to avoid expensive
> +		 * searching in the probably common case that the pgmap is the
> +		 * same for the entire requested range.
> +		 */
> +		if (hmm_vma_walk.pgmap) {
> +			put_dev_pagemap(hmm_vma_walk.pgmap);
> +			hmm_vma_walk.pgmap = NULL;
> +		}
>  	} while (ret == -EBUSY);

In which case the pgmap should only be put once on return, not on
every loop iteration.
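
Roughly, that would mean hoisting the put out of the retry loop,
something like the sketch below.  This reuses the identifiers from the
quoted hunk (hmm_vma_walk, range, walk_page_range, hmm_walk_ops,
put_dev_pagemap); the goto-out structure and the reconstructed
range->valid check are mine for illustration, not the actual patch:

	do {
		/* If the range is no longer valid, make the caller retry. */
		if (!range->valid) {
			ret = -EBUSY;
			goto out;
		}
		ret = walk_page_range(mm, hmm_vma_walk.last, range->end,
				      &hmm_walk_ops, &hmm_vma_walk);
	} while (ret == -EBUSY);

out:
	/*
	 * The pgmap is cached in hmm_vma_walk to avoid repeated lookups
	 * when the whole range maps to one pgmap; drop the reference
	 * once, after the walk has finished, instead of per iteration.
	 */
	if (hmm_vma_walk.pgmap) {
		put_dev_pagemap(hmm_vma_walk.pgmap);
		hmm_vma_walk.pgmap = NULL;
	}
	return ret;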

I still think the right fix is to just delete all the unused and broken
pgmap handling code.  If we ever need it back it can be added in a
properly understood and tested way.
