[PATCH 4/4] mm: check the device private page owner in hmm_range_fault
Jason Gunthorpe
jgg at ziepe.ca
Fri Mar 20 13:41:09 UTC 2020
On Mon, Mar 16, 2020 at 08:32:16PM +0100, Christoph Hellwig wrote:
> diff --git a/mm/hmm.c b/mm/hmm.c
> index cfad65f6a67b..b75b3750e03d 100644
> --- a/mm/hmm.c
> +++ b/mm/hmm.c
> @@ -216,6 +216,14 @@ int hmm_vma_handle_pmd(struct mm_walk *walk, unsigned long addr,
>  		unsigned long end, uint64_t *pfns, pmd_t pmd);
>  #endif /* CONFIG_TRANSPARENT_HUGEPAGE */
> 
> +static inline bool hmm_is_device_private_entry(struct hmm_range *range,
> +		swp_entry_t entry)
> +{
> +	return is_device_private_entry(entry) &&
> +		device_private_entry_to_page(entry)->pgmap->owner ==
> +		range->dev_private_owner;
> +}
Thinking about this some more, does the locking work out here?
hmm_range_fault() runs with mmap_sem in read, and does not lock any of
the page table levels.
So it relies on it being safe to read stale pte data, and here we
introduce, for the first time, a page pointer dereference and a pgmap
dereference without any locking or refcounting.
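
To make the concern concrete, here is a rough sketch of the calling
context as I understand it (simplified and paraphrased from
hmm_vma_handle_pte(), not the exact upstream code):

	/* simplified sketch - only mmap_sem (read) is held, no pte lock */
	pte_t pte = *ptep;

	if (!pte_present(pte)) {
		swp_entry_t entry = pte_to_swp_entry(pte);

		if (hmm_is_device_private_entry(range, entry)) {
			/*
			 * device_private_entry_to_page() and ->pgmap->owner
			 * are dereferenced with no page or pgmap reference
			 * held, on a pte value that may already be stale.
			 */
			...
		}
	}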
The old code's get_dev_pagemap() worked on the PFN and obtained a
refcount, so it provided that safety.
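
For comparison, a minimal sketch of what a refcounted check could look
like (illustrative only, not a suggestion for the final patch;
hmm_is_device_private_entry_pinned() is a made-up name):

static inline bool hmm_is_device_private_entry_pinned(struct hmm_range *range,
		swp_entry_t entry)
{
	struct dev_pagemap *pgmap;
	bool match;

	if (!is_device_private_entry(entry))
		return false;

	/* Work on the PFN and pin the pgmap before dereferencing it */
	pgmap = get_dev_pagemap(device_private_entry_to_pfn(entry), NULL);
	if (!pgmap)
		return false;

	match = pgmap->owner == range->dev_private_owner;
	put_dev_pagemap(pgmap);
	return match;
}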
Is there some tricky reason this is safe, e.g. that a DEVICE_PRIVATE
page cannot be removed from the VMA without holding mmap_sem in write,
or something like that?
Jason