[PATCH v3 5/8] mm: Device exclusive memory access
Jason Gunthorpe
jgg at nvidia.com
Tue Mar 2 00:05:59 UTC 2021
On Fri, Feb 26, 2021 at 06:18:29PM +1100, Alistair Popple wrote:
> +/**
> + * make_device_exclusive_range() - Mark a range for exclusive use by a device
> + * @mm: mm_struct of associated target process
> + * @start: start of the region to mark for exclusive device access
> + * @end: end address of region
> + * @pages: returns the pages which were successfully marked for exclusive access
> + *
> + * Returns: number of pages successfully marked for exclusive access
> + *
> + * This function finds the ptes mapping page(s) to the given address range and
> + * replaces them with special swap entries preventing userspace CPU access. On
> + * fault these entries are replaced with the original mapping after calling MMU
> + * notifiers.
> + */
> +int make_device_exclusive_range(struct mm_struct *mm, unsigned long start,
> + unsigned long end, struct page **pages)
> +{
> + long npages = (end - start) >> PAGE_SHIFT;
> + long i;
> +
> + npages = get_user_pages_remote(mm, start, npages,
> + FOLL_GET | FOLL_WRITE | FOLL_SPLIT_PMD,
> + pages, NULL, NULL);
> + for (i = 0; i < npages; i++) {
> + if (!trylock_page(pages[i])) {
> + put_page(pages[i]);
> + pages[i] = NULL;
> + continue;
> + }
> +
> + if (!try_to_protect(pages[i])) {
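(For context, my reading of the kerneldoc is that a driver calls this on a
user VA range before handing the range over to its device; a hypothetical
caller - not part of the patch - might look like the sketch below. The mmap
lock follows from get_user_pages_remote(), and I'm assuming the returned
pages come back locked with a reference the caller must drop, since the
quoted hunk ends before that is visible.)

static int grab_range_for_device(struct mm_struct *mm, unsigned long start)
{
	struct page *pages[16];
	int npages, i;

	mmap_read_lock(mm);
	npages = make_device_exclusive_range(mm, start,
					     start + 16 * PAGE_SIZE, pages);
	mmap_read_unlock(mm);

	/* ... mirror the non-NULL entries of pages[] into the device ... */

	for (i = 0; i < npages; i++) {
		if (!pages[i])
			continue;
		unlock_page(pages[i]);
		put_page(pages[i]);
	}
	return 0;
}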
Isn't this racy? get_user_pages() returns the PTEs as they were at an
instant in time; they could already have been changed to something else
by the time try_to_protect() runs.

I would think you'd want to switch to the swap entry atomically under
the PTLs?
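Something along these lines is what I have in mind - only a rough,
untested sketch, assuming the per-PTE walk already has the pmd in hand
and that make_device_exclusive_entry() stands in for whatever the new
swap entry constructor in this series ends up being called; everything
else is existing kernel API:

static bool make_exclusive_pte_locked(struct vm_area_struct *vma,
				      pmd_t *pmd, unsigned long addr,
				      struct page *page)
{
	struct mm_struct *mm = vma->vm_mm;
	spinlock_t *ptl;
	pte_t *ptep, pte;
	swp_entry_t entry;

	ptep = pte_offset_map_lock(mm, pmd, addr, &ptl);
	pte = *ptep;

	/*
	 * Re-validate under the PTL: the mapping GUP returned may have
	 * been zapped, COWed or migrated in the meantime.
	 */
	if (!pte_present(pte) || vm_normal_page(vma, addr, pte) != page) {
		pte_unmap_unlock(ptep, ptl);
		return false;
	}

	/* MMU notifier invalidation around this is elided for brevity. */
	flush_cache_page(vma, addr, pte_pfn(pte));
	pte = ptep_clear_flush(vma, addr, ptep);

	/*
	 * Install the exclusive entry only now that we know the PTE
	 * still points at the expected page.
	 */
	entry = make_device_exclusive_entry(page);
	set_pte_at(mm, addr, ptep, swp_entry_to_pte(entry));

	pte_unmap_unlock(ptep, ptl);
	return true;
}

Carrying dirty/soft-dirty bits over from the old PTE is also skipped
here, but the point is that the check and the replacement happen under
the same lock.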
Jason