[PATCH v3 5/8] mm: Device exclusive memory access
Ralph Campbell
rcampbell at nvidia.com
Mon Mar 1 22:55:48 UTC 2021
> From: Alistair Popple <apopple at nvidia.com>
> Sent: Thursday, February 25, 2021 11:18 PM
> To: linux-mm at kvack.org; nouveau at lists.freedesktop.org;
> bskeggs at redhat.com; akpm at linux-foundation.org
> Cc: linux-doc at vger.kernel.org; linux-kernel at vger.kernel.org; dri-
> devel at lists.freedesktop.org; John Hubbard <jhubbard at nvidia.com>; Ralph
> Campbell <rcampbell at nvidia.com>; jglisse at redhat.com; Jason Gunthorpe
> <jgg at nvidia.com>; hch at infradead.org; daniel at ffwll.ch; Alistair Popple
> <apopple at nvidia.com>
> Subject: [PATCH v3 5/8] mm: Device exclusive memory access
>
> Some devices require exclusive write access to shared virtual memory (SVM)
> ranges to perform atomic operations on that memory. This requires CPU page
> tables to be updated to deny access whilst atomic operations are occurring.
>
> In order to do this introduce a new swap entry type (SWP_DEVICE_EXCLUSIVE).
> When an SVM range needs to be marked for exclusive access by a device, all page
> table mappings for that range are replaced with device exclusive swap
> entries. This causes any CPU access to the page to result in a fault.
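
For context, the expected caller here is a device driver's atomic fault
handler. A minimal sketch of such a caller, assuming mmap_read_lock() must be
held across the call (as for other GUP-based interfaces), that a negative
return indicates failure, and that returned pages carry a reference the caller
must drop; the my_dev_* name and the single-page range are illustrative only,
not part of this patch:

	/*
	 * Hypothetical caller sketch: mark one page for exclusive device
	 * access before programming a device atomic operation on it.
	 */
	static int my_dev_atomic_fault(struct mm_struct *mm, unsigned long addr)
	{
		struct page *page;
		int ret;

		mmap_read_lock(mm);
		ret = make_device_exclusive_range(mm, addr, addr + PAGE_SIZE,
						  &page);
		mmap_read_unlock(mm);
		if (ret < 0)
			return ret;

		/* Program the device to perform its atomic on 'page' here. */

		put_page(page);
		return 0;
	}
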
>
> Faults are resolved by replacing the faulting entry with the original mapping. This
> results in MMU notifiers being called, which a driver uses to update access
> permissions such as revoking atomic access. After the notifiers have been called the
> device will no longer have exclusive access to the region.
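
The driver side of that revocation would typically live in an
mmu_interval_notifier callback. A rough sketch, assuming the standard interval
notifier API; the my_dev_* names and the my_dev_revoke_atomic() helper are
purely illustrative and the exact flow may differ per driver:

	/*
	 * Hypothetical driver callback: revoke the device's atomic access
	 * when the CPU fault (or any other invalidation) removes the
	 * exclusive entry.
	 */
	static bool my_dev_invalidate(struct mmu_interval_notifier *mni,
				      const struct mmu_notifier_range *range,
				      unsigned long cur_seq)
	{
		struct my_dev_range *dr = container_of(mni, struct my_dev_range,
						       notifier);

		mmu_interval_set_seq(mni, cur_seq);
		/* Tell the hardware it no longer has exclusive access. */
		my_dev_revoke_atomic(dr, range->start, range->end);
		return true;
	}

	static const struct mmu_interval_notifier_ops my_dev_mni_ops = {
		.invalidate = my_dev_invalidate,
	};
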
>
> Signed-off-by: Alistair Popple <apopple at nvidia.com>
> ---
> Documentation/vm/hmm.rst |  15 ++++
> include/linux/rmap.h     |   3 +
> include/linux/swap.h     |   4 +-
> include/linux/swapops.h  |  44 ++++++++++-
> mm/hmm.c                 |   5 ++
> mm/memory.c              | 108 +++++++++++++++++++++++++-
> mm/mprotect.c            |   8 ++
> mm/page_vma_mapped.c     |   9 ++-
> mm/rmap.c                | 163 +++++++++++++++++++++++++++++++++++++++
> 9 files changed, 352 insertions(+), 7 deletions(-)
...
> +int make_device_exclusive_range(struct mm_struct *mm, unsigned long start,
> +				unsigned long end, struct page **pages)
> +{
> +	long npages = (end - start) >> PAGE_SHIFT;
> +	long i;
Nit: you should use unsigned long for 'i' and 'npages' to match start/end.
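i.e., with the declarations above becoming:

	unsigned long npages = (end - start) >> PAGE_SHIFT;
	unsigned long i;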