[PATCH 01/10] mm: turn migrate_vma upside down
Jason Gunthorpe
jgg at mellanox.com
Fri Aug 16 17:11:07 UTC 2019
On Wed, Aug 14, 2019 at 09:59:19AM +0200, Christoph Hellwig wrote:
> There isn't any good reason to pass callbacks to migrate_vma. Instead
> we can just export the three steps done by this function to drivers and
> let them sequence the operation without callbacks. This removes a lot
> of boilerplate code as-is, and will allow the drivers to drastically
> improve code flow and error handling further on.
>
> Signed-off-by: Christoph Hellwig <hch at lst.de>
> Reviewed-by: Ralph Campbell <rcampbell at nvidia.com>
> ---
> Documentation/vm/hmm.rst | 55 +-----
> drivers/gpu/drm/nouveau/nouveau_dmem.c | 122 +++++++------
> include/linux/migrate.h | 118 ++----------
> mm/migrate.c | 244 +++++++++++--------------
> 4 files changed, 195 insertions(+), 344 deletions(-)
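
For anyone new to the series: the driver-side flow after this change
looks roughly like the sketch below. This is not from the patch;
alloc_device_page(), copy_to_device() and update_device_page_table()
are made-up stand-ins for driver-specific code, and the fixed 64-page
window is only for illustration.

	/*
	 * Rough sketch of the driver-side sequence the exported helpers
	 * enable.  Error handling is elided.
	 */
	static int sketch_migrate_to_device(struct vm_area_struct *vma,
					    unsigned long start,
					    unsigned long end)
	{
		unsigned long src_pfns[64] = { 0 };
		unsigned long dst_pfns[64] = { 0 };
		struct migrate_vma args = {
			.vma	= vma,
			.start	= start,
			.end	= end,	/* at most 64 pages here */
			.src	= src_pfns,
			.dst	= dst_pfns,
		};
		unsigned long i;
		int ret;

		ret = migrate_vma_setup(&args);
		if (ret)
			return ret;

		/* This loop replaces the old alloc_and_copy() callback. */
		for (i = 0; i < args.npages; i++) {
			struct page *dpage;

			if (!(args.src[i] & MIGRATE_PFN_MIGRATE))
				continue;
			dpage = alloc_device_page();	/* driver specific */
			lock_page(dpage);
			copy_to_device(dpage, args.src[i]); /* driver specific */
			args.dst[i] = migrate_pfn(page_to_pfn(dpage)) |
				      MIGRATE_PFN_LOCKED;
		}

		migrate_vma_pages(&args);
		/*
		 * Replaces the old finalize_and_map() callback: both source
		 * and destination pages are still locked here, so the device
		 * page table can be updated safely before finalize.
		 */
		update_device_page_table(vma, start, end); /* driver specific */
		migrate_vma_finalize(&args);
		return 0;
	}
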
The mechanical transformation looks OK; I squashed in the nits below.
Let me know if I got it wrong.
Reviewed-by: Jason Gunthorpe <jgg at mellanox.com>
diff --git a/Documentation/vm/hmm.rst b/Documentation/vm/hmm.rst
index 4f81c77059e305..0a5960beccf76d 100644
--- a/Documentation/vm/hmm.rst
+++ b/Documentation/vm/hmm.rst
@@ -339,9 +339,8 @@ Migration to and from device memory
===================================
Because the CPU cannot access device memory, migration must use the device DMA
-engine to perform copy from and to device memory. For this we need a new to
-use migrate_vma_setup(), migrate_vma_pages(), and migrate_vma_finalize()
-helpers.
+engine to perform copy from and to device memory. For this we need to use
+migrate_vma_setup(), migrate_vma_pages(), and migrate_vma_finalize() helpers.
Memory cgroup (memcg) and rss accounting
diff --git a/mm/migrate.c b/mm/migrate.c
index 993386cb53358d..0e78ebd720c0e4 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -2622,10 +2622,9 @@ static void migrate_vma_unmap(struct migrate_vma *migrate)
* successfully migrated, and which were not. Successfully migrated pages will
* have the MIGRATE_PFN_MIGRATE flag set for their src array entry.
*
- * It is safe to update device page table from within the finalize_and_map()
- * callback because both destination and source page are still locked, and the
- * mmap_sem is held in read mode (hence no one can unmap the range being
- * migrated).
+ * It is safe to update the device page table after migrate_vma_pages() as
+ * both destination and source pages are still locked, and the mmap_sem is
+ * held in read mode (hence no one can unmap the range being migrated).
*
* Once the caller is done cleaning up things and updating its page table (if it
* chose to do so, this is not an obligation) it finally calls
@@ -2657,10 +2656,11 @@ int migrate_vma_setup(struct migrate_vma *args)
args->npages = 0;
migrate_vma_collect(args);
- if (args->cpages)
- migrate_vma_prepare(args);
- if (args->cpages)
- migrate_vma_unmap(args);
+ if (!args->cpages)
+ return 0;
+
+ migrate_vma_prepare(args);
+ migrate_vma_unmap(args);
/*
* At this point pages are locked and unmapped, and thus they have
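
One small note on the migrate_vma_setup() restructuring above: it still
returns 0 when nothing was collected, so a caller that wants to skip the
copy loop entirely checks cpages itself, e.g. (illustrative):

	ret = migrate_vma_setup(&args);
	if (ret)
		return ret;
	if (!args.cpages)
		return 0;	/* nothing collected, nothing to migrate */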