[PATCH hmm v2 5/5] mm/hmm: remove the customizable pfn format from hmm_range_fault
Ralph Campbell
rcampbell at nvidia.com
Sat May 2 00:53:26 UTC 2020
On 5/1/20 11:20 AM, Jason Gunthorpe wrote:
> From: Jason Gunthorpe <jgg at mellanox.com>
>
> Presumably the intent here was that hmm_range_fault() could put the data
> into some HW specific format and thus avoid some work. However, nothing
> actually does that, and it isn't clear how anything actually could do that
> as hmm_range_fault() provides CPU addresses which must be DMA mapped.
>
> Perhaps there is some special HW that does not need DMA mapping, but we
> don't have any examples of this, and the theoretical performance win of
> avoiding an extra scan over the pfns array doesn't seem worth the
> complexity. Plus pfns needs to be scanned anyhow to sort out any
> DEVICE_PRIVATE pages.
>
> This version replaces the uint64_t with an unsigned long containing a pfn
> and fixed flags. On input, the flags are filled with the HMM_PFN_REQ_*
> values; on successful output, they are replaced with HMM_PFN_* values
> describing the state of the pages.
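For anyone reading along without the full diff: the fixed encoding just
steals the top bits of the unsigned long for the flags, so the low bits
stay a plain pfn. A minimal sketch of the idea (the flag names match the
HMM_PFN_* values described above; the exact bit positions here are my
paraphrase, the include/linux/hmm.h hunk is authoritative):

	enum hmm_pfn_flags {
		/* Output flags, set by hmm_range_fault() */
		HMM_PFN_VALID = 1UL << (BITS_PER_LONG - 1),
		HMM_PFN_WRITE = 1UL << (BITS_PER_LONG - 2),
		HMM_PFN_ERROR = 1UL << (BITS_PER_LONG - 3),

		/* Input flags, set by the driver before faulting */
		HMM_PFN_REQ_FAULT = HMM_PFN_VALID,
		HMM_PFN_REQ_WRITE = HMM_PFN_WRITE,

		HMM_PFN_FLAGS = HMM_PFN_VALID | HMM_PFN_WRITE | HMM_PFN_ERROR,
	};

	/* Masking the flags off recovers the struct page */
	static inline struct page *hmm_pfn_to_page(unsigned long hmm_pfn)
	{
		return pfn_to_page(hmm_pfn & ~HMM_PFN_FLAGS);
	}
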
>
> amdgpu is simple to convert; it doesn't use snapshot and doesn't use
> per-page flags.
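That matches my reading. For reference, a minimal sketch of the pattern a
simple driver ends up with after this series (the mmu_interval_notifier
retry loop and locking are omitted, and everything outside the hmm_* and
HMM_* names is hypothetical):

	#include <linux/hmm.h>
	#include <linux/memremap.h>

	static int demo_fault_range(struct hmm_range *range)
	{
		unsigned long i, npages;
		int ret;

		npages = (range->end - range->start) >> PAGE_SHIFT;

		/* Input: request that every page be faulted in writable */
		for (i = 0; i < npages; ++i)
			range->hmm_pfns[i] = HMM_PFN_REQ_FAULT |
					     HMM_PFN_REQ_WRITE;

		ret = hmm_range_fault(range);	/* 0 or -errno */
		if (ret)
			return ret;

		/* Output: the same array now holds pfns plus HMM_PFN_* state */
		for (i = 0; i < npages; ++i) {
			struct page *page =
				hmm_pfn_to_page(range->hmm_pfns[i]);

			/* the DEVICE_PRIVATE sorting mentioned above */
			if (is_device_private_page(page))
				continue;	/* device-specific handling */

			/* normal CPU page, would be dma_map_page()'d here */
		}
		return 0;
	}
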
>
> nouveau uses only 16 hmm_pte entries at most (ie fits in a few cache
> lines), and it sweeps over its pfns array a couple of times anyhow. It
> also has a nasty call chain before it reaches the dma map and hardware
> suggesting performance isn't important:
>
> nouveau_svm_fault():
>   args.i.m.method = NVIF_VMM_V0_PFNMAP
>   nouveau_range_fault()
>    nvif_object_ioctl()
>     client->driver->ioctl()
>       struct nvif_driver nvif_driver_nvkm:
>         .ioctl = nvkm_client_ioctl
>           nvkm_ioctl()
>            nvkm_ioctl_path()
>              nvkm_ioctl_v0[type].func(..)
>               nvkm_ioctl_mthd()
>                nvkm_object_mthd()
>                  struct nvkm_object_func nvkm_uvmm:
>                    .mthd = nvkm_uvmm_mthd
>                      nvkm_uvmm_mthd()
>                       nvkm_uvmm_mthd_pfnmap()
>                        nvkm_vmm_pfn_map()
>                         nvkm_vmm_ptes_get_map()
>                          func == gp100_vmm_pgt_pfn
>                           struct nvkm_vmm_desc_func gp100_vmm_desc_spt:
>                             .pfn = gp100_vmm_pgt_pfn
>                           nvkm_vmm_iter()
>                            REF_PTES == func == gp100_vmm_pgt_pfn()
>                             dma_map_page()
>
> Acked-by: Felix Kuehling <Felix.Kuehling at amd.com>
> Tested-by: Ralph Campbell <rcampbell at nvidia.com>
> Signed-off-by: Jason Gunthorpe <jgg at mellanox.com>
> Signed-off-by: Christoph Hellwig <hch at lst.de>
> ---
> Documentation/vm/hmm.rst                |  26 ++--
> drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c |  35 ++----
> drivers/gpu/drm/nouveau/nouveau_dmem.c  |  27 +---
> drivers/gpu/drm/nouveau/nouveau_dmem.h  |   3 +-
> drivers/gpu/drm/nouveau/nouveau_svm.c   |  87 ++++++++-----
> include/linux/hmm.h                     |  99 ++++++---------
> mm/hmm.c                                | 160 +++++++++++-------------
> 7 files changed, 192 insertions(+), 245 deletions(-)
>
...snip...
>
> +static void nouveau_hmm_convert_pfn(struct nouveau_drm *drm,
> +				    struct hmm_range *range, u64 *ioctl_addr)
> +{
> +	unsigned long i, npages;
> +
> +	/*
> +	 * The ioctl_addr prepared here is passed through nvif_object_ioctl()
> +	 * to an eventual DMA map in something like gp100_vmm_pgt_pfn()
> +	 *
> +	 * This is all just encoding the internal hmm reprensetation into a
s/reprensetation/representation/
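For anyone reading just this reply, the rest of the helper is a single
scan that folds the HMM_PFN_* state into nouveau's pfnmap ioctl bits,
roughly like this (NVIF_VMM_PFNMAP_V0_* are nouveau's existing uapi
flags; treat the encoding below as a paraphrase of the patch, not the
patch itself):

	npages = (range->end - range->start) >> PAGE_SHIFT;
	for (i = 0; i < npages; ++i) {
		struct page *page;

		if (!(range->hmm_pfns[i] & HMM_PFN_VALID)) {
			ioctl_addr[i] = 0;
			continue;
		}

		page = hmm_pfn_to_page(range->hmm_pfns[i]);
		if (is_device_private_page(page))
			/* VRAM page: translate to a device address */
			ioctl_addr[i] = nouveau_dmem_page_addr(page) |
					NVIF_VMM_PFNMAP_V0_V |
					NVIF_VMM_PFNMAP_V0_VRAM;
		else
			/* system page: DMA mapping happens further down */
			ioctl_addr[i] = page_to_phys(page) |
					NVIF_VMM_PFNMAP_V0_V |
					NVIF_VMM_PFNMAP_V0_HOST;
		if (range->hmm_pfns[i] & HMM_PFN_WRITE)
			ioctl_addr[i] |= NVIF_VMM_PFNMAP_V0_W;
	}
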
Looks good and still tests OK with nouveau.