[PATCH 00/35] Add HMM-based SVM memory manager to KFD

Christian König ckoenig.leichtzumerken at gmail.com
Fri Jan 8 14:45:47 UTC 2021


On 08.01.21 at 15:40, Daniel Vetter wrote:
> On Thu, Jan 07, 2021 at 11:25:41AM -0500, Felix Kuehling wrote:
>> On 2021-01-07 at 4:23 a.m., Daniel Vetter wrote:
>>> On Wed, Jan 06, 2021 at 10:00:52PM -0500, Felix Kuehling wrote:
>>>> This is the first version of our HMM based shared virtual memory manager
>>>> for KFD. There are still a number of known issues that we're working through
>>>> (see below). This will likely lead to some pretty significant changes in
>>>> MMU notifier handling and locking on the migration code paths. So don't
>>>> get hung up on those details yet.
>>>>
>>>> But I think this is a good time to start getting feedback. We're pretty
>>>> confident about the ioctl API, which is both simple and extensible for
>>>> the future (see patches 4 and 16). The user mode side of the API can be
>>>> found here:
>>>> https://github.com/RadeonOpenCompute/ROCT-Thunk-Interface/blob/fxkamd/hmm-wip/src/svm.c
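
As a side note for anyone who doesn't want to dig through the patches right
away: the extensibility comes from an attribute-list pattern, roughly like
the sketch below. Struct and field names here are illustrative, not a
verbatim copy; the authoritative definitions are in patches 4 and 16
(include/uapi/linux/kfd_ioctl.h).

/* Illustrative sketch of the attribute-list ioctl pattern.  One ioctl
 * covers an address range and carries a variable-length list of
 * (type, value) attributes, so new attributes can extend the ABI
 * without adding new ioctls.
 */
struct kfd_ioctl_svm_attribute {
	__u32 type;	/* e.g. preferred location, access flags, ... */
	__u32 value;
};

struct kfd_ioctl_svm_args {
	__u64 start_addr;	/* page-aligned start of the range */
	__u64 size;		/* page-aligned size of the range */
	__u32 op;		/* e.g. set or get range attributes */
	__u32 nattr;		/* number of entries in attrs[] */
	struct kfd_ioctl_svm_attribute attrs[];
};

A kernel can then reject unknown attribute types with -EINVAL instead of
needing a whole new ioctl, which is what keeps the ABI forward-extensible.
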
>>>>
>>>> I'd also like another pair of eyes on how we're interfacing with the GPU VM
>>>> code in amdgpu_vm.c (patches 12 and 13), retry page fault handling (patches
>>>> 24 and 25), and some retry IRQ handling changes (patch 32).
>>>>
>>>>
>>>> Known issues:
>>>> * won't work with IOMMU enabled; we need to dma_map all pages properly
>>>> * still working on some race conditions and random bugs
>>>> * performance is not great yet
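
On the IOMMU item above: the missing step is presumably the usual one of
pushing each page returned by hmm_range_fault() through the DMA API instead
of handing raw pfns to the GPU. A minimal sketch of that step, with a
made-up helper name and the unmap/invalidate side omitted:

#include <linux/hmm.h>
#include <linux/dma-mapping.h>

/* Sketch only: translate the pfns filled in by hmm_range_fault() into
 * IOMMU-visible DMA addresses for the GPU page tables.  Real code also
 * has to unmap everything again on mmu notifier invalidation.
 */
static int svm_dma_map_pages(struct device *dev, struct hmm_range *range,
			     dma_addr_t *dma_addr, unsigned long npages)
{
	unsigned long i;

	for (i = 0; i < npages; i++) {
		struct page *page = hmm_pfn_to_page(range->hmm_pfns[i]);

		dma_addr[i] = dma_map_page(dev, page, 0, PAGE_SIZE,
					   DMA_BIDIRECTIONAL);
		if (dma_mapping_error(dev, dma_addr[i]))
			return -EFAULT;
	}
	return 0;
}
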
>>> Still catching up, but I think there's another one for your list:
>>>
>>>   * hmm gpu context preempt vs page fault handling. I've had a short
>>>     discussion about this one with Christian before the holidays, and also
>>>     some private chats with Jerome. It's nasty since there's no easy fix,
>>>     much less a good idea of what the best approach is here.
>> Do you have a pointer to that discussion or any more details?
> Essentially if you're handling an hmm page fault from the gpu, you can
> deadlock by calling dma_fence_wait on a (possibly chained) set of other
> command submissions or compute contexts. That deadlocks if you can't
> preempt while you have that page fault pending. Two solutions:
>
> - your hw can (at least for compute ctx) preempt even when a page fault is
>    pending
>
> - lots of screaming in trying to come up with an alternate solution. They
>    all suck.
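
To spell the cycle out for anyone catching up (a schematic with made-up
names, not code from this series):

#include <linux/dma-fence.h>

struct svm_range;					/* hypothetical */
struct gpu_bo { struct dma_fence *last_use_fence; };	/* hypothetical */
int svm_migrate_range(struct svm_range *range);		/* hypothetical */

/* 1. The GPU faults on behalf of compute context A.  Servicing the
 *    fault may allocate memory or migrate pages, which can enter
 *    direct reclaim. */
static int gpu_fault_handler(struct svm_range *range)
{
	return svm_migrate_range(range);	/* may allocate -> reclaim */
}

/* 2. Reclaim fires an mmu notifier or shrinker, which must wait for
 *    the GPU before it may take the pages back: */
static void evict_pages(struct gpu_bo *bo)
{
	dma_fence_wait(bo->last_use_fence, false);	/* waits on fence F */
}

/* 3. Fence F belongs to compute context B.  F only signals once B
 *    completes or is preempted -- but the engine cannot preempt while
 *    A's fault is pending, and A's fault is stuck inside reclaim.
 *    The cycle is closed: deadlock. */
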
>
> Note that dma_fence_wait is a hard requirement, because we need it for
> mmu notifiers and shrinkers; disallowing it would disable dynamic memory
> management, which is exactly the current "ttm is self-limited to 50% of
> system memory" limitation Christian is trying to lift. So that's really
> not a restriction we can lift, at least not in upstream, where we also
> need to support old style hardware which doesn't have page fault support
> and really has no other option to handle memory management than
> dma_fence_wait.
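
For reference, the canonical pattern that makes this wait unavoidable looks
roughly like the following (invented names, not the actual amdgpu/amdkfd
notifier):

#include <linux/mmu_notifier.h>
#include <linux/dma-fence.h>

struct gpu_user_mapping {				/* hypothetical */
	struct mmu_interval_notifier notifier;
	struct dma_fence *last_job_fence;
};

/* Before the kernel may unmap or reclaim the pages, the driver has to
 * prove the GPU is done with them, and the only proof it has is the
 * fence of the last job using the mapping.  (A real callback must also
 * honor mmu_notifier_range_blockable(); omitted here.) */
static bool gpu_mn_invalidate(struct mmu_interval_notifier *mni,
			      const struct mmu_notifier_range *range,
			      unsigned long cur_seq)
{
	struct gpu_user_mapping *m = container_of(mni, typeof(*m), notifier);

	mmu_interval_set_seq(mni, cur_seq);

	/* Block reclaim until the GPU has stopped touching the pages. */
	dma_fence_wait(m->last_job_fence, false);
	return true;
}
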
>
> Thread was here:
>
> https://lore.kernel.org/dri-devel/CAKMK7uGgoeF8LmFBwWh5mW1k4xWjuUh3hdSFpVH1NBM7K0=edA@mail.gmail.com/
>
> There are a few ways to resolve this (without having preempt-capable
> hardware), but they're all supremely nasty.
> -Daniel
>
>> Thanks,
>>    Felix
>>
>>
>>> I'll try to look at this more in-depth when I'm catching up on mails.
>>> -Daniel
>>>
>>>> Alex Sierra (12):
>>>>    drm/amdgpu: replace per_device_list by array
>>>>    drm/amdkfd: helper to convert gpu id and idx
>>>>    drm/amdkfd: add xnack enabled flag to kfd_process
>>>>    drm/amdkfd: add ioctl to configure and query xnack retries
>>>>    drm/amdkfd: invalidate tables on page retry fault
>>>>    drm/amdkfd: page table restore through svm API
>>>>    drm/amdkfd: SVM API call to restore page tables
>>>>    drm/amdkfd: add svm_bo reference for eviction fence
>>>>    drm/amdgpu: add param bit flag to create SVM BOs
>>>>    drm/amdkfd: add svm_bo eviction mechanism support
>>>>    drm/amdgpu: svm bo enable_signal call condition
>>>>    drm/amdgpu: add svm_bo eviction to enable_signal cb
>>>>
>>>> Philip Yang (23):
>>>>    drm/amdkfd: select kernel DEVICE_PRIVATE option
>>>>    drm/amdkfd: add svm ioctl API
>>>>    drm/amdkfd: Add SVM API support capability bits
>>>>    drm/amdkfd: register svm range
>>>>    drm/amdkfd: add svm ioctl GET_ATTR op
>>>>    drm/amdgpu: add common HMM get pages function
>>>>    drm/amdkfd: validate svm range system memory
>>>>    drm/amdkfd: register overlap system memory range
>>>>    drm/amdkfd: deregister svm range
>>>>    drm/amdgpu: export vm update mapping interface
>>>>    drm/amdkfd: map svm range to GPUs
>>>>    drm/amdkfd: svm range eviction and restore
>>>>    drm/amdkfd: register HMM device private zone
>>>>    drm/amdkfd: validate vram svm range from TTM
>>>>    drm/amdkfd: support xgmi same hive mapping
>>>>    drm/amdkfd: copy memory through gart table
>>>>    drm/amdkfd: HMM migrate ram to vram
>>>>    drm/amdkfd: HMM migrate vram to ram
>>>>    drm/amdgpu: reserve fence slot to update page table
>>>>    drm/amdgpu: enable retry fault wptr overflow
>>>>    drm/amdkfd: refine migration policy with xnack on
>>>>    drm/amdkfd: add svm range validate timestamp
>>>>    drm/amdkfd: multiple gpu migrate vram to vram
>>>>
>>>>   drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.c    |    3 +
>>>>   drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.h    |    4 +-
>>>>   .../gpu/drm/amd/amdgpu/amdgpu_amdkfd_fence.c  |   16 +-
>>>>   .../gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c  |   13 +-
>>>>   drivers/gpu/drm/amd/amdgpu/amdgpu_mn.c        |   83 +
>>>>   drivers/gpu/drm/amd/amdgpu/amdgpu_mn.h        |    7 +
>>>>   drivers/gpu/drm/amd/amdgpu/amdgpu_object.h    |    5 +
>>>>   drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c       |   90 +-
>>>>   drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c        |   47 +-
>>>>   drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h        |   10 +
>>>>   drivers/gpu/drm/amd/amdgpu/vega10_ih.c        |   32 +-
>>>>   drivers/gpu/drm/amd/amdgpu/vega20_ih.c        |   32 +-
>>>>   drivers/gpu/drm/amd/amdkfd/Kconfig            |    1 +
>>>>   drivers/gpu/drm/amd/amdkfd/Makefile           |    4 +-
>>>>   drivers/gpu/drm/amd/amdkfd/kfd_chardev.c      |  170 +-
>>>>   drivers/gpu/drm/amd/amdkfd/kfd_iommu.c        |    8 +-
>>>>   drivers/gpu/drm/amd/amdkfd/kfd_migrate.c      |  866 ++++++
>>>>   drivers/gpu/drm/amd/amdkfd/kfd_migrate.h      |   59 +
>>>>   drivers/gpu/drm/amd/amdkfd/kfd_priv.h         |   52 +-
>>>>   drivers/gpu/drm/amd/amdkfd/kfd_process.c      |  200 +-
>>>>   .../amd/amdkfd/kfd_process_queue_manager.c    |    6 +-
>>>>   drivers/gpu/drm/amd/amdkfd/kfd_svm.c          | 2564 +++++++++++++++++
>>>>   drivers/gpu/drm/amd/amdkfd/kfd_svm.h          |  135 +
>>>>   drivers/gpu/drm/amd/amdkfd/kfd_topology.c     |    1 +
>>>>   drivers/gpu/drm/amd/amdkfd/kfd_topology.h     |   10 +-
>>>>   include/uapi/linux/kfd_ioctl.h                |  169 +-
>>>>   26 files changed, 4296 insertions(+), 291 deletions(-)
>>>>   create mode 100644 drivers/gpu/drm/amd/amdkfd/kfd_migrate.c
>>>>   create mode 100644 drivers/gpu/drm/amd/amdkfd/kfd_migrate.h
>>>>   create mode 100644 drivers/gpu/drm/amd/amdkfd/kfd_svm.c
>>>>   create mode 100644 drivers/gpu/drm/amd/amdkfd/kfd_svm.h
>>>>
>>>> -- 
>>>> 2.29.2
>>>>
>>>> _______________________________________________
>>>> dri-devel mailing list
>>>> dri-devel at lists.freedesktop.org
>>>> https://lists.freedesktop.org/mailman/listinfo/dri-devel