[PATCH 00/35] Add HMM-based SVM memory manager to KFD
Daniel Vetter
daniel at ffwll.ch
Thu Jan 14 11:52:03 UTC 2021
On Thu, Jan 14, 2021 at 11:49 AM Christian König
<ckoenig.leichtzumerken at gmail.com> wrote:
>
> Am 13.01.21 um 17:56 schrieb Jerome Glisse:
> > On Fri, Jan 08, 2021 at 03:40:07PM +0100, Daniel Vetter wrote:
> >> On Thu, Jan 07, 2021 at 11:25:41AM -0500, Felix Kuehling wrote:
> >>> Am 2021-01-07 um 4:23 a.m. schrieb Daniel Vetter:
> >>>> On Wed, Jan 06, 2021 at 10:00:52PM -0500, Felix Kuehling wrote:
> >>>>> This is the first version of our HMM based shared virtual memory manager
> >>>>> for KFD. There are still a number of known issues that we're working through
> >>>>> (see below). This will likely lead to some pretty significant changes in
> >>>>> MMU notifier handling and locking on the migration code paths. So don't
> >>>>> get hung up on those details yet.
> >>>>>
> >>>>> But I think this is a good time to start getting feedback. We're pretty
> >>>>> confident about the ioctl API, which is both simple and extensible for the
> >>>>> future. (see patches 4,16) The user mode side of the API can be found here:
> >>>>> https://github.com/RadeonOpenCompute/ROCT-Thunk-Interface/blob/fxkamd/hmm-wip/src/svm.c
> >>>>>
> >>>>> I'd also like another pair of eyes on how we're interfacing with the GPU VM
> >>>>> code in amdgpu_vm.c (see patches 12,13), retry page fault handling (24,25),
> >>>>> and some retry IRQ handling changes (32).
> >>>>>
> >>>>>
> >>>>> Known issues:
> >>>>> * won't work with IOMMU enabled, we need to dma_map all pages properly
> >>>>> * still working on some race conditions and random bugs
> >>>>> * performance is not great yet
> >>>> Still catching up, but I think there's another one for your list:
> >>>>
> >>>> * hmm gpu context preempt vs page fault handling. I've had a short
> >>>> discussion about this one with Christian before the holidays, and also
> >>>> some private chats with Jerome. It's nasty since there's no easy fix,
> >>>> much less a good idea of what the best approach here would be.
> >>> Do you have a pointer to that discussion or any more details?
> >> Essentially, if you're handling an hmm page fault from the gpu, you can
> >> deadlock by calling dma_fence_wait on a (chain of, possibly) other command
> >> submissions or compute contexts. That deadlocks if you can't preempt while
> >> you have that page fault pending. Two solutions:
> >>
> >> - your hw can (at least for compute ctx) preempt even when a page fault is
> >> pending
> >>
> >> - lots of screaming in trying to come up with an alternate solution. They
> >> all suck.
> >>
> >> Note that dma_fence_wait is a hard requirement, because we need it for
> >> mmu notifiers and shrinkers; disallowing it would disable dynamic memory
> >> management. That is the current "ttm is self-limited to 50% of system
> >> memory" limitation Christian is trying to lift. So that's really not
> >> a restriction we can lift, at least not in upstream where we need to also
> >> support old style hardware which doesn't have page fault support and
> >> really has no other option to handle memory management than
> >> dma_fence_wait.
> >>
> >> Thread was here:
> >>
> >> https://lore.kernel.org/dri-devel/CAKMK7uGgoeF8LmFBwWh5mW1k4xWjuUh3hdSFpVH1NBM7K0=edA@mail.gmail.com/
> >>
> >> There's a few ways to resolve this (without having preempt-capable
> >> hardware), but they're all supremely nasty.
> >> -Daniel
> >>
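To spell the cycle out with a rough sketch (purely illustrative, the
shrinker callback and the helper below are made up and not code from
this series):

#include <linux/dma-fence.h>
#include <linux/shrinker.h>

/* Entered from direct reclaim while the driver is in the middle of
 * servicing a recoverable GPU page fault (the fault handler needed to
 * allocate or migrate memory). */
static unsigned long demo_shrinker_scan(struct shrinker *s,
                                        struct shrink_control *sc)
{
        /* Hypothetical helper: pick a buffer to evict and return the
         * fence of the job still using it. */
        struct dma_fence *fence = demo_pick_eviction_fence();

        /*
         * If that job runs on the engine that is currently stalled on
         * the page fault, and the hw cannot preempt around the fault,
         * this wait never returns: fault -> reclaim -> fence wait ->
         * fault.
         */
        dma_fence_wait(fence, false);

        return 1;       /* pretend we freed one object */
}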
> > I had a new idea; I wanted to think more about it but have not yet,
> > anyway here it is: add a new callback to dma fence which asks the
> > question "can this deadlock?" Any time a GPU driver has a pending page
> > fault (ie something calling into the mm) it answers yes, otherwise
> > no. The GPU shrinker would ask the question before waiting on any
> > dma-fence and back off if it gets a yes. The shrinker can still try the
> > many dma-buf objects for which it does not get a yes on the associated
> > fence.
> >
> > This does not solve the mmu notifier case; for that you would just
> > invalidate the gem userptr object (with a flag, but not releasing the
> > page refcount), but you would not wait for the GPU (ie no dma fence
> > wait in that code path anymore). The userptr API never really made
> > the contract that it will always be in sync with the mm view of the
> > world, so if a different page gets remapped to the same virtual address
> > while the GPU is still working with the old pages it should not be an
> > issue (it would not be in our usage of userptr for compositors and
> > whatnot).
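If I read that right, something along these lines (the may_deadlock
hook is entirely hypothetical, dma_fence_ops has no such member today):

#include <linux/dma-fence.h>

/*
 * Imagine dma_fence_ops grew an optional hook
 *
 *      bool (*may_deadlock)(struct dma_fence *fence);
 *
 * which the issuing driver wires up to "do I currently have a GPU page
 * fault outstanding, i.e. am I calling back into the mm?".
 */
static bool demo_safe_to_wait_from_reclaim(struct dma_fence *fence)
{
        /* Driver has no opinion: assume waiting is fine. */
        if (!fence->ops->may_deadlock)
                return true;

        /* Driver says it might deadlock: back off, the shrinker can
         * try a different dma-buf object / fence instead. */
        return !fence->ops->may_deadlock(fence);
}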
>
> The current working idea in my mind goes into a similar direction.
>
> But instead of a callback I'm adding a complete new class of HMM fences.
>
> Waiting in the MMU notifier, scheduler, TTM etc. is only allowed for
> dma_fences, and HMM fences are ignored in container objects.
>
> When you handle an implicit or explicit synchronization request from
> userspace you need to block for HMM fences to complete before taking any
> resource locks.
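Just to make sure I follow, roughly like this? (all names made up,
nothing like this exists yet)

#include <linux/dma-fence.h>

/* Hypothetical second fence class, recognizable by its ops. */
extern const struct dma_fence_ops hmm_fence_ops;

static bool dma_fence_is_hmm(struct dma_fence *fence)
{
        return fence->ops == &hmm_fence_ops;
}

/* Memory-management paths (mmu notifier, shrinker, TTM eviction,
 * scheduler) wait on classic dma_fences only and skip HMM fences,
 * including when they sit in container objects. */
static bool mm_path_may_wait_on(struct dma_fence *fence)
{
        return !dma_fence_is_hmm(fence);
}

/* Implicit/explicit sync requests from userspace instead flush all
 * HMM fences first, before any reservation or other resource lock is
 * taken. */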
Isn't that what I call gang scheduling? I.e. you either run in HMM
mode, or in legacy fencing mode (whether implicit or explicit doesn't
really matter I think). By forcing that split we avoid the problem,
but it means occasionally full stalls on mixed workloads.
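I.e. something like this at the top level (completely made-up types and
helpers, just to show the shape of the split):

enum demo_sched_mode { DEMO_MODE_LEGACY_FENCING, DEMO_MODE_HMM };

struct demo_device {
        enum demo_sched_mode mode;
        /* ... */
};

static void demo_drain_all_work(struct demo_device *dev);      /* made up */

static void demo_switch_mode(struct demo_device *dev,
                             enum demo_sched_mode mode)
{
        if (dev->mode == mode)
                return;

        /* This is the full stall on mixed workloads: everything
         * submitted in the old mode has to drain before work of the
         * other kind is admitted. */
        demo_drain_all_work(dev);
        dev->mode = mode;
}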
But that's not what Jerome wants (afaiui at least), I think his idea
is to track the reverse dependencies of all the fences floating
around, and then skip evicting an object if you have to wait for any
fence that is problematic for the current calling context. And I don't
think that's very feasible in practice.
So what kind of hmm fences do you have in mind here?
-Daniel
>
> Regards,
> Christian.
>
> >
> > Maybe I'm overlooking something there.
> >
> > Cheers,
> > Jérôme
> >
> > _______________________________________________
> > amd-gfx mailing list
> > amd-gfx at lists.freedesktop.org
> > https://lists.freedesktop.org/mailman/listinfo/amd-gfx
>
--
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch