HMM fence (was Re: [PATCH 00/35] Add HMM-based SVM memory manager to KFD)

Christian König christian.koenig at amd.com
Thu Jan 14 15:08:21 UTC 2021


Am 14.01.21 um 15:23 schrieb Daniel Vetter:
> On Thu, Jan 14, 2021 at 3:13 PM Christian König
> <ckoenig.leichtzumerken at gmail.com> wrote:
>> Am 14.01.21 um 14:57 schrieb Daniel Vetter:
>>> On Thu, Jan 14, 2021 at 2:37 PM Christian König
>>> <christian.koenig at amd.com> wrote:
>>>> Am 14.01.21 um 12:52 schrieb Daniel Vetter:
>>>>> [SNIP]
>>>>>>> I had a new idea; I wanted to think more about it but have not yet,
>>>>>>> anyway here it is. Add a new callback to dma fence which asks the
>>>>>>> question: can it deadlock? Any time a GPU driver has a pending page
>>>>>>> fault (ie something calling into the mm) it answers yes, otherwise
>>>>>>> no. The GPU shrinker would ask the question before waiting on any
>>>>>>> dma-fence and back off if it gets yes. The shrinker can still try many
>>>>>>> dma buf objects for which it does not get a yes on the associated fence.
>>>>>>>
>>>>>>> This does not solve the mmu notifier case; for this you would just
>>>>>>> invalidate the gem userptr object (with a flag but not releasing the
>>>>>>> page refcount) but you would not wait for the GPU (ie no dma fence
>>>>>>> wait in that code path anymore). The userptr API never really made
>>>>>>> the contract that it will always be in sync with the mm view of the
>>>>>>> world, so if different pages get remapped to the same virtual address
>>>>>>> while the GPU is still working with the old pages it should not be an
>>>>>>> issue (it would not be in our usage of userptr for compositors and
>>>>>>> what not).
>>>>>> The current working idea in my mind goes into a similar direction.
>>>>>>
>>>>>> But instead of a callback I'm adding a complete new class of HMM fences.
>>>>>>
>>>>>> Waiting in the MMU notifier, scheduler, TTM etc. is only allowed for
>>>>>> dma_fences, and HMM fences are ignored in container objects.
>>>>>>
>>>>>> When you handle an implicit or explicit synchronization request from
>>>>>> userspace you need to block for HMM fences to complete before taking any
>>>>>> resource locks.
>>>>> Isn't that what I call gang scheduling? I.e. you either run in HMM
>>>>> mode, or in legacy fencing mode (whether implicit or explicit doesn't
>>>>> really matter I think). By forcing that split we avoid the problem,
>>>>> but it means occasionally full stalls on mixed workloads.
>>>>>
>>>>> But that's not what Jerome wants (afaiui at least), I think his idea
>>>>> is to track the reverse dependencies of all the fences floating
>>>>> around, and then skip evicting an object if you have to wait for any
>>>>> fence that is problematic for the current calling context. And I don't
>>>>> think that's very feasible in practice.
>>>>>
>>>>> So what kind of hmm fences do you have in mind here?
>>>> It's a bit more relaxed than your gang schedule.
>>>>
>>>> See, the requirements are as follows:
>>>>
>>>> 1. dma_fences never depend on hmm_fences.
>>>> 2. hmm_fences can never preempt dma_fences.
>>>> 3. dma_fences must be able to preempt hmm_fences or we always reserve
>>>> enough hardware resources (CUs) to guarantee forward progress of dma_fences.
>>>>
>>>> Critical sections are MMU notifiers, page faults, GPU schedulers and
>>>> dma_reservation object locks.
>>>>
>>>> 4. It is valid to wait for dma_fences in critical sections.
>>>> 5. It is not valid to wait for hmm_fences in critical sections.
>>>>
>>>> Fence creation either happens during command submission or by adding
>>>> something like a barrier or signal command to your userspace queue.
>>>>
>>>> 6. If we have an hmm_fence as implicit or explicit dependency for
>>>> creating a dma_fence we must wait for that before taking any locks or
>>>> reserving resources.
>>>> 7. If we have a dma_fence as implicit or explicit dependency for
>>>> creating an hmm_fence we can wait later on. So busy waiting or special
>>>> WAIT hardware commands are valid.
>>>>
>>>> This prevents hard cuts, e.g. we can mix hmm_fences and dma_fences at the
>>>> same time on the hardware.
>>>>
>>>> In other words we can have a high priority gfx queue running jobs based
>>>> on dma_fences and a low priority compute queue running jobs based on
>>>> hmm_fences.
>>>>
>>>> Only when we switch from hmm_fence to dma_fence we need to block the
>>>> submission until all the necessary resources (both memory as well as
>>>> CUs) are available.
>>>>
>>>> This is somewhat an extension to your gang submit idea.
>>> Either I'm missing something, or this is just exactly what we
>>> documented already with userspace fences in general, and how you can't
>>> have a dma_fence depend upon a userspace fence (or hmm_fence).
>>>
>>> My gang scheduling idea is really just an alternative for what you
>>> have listed as item 3 above. Instead of requiring preempt or requiring
>>> guaranteed forward progress of some other sort, we flush out any
>>> pending dma_fence request. But _only_ those which would get stalled by
>>> the job we're running, so high-priority sdma requests we need in the
>>> kernel to shuffle buffers around are still all ok. This would be
>>> needed if your hw can't preempt, and you also have shared engines
>>> between compute and gfx, so reserving CUs won't solve the problem
>>> either.
>>>
>>> What I don't mean with my gang scheduling is a completely exclusive
>>> mode between hmm_fence and dma_fence, since that would prevent us from
>>> using copy engines and dma_fence in the kernel to shuffle memory
>>> around for hmm jobs. And that would suck, even on compute-only
>>> workloads. Maybe I should rename "gang scheduling" to "engine flush"
>>> or something like that.
>> Yeah, "engine flush" makes it much more clearer.
>>
>> What I wanted to emphasize is that we have to mix dma_fences and
>> hmm_fences running at the same time on the same hardware fighting over
>> the same resources.
>>
>> E.g. even on the newest hardware multimedia engines can't handle page
>> faults, so video decoding/encoding will still produce dma_fences.
> Well we also have to mix them so the kernel can shovel data around
> using copy engines. Plus we have to mix it at the overall subsystem
> level, because I'm not sure SoC-class gpus will ever get here;
> they definitely aren't there yet.
>
>>> I think we've already documented the basics of userspace fences or
>>> hmm_fences or whatever we'll call them here:
>>>
>>> https://dri.freedesktop.org/docs/drm/driver-api/dma-buf.html?highlight=dma_fence#indefinite-dma-fences
>> This talks about the restrictions we have for dma_fences and why
>> infinite fences (even as hmm_fence) will never work.
>>
>> But it doesn't talk about how to handle implicit or explicit
>> dependencies with something like hmm_fences.
>>
>> In other words, my proposal above allows hmm_fences to show up in
>> dma_reservation objects and to be used together with all the explicit
>> synchronization we still have, with only a medium amount of work :)
> Oh. I don't think we should put any hmm_fence or other infinite fence
> into a dma_resv object. At least not into the current dma_resv object,
> because then we have the infinite fences problem everywhere, and it's
> very hard to audit.

Yes, exactly. That's why these rules spell out how to mix them, or rather 
how not to mix them.
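
To make rule 6 a bit more concrete, here is a rough sketch of what the 
submission side could look like. This is purely illustrative: the 
hmm_fence type, hmm_fence_wait() helper and my_job structure are all 
hypothetical; only dma_resv_lock()/dma_resv_unlock() and 
MAX_SCHEDULE_TIMEOUT are existing kernel API.

#include <linux/dma-resv.h>
#include <linux/sched.h>	/* MAX_SCHEDULE_TIMEOUT */

/* Hypothetical types and helpers, for illustration only. */
struct hmm_fence;
long hmm_fence_wait(struct hmm_fence *fence, long timeout);

struct my_job {
	struct dma_resv *resv;		/* reservation object of the BO */
	struct hmm_fence **hmm_deps;	/* implicit/explicit hmm deps */
	unsigned int num_hmm_deps;
};

/*
 * Rule 6: flush out all hmm_fence dependencies *before* taking any
 * locks or reserving resources, because inside the critical section
 * only dma_fences may be waited on (rules 4 and 5).
 */
static int my_submit_with_dma_fence(struct my_job *job)
{
	unsigned int i;
	long timeout;
	int ret;

	/* No locks held here, so waiting on hmm_fences is still fine. */
	for (i = 0; i < job->num_hmm_deps; i++) {
		timeout = hmm_fence_wait(job->hmm_deps[i],
					 MAX_SCHEDULE_TIMEOUT);
		if (timeout < 0)
			return timeout;	/* interrupted or error */
	}

	/* Critical section: only dma_fence waits are allowed from here on. */
	ret = dma_resv_lock(job->resv, NULL);
	if (ret)
		return ret;

	/* ... reserve memory, handle dma_fence deps, publish the dma_fence ... */

	dma_resv_unlock(job->resv);
	return 0;
}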

> What we could do is add new hmm_fence-only slots for implicit sync,

Yeah, we would keep them separate from the dma_fence slots.
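
Just to illustrate what such separate slots might look like (again only a 
sketch, no such structure exists today): hmm_fences would live in their own 
per-BO container next to the dma_resv, so nothing that iterates the 
dma_resv in a critical section ever sees them.

#include <linux/dma-resv.h>
#include <linux/list.h>
#include <linux/spinlock.h>

struct hmm_fence;	/* hypothetical, as above */

/* Sketch only: per-BO sync state with separate hmm_fence slots. */
struct my_bo_sync {
	struct dma_resv *resv;		/* dma_fences only, unchanged */

	/* New, separate implicit-sync slots for hmm_fences. */
	spinlock_t hmm_lock;
	struct list_head hmm_fences;	/* list of struct my_hmm_entry */
};

struct my_hmm_entry {
	struct list_head node;
	struct hmm_fence *fence;
};

/*
 * Consumers that understand hmm_fences (command submission, userspace
 * wait primitives) walk both ->resv and ->hmm_fences; eviction, MMU
 * notifiers and other critical sections only ever look at ->resv.
 */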

> but I think consensus is that implicit sync is bad, never do it again.
> Last time around (for timeline syncobj) we've also pushed the waiting
> on cross-over to userspace, and I think that's the right option, so we
> need userspace to understand the hmm fence anyway. At that point we
> might as well bite the bullet and do another round of wayland/dri
> protocols.

As you said, I don't see this happening in the next 5 years either.

So I think we have to somehow solve this in the kernel or we will go in 
circles all the time.

> So from that pov I think the kernel should at most deal with an
> hmm_fence for cross-process communication and maybe some standard wait
> primitives (for userspace to use, not for the kernel).
>
> The only use case this would forbid is using page faults for legacy
> implicit/explicit dma_fence synced workloads, and I think that's
> perfectly ok to not allow. Especially since the motivation here for
> all this is compute, and compute doesn't pass around dma_fences
> anyway.

As Alex said, we will see this for gfx rather soon as well, and we will 
most likely see combinations of old dma_fence-based integrated graphics 
with new dedicated GPUs.

So I don't think we can reduce the problem to compute and not support 
anything else.

Regards,
Christian.

>
>>> I think the only thing missing is clarifying a bit what you have under
>>> item 3, i.e. how do we make sure there's no accidental hidden
>>> dependency between hmm_fence and dma_fence. Maybe a subsection about
>>> gpu page fault handling?
>> The real improvement is item 6. The problem with it is that it requires
>> auditing all occasions when we create dma_fences so that we don't
>> accidentally depend on an HMM fence.
> We have that rule already, it's the "dma_fence must not depend upon an
> infinite fence anywhere" rule we documented last summer. So that
> doesn't feel new.
> -Daniel
>
>> Regards,
>> Christian.
>>
>>> Or are we still talking past each another a bit here?
>>> -Daniel
>>>
>>>
>>>> Regards,
>>>> Christian.
>>>>
>>>>> -Daniel
>>>>>
>


