HMM fence (was Re: [PATCH 00/35] Add HMM-based SVM memory manager to KFD)

Christian König ckoenig.leichtzumerken at gmail.com
Thu Jan 14 14:13:11 UTC 2021


On 14.01.21 at 14:57 Daniel Vetter wrote:
> On Thu, Jan 14, 2021 at 2:37 PM Christian König
> <christian.koenig at amd.com> wrote:
>> On 14.01.21 at 12:52 Daniel Vetter wrote:
>>> [SNIP]
>>>>> I had a new idea; I wanted to think more about it but have not yet,
>>>>> so anyway here it is. Add a new callback to dma_fence which asks
>>>>> the question: can it deadlock? Any time a GPU driver has a pending
>>>>> page fault (i.e. something calling into the mm) it answers yes,
>>>>> otherwise no. The GPU shrinker would ask the question before
>>>>> waiting on any dma-fence and back off if it gets a yes. The
>>>>> shrinker can still try many dma-buf objects for which it does not
>>>>> get a yes on the associated fence.
>>>>>
>>>>> This does not solve the MMU notifier case; for that you would just
>>>>> invalidate the gem userptr object (with a flag, but without
>>>>> releasing the page refcount), and you would not wait for the GPU
>>>>> (i.e. no dma-fence wait in that code path anymore). The userptr API
>>>>> never really made the contract that it will always be in sync with
>>>>> the mm view of the world, so if a different page gets remapped to
>>>>> the same virtual address while the GPU is still working with the
>>>>> old pages it should not be an issue (it would not be in our usage
>>>>> of userptr for compositors and whatnot).
>>>> The current working idea in my mind goes in a similar direction.
>>>>
>>>> But instead of a callback I'm adding a completely new class of HMM
>>>> fences.
>>>>
>>>> Waiting in the MMU notifier, scheduler, TTM etc. is only allowed
>>>> for dma_fences, and HMM fences are ignored in container objects.
>>>>
>>>> When you handle an implicit or explicit synchronization request from
>>>> userspace you need to block for HMM fences to complete before taking any
>>>> resource locks.
>>> Isn't that what I call gang scheduling? I.e. you either run in HMM
>>> mode, or in legacy fencing mode (whether implicit or explicit doesn't
>>> really matter I think). By forcing that split we avoid the problem,
>>> but it means occasional full stalls on mixed workloads.
>>>
>>> But that's not what Jerome wants (afaiui at least); I think his idea
>>> is to track the reverse dependencies of all the fences floating
>>> around and then skip evicting an object if you have to wait for any
>>> fence that is problematic for the current calling context. And I
>>> don't think that's very feasible in practice.
>>>
>>> So what kind of hmm fences do you have in mind here?
>> It's a bit more relaxed than your gang scheduling.
>>
>> The requirements are as follows:
>>
>> 1. dma_fences never depend on hmm_fences.
>> 2. hmm_fences can never preempt dma_fences.
>> 3. dma_fences must be able to preempt hmm_fences or we always reserve
>> enough hardware resources (CUs) to guarantee forward progress of dma_fences.
>>
>> Critical sections are MMU notifiers, page faults, GPU schedulers and
>> dma_reservation object locks.
>>
>> 4. It is valid to wait for dma_fences in critical sections.
>> 5. It is not valid to wait for hmm_fences in critical sections.
>>
>> Fence creation either happens during command submission or by adding
>> something like a barrier or signal command to your userspace queue.
>>
>> 6. If we have an hmm_fence as implicit or explicit dependency for
>> creating a dma_fence we must wait for that before taking any locks or
>> reserving resources.
>> 7. If we have a dma_fence as implicit or explicit dependency for
>> creating an hmm_fence we can wait later on. So busy waiting or special
>> WAIT hardware commands are valid.
>>
>> This avoids hard cuts, i.e. we can mix hmm_fences and dma_fences at
>> the same time on the hardware.
>>
>> In other words we can have a high priority gfx queue running jobs based
>> on dma_fences and a low priority compute queue running jobs based on
>> hmm_fences.
>>
>> Only when we switch from hmm_fence to dma_fence do we need to block
>> the submission until all the necessary resources (both memory as
>> well as CUs) are available.
>>
>> This is somewhat of an extension to your gang submit idea.
> Either I'm missing something, or this is just exactly what we
> documented already with userspace fences in general, and how you can't
> have a dma_fence depend upon a userspace fence (or hmm_fence).
>
> My gang scheduling idea is really just an alternative for what you
> have listed as item 3 above. Instead of requiring preemption or
> guaranteed forward progress of some other sort, we flush out any
> pending dma_fence requests. But _only_ those which would get stalled
> by the job we're running, so high-priority SDMA requests we need in
> the kernel to shuffle buffers around are still all OK. This would be
> needed if your hw can't preempt and you also have shared engines
> between compute and gfx, so reserving CUs won't solve the problem
> either.
>
> What I don't mean by my gang scheduling is a completely exclusive
> mode between hmm_fence and dma_fence, since that would prevent us from
> using copy engines and dma_fence in the kernel to shuffle memory
> around for hmm jobs. And that would suck, even on compute-only
> workloads. Maybe I should rename "gang scheduling" to "engine flush"
> or something like that.

Yeah, "engine flush" makes it much more clearer.

What I wanted to emphasize is that we have to mix dma_fences and
hmm_fences running at the same time on the same hardware, fighting over
the same resources.

E.g. even on the newest hardware multimedia engines can't handle page 
faults, so video decoding/encoding will still produce dma_fences.
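
To illustrate how I would tell the two classes apart, here is a rough
and completely untested sketch. A flag bit is just one possible way to
mark the new class (could as well be a separate structure), and all the
names are made up:

        #include <linux/dma-fence.h>

        /* Sketch: mark long running fences with an extra flag bit so
         * that core code can tell the two fence classes apart. */
        #define DMA_FENCE_FLAG_HMM_BIT  (DMA_FENCE_FLAG_USER_BITS + 0)

        static inline bool dma_fence_is_hmm(struct dma_fence *f)
        {
                return test_bit(DMA_FENCE_FLAG_HMM_BIT, &f->flags);
        }

        /* Rule 5 above: waits in critical sections (MMU notifier, TTM
         * eviction, scheduler) must never block on an hmm_fence. */
        static long critical_section_wait(struct dma_fence *f, long timeout)
        {
                if (WARN_ON(dma_fence_is_hmm(f)))
                        return -EDEADLK;

                return dma_fence_wait_timeout(f, false, timeout);
        }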

> I think the basics of userspace fences or hmm_fences or whatever we'll
> call them we've already documented here:
>
> https://dri.freedesktop.org/docs/drm/driver-api/dma-buf.html?highlight=dma_fence#indefinite-dma-fences

This talks about the restrictions we have for dma_fences and why 
infinite fences (even as hmm_fence) will never work.

But it doesn't talk about how to handle implicit or explicit 
dependencies with something like hmm_fences.

In other words, my proposal above allows hmm_fences to show up in
dma_reservation objects and to be used together with all the explicit
synchronization we still have, with only a medium amount of work :)
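
For the rule 6 part the command submission code would then do something
like this before taking any reservation locks. Again just a sketch: it
reuses the dma_fence_is_hmm() helper from above and hand-waves how the
dependencies were collected:

        /* Resolve all hmm_fence dependencies *before* we lock anything
         * or reserve resources for the dma_fence we are about to
         * create. */
        static int wait_for_hmm_deps(struct dma_fence **deps,
                                     unsigned int count)
        {
                unsigned int i;
                long r;

                for (i = 0; i < count; ++i) {
                        if (!dma_fence_is_hmm(deps[i]))
                                continue;

                        /* Interruptible wait, no locks held here. */
                        r = dma_fence_wait(deps[i], true);
                        if (r < 0)
                                return r;
                }

                return 0;
        }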

> I think the only thing missing is clarifying a bit what you have under
> item 3, i.e. how do we make sure there's no accidental hidden
> dependency between hmm_fence and dma_fence. Maybe a subsection about
> gpu page fault handling?

The real improvement is item 6. The problem with it is that it requires
auditing all the places where we create dma_fences so that we don't
accidentally depend on an HMM fence.
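
At least we could back that audit with a runtime check, e.g. something
like this wherever a dma_fence based job picks up a dependency (again
only a sketch, the hook point is made up):

        /* Debug aid for the audit: a dma_fence based job must never end
         * up with an unsignaled hmm_fence as dependency. */
        static void check_dma_fence_dep(struct dma_fence *dep)
        {
                WARN(dma_fence_is_hmm(dep) && !dma_fence_is_signaled(dep),
                     "dma_fence depends on unsignaled hmm_fence %llu:%llu\n",
                     dep->context, dep->seqno);
        }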

Regards,
Christian.

>
> Or are we still talking past each another a bit here?
> -Daniel
>
>
>> Regards,
>> Christian.
>>
>>> -Daniel
>>>
>


