[Linaro-mm-sig] [PATCH 1/2] dma-buf.rst: Document why indefinite fences are a bad idea

Thomas Hellström (Intel) thomas_os at shipmail.org
Wed Jul 22 08:05:29 UTC 2020


On 2020-07-22 09:11, Daniel Vetter wrote:
> On Wed, Jul 22, 2020 at 8:45 AM Thomas Hellström (Intel)
> <thomas_os at shipmail.org> wrote:
>>
>> On 2020-07-22 00:45, Dave Airlie wrote:
>>> On Tue, 21 Jul 2020 at 18:47, Thomas Hellström (Intel)
>>> <thomas_os at shipmail.org> wrote:
>>>> On 7/21/20 9:45 AM, Christian König wrote:
>>>>> Am 21.07.20 um 09:41 schrieb Daniel Vetter:
>>>>>> On Mon, Jul 20, 2020 at 01:15:17PM +0200, Thomas Hellström (Intel)
>>>>>> wrote:
>>>>>>> Hi,
>>>>>>>
>>>>>>> On 7/9/20 2:33 PM, Daniel Vetter wrote:
>>>>>>>> Comes up every few years, gets somewhat tedious to discuss, let's
>>>>>>>> write this down once and for all.
>>>>>>>>
>>>>>>>> What I'm not sure about is whether the text should be more explicit in
>>>>>>>> flat out mandating the amdkfd eviction fences for long running compute
>>>>>>>> workloads or workloads where userspace fencing is allowed.
>>>>>>> Although (in my humble opinion) it might be possible to completely
>>>>>>> untangle kernel-introduced fences for resource management from the
>>>>>>> dma-fences used for completion- and dependency tracking, and to lift
>>>>>>> a lot of the restrictions on dma-fences, including the prohibition
>>>>>>> of infinite ones, I think it makes sense to describe the current
>>>>>>> state.
>>>>>> Yeah I think a future patch needs to type up how we want to make that
>>>>>> happen (for some cross driver consistency) and what needs to be
>>>>>> considered. Some of the necessary parts are already there (with like the
>>>>>> preemption fences amdkfd has as an example), but I think some clear docs
>>>>>> on what's required from both hw, drivers and userspace would be really
>>>>>> good.
>>>>> I'm currently writing that up, but probably still need a few days for
>>>>> this.
>>>> Great! I put down some (very) initial thoughts a couple of weeks ago
>>>> building on eviction fences for various hardware complexity levels here:
>>>>
>>>> https://gitlab.freedesktop.org/thomash/docs/-/blob/master/Untangling%20dma-fence%20and%20memory%20allocation.odt
>>> We are seeing HW that has recoverable GPU page faults, but only for
>>> compute tasks, and a scheduler without semaphore HW for graphics.
>>>
>>> So a single driver may have to expose both models to userspace, which
>>> also introduces the problem of how to interoperate between the two
>>> models on one card.
>>>
>>> Dave.
>> Hmm, yes. To begin with, it's important to note that this is not a
>> replacement for new programming models or APIs. This is something that
>> takes place internally in drivers to mitigate many of the restrictions
>> that are currently imposed on dma-fence and documented in this and
>> previous series. It's basically the driver-private narrow completions
>> Jason suggested in the lockdep patch discussions, implemented the same
>> way as eviction fences.
>>
>> The memory fence API would be local to helpers and middle-layers like
>> TTM, and the corresponding drivers.  The only cross-driver-like
>> visibility would be that the dma-buf move_notify() callback would not be
>> allowed to wait on dma-fences or something that depends on a dma-fence.
> Because we can't preempt (on some engines at least) we already have
> the requirement that cross driver buffer management can get stuck on a
> dma-fence. Not even taking into account the horrors we do with
> userptr, which are cross driver no matter what. Limiting move_notify
> to memory fences only doesn't work, since the pte clearing might need
> to wait for a dma_fence first. Hence this becomes a full end-of-batch
> fence, not just a limited kernel-internal memory fence.

For non-preemptible hardware the memory fence typically *is* the
end-of-batch fence. (Unless, as documented, there is a scheduler
consuming sync-file dependencies, in which case the memory fence wait
needs to be able to break out of that wait.) The key thing is not that
we can break out of execution, but that we can break out of
dependencies, since once we're executing, all dependencies (modulo
semaphores) are already fulfilled. That's what eliminates the
deadlocks.
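
To make that concrete, here is a minimal sketch of the kind of
dependency wait I mean (all names are invented for illustration, this
is not an existing driver API): the wait on dma-fence dependencies can
be aborted when eviction needs the memory, whereas a job that is
already executing has no unfulfilled dependencies left to block on.

#include <linux/dma-fence.h>
#include <linux/errno.h>
#include <linux/jiffies.h>

/*
 * Illustrative only -- "example_job" and the eviction flag are made-up
 * names.  Dependencies are ordinary dma-fences; the wait on them can be
 * broken out of, while the job's own end-of-batch fence doubles as the
 * memory fence once the job is running.
 */
struct example_job {
        struct dma_fence **deps;        /* sync-file / dma-resv deps */
        unsigned int num_deps;
        struct dma_fence *end_of_batch; /* doubles as the memory fence */
};

static int example_wait_deps(struct example_job *job, bool *evict_requested)
{
        unsigned int i = 0;

        while (i < job->num_deps) {
                long r;

                /* Eviction must be able to break the dependency wait. */
                if (READ_ONCE(*evict_requested))
                        return -EAGAIN;

                r = dma_fence_wait_timeout(job->deps[i], true,
                                           msecs_to_jiffies(100));
                if (r < 0)
                        return r;       /* interrupted */
                if (r > 0)
                        i++;            /* dependency signaled, move on */
                /* r == 0: timed out, re-check the eviction flag and retry */
        }
        return 0;
}

The -EAGAIN back-off is roughly the same idea as the amdkfd eviction
fences mentioned above, and none of it would need to be visible across
drivers.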

>
> That's kinda why I think the only reasonable option is to toss in the
> towel and declare dma-fence to be the memory fence (and suck up all
> the consequences of that decision as uapi, which is kinda where we
> are), and construct something new and entirely free-wheeling for
> userspace fencing. But only for engines that allow enough preemption /
> gpu page faulting to make that possible. Free-wheeling userspace
> fences / gpu semaphores or whatever you want to call them (on Windows
> I think it's called a monitored fence) only work if you can preempt to
> decouple the memory fences from your gpu command execution.
>
> There's the in-between step of just decoupling the batchbuffer
> submission prep for hw without any preempt (but a scheduler), but that
> seems kinda pointless. Modern execbuf should be O(1) fastpath, with
> all the allocation/mapping work pulled out ahead. vk exposes that
> model directly to clients, GL drivers could use it internally too, so
> I see zero value in spending lots of time engineering very tricky
> kernel code just for old userspace. Much more reasonable to do that in
> userspace, where we have real debuggers and no panics about security
> bugs (or well, a lot less, webgl is still a thing, but at least
> browsers realized you need to contain that completely).

Sure, it's definitely a big chunk of work. I think the big win would be 
allowing memory allocation in dma-fence critical sections. But I 
completely buy the above argument. I just wanted to point out that many 
of the dma-fence restrictions are IMHO fixable, should we need to do 
that for whatever reason.
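
To spell out what that win would look like: today, anything that runs
between publishing a dma-fence and signaling it must not allocate
memory in a way that can recurse into reclaim, since reclaim
(shrinkers, mmu notifiers) may itself end up waiting on dma-fences.
A sketch of the restriction as it stands, with an invented function
name:

#include <linux/dma-fence.h>
#include <linux/slab.h>
#include <linux/sizes.h>

static void example_fence_critical_work(struct dma_fence *fence)
{
        void *tmp;

        /* kmalloc(SZ_4K, GFP_KERNEL) here could deadlock via reclaim. */
        tmp = kmalloc(SZ_4K, GFP_NOWAIT);
        if (!tmp) {
                /* ... fall back to something preallocated instead ... */
        }

        kfree(tmp);
        dma_fence_signal(fence);
}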

/Thomas


>
> Cheers, Daniel
>
>> So with that in mind, I don't foresee engines with different
>> capabilities on the same card being a problem.
>>
>> /Thomas

