[RFC] Implicit vs explicit user fence sync

Christian König ckoenig.leichtzumerken at gmail.com
Mon May 10 18:12:31 UTC 2021


Am 04.05.21 um 17:11 schrieb Daniel Vetter:
> On Tue, May 04, 2021 at 04:26:42PM +0200, Christian König wrote:
>> Hi Daniel,
>>
>> Am 04.05.21 um 16:15 schrieb Daniel Vetter:
>>> Hi Christian,
>>>
>>> On Tue, May 04, 2021 at 03:27:17PM +0200, Christian König wrote:
>>>> Hi guys,
>>>>
>>>> with this patch set I want to look into how much additional work it
>>>> would be to support implicit sync compared to only explicit sync.
>>>>
>>>> It turned out that this is much simpler than expected, since the only
>>>> addition is that before a command submission or flip the kernel and
>>>> classic drivers would need to wait for the user fence to signal before
>>>> taking any locks.
>>> It's a lot more than that, I think:
>>> - sync_file/drm_syncobj still need to be supported somehow
>> You need that with explicit fences as well.
>>
>> I'm just concentrating on what extra burden implicit sync would get us.
> It's not just implicit sync. Currently the best approach we have for
> explicit sync is hiding the fences in drm_syncobj, because for that all the
> work with intentional stall points and a userspace submit thread already
> exists.
>
> None of this work has been done for sync_file. And looking at how much
> work it was to get drm_syncobj going, that will be anything but easy.

I don't think we will want this for sync_file in the first place.
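
To make the core idea from the cover letter a bit more concrete, here is a
toy userspace model of it (nothing driver specific, all names are made up
for illustration): the user fence is just a 64-bit seqno in memory, and the
submission path polls it *before* taking any locks.

#include <pthread.h>
#include <sched.h>
#include <stdatomic.h>
#include <stdbool.h>
#include <stdint.h>

struct user_fence {
    _Atomic uint64_t *addr; /* memory written by userspace or the HW */
    uint64_t value;         /* seqno the fence is considered signaled at */
};

static bool user_fence_signaled(const struct user_fence *f)
{
    return atomic_load_explicit(f->addr, memory_order_acquire) >= f->value;
}

static pthread_mutex_t submission_lock = PTHREAD_MUTEX_INITIALIZER;

static void submit_flip(const struct user_fence *in_fence)
{
    /* Wait before taking any lock, so nothing is held while we depend on
     * something only userspace (or the hardware) can resolve. */
    while (!user_fence_signaled(in_fence))
        sched_yield();

    pthread_mutex_lock(&submission_lock);
    /* ... program the hardware / write the flip ... */
    pthread_mutex_unlock(&submission_lock);
}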

>>> - we need userspace to handle the stall in a submit thread at least
>>> - there's nothing here that sets the sync object
>>> - implicit sync isn't just execbuf, it's everything. E.g. the various
>>>     wait_bo ioctl also need to keep working, including timeout and
>>>     everything
>> Good point, but that should be relatively easy to add as well.
>>
>>> - we can't stall in atomic kms where you're currently stalling, that's for
>>>     sure. The uapi says "we're not stalling for fences in there", and you're
>>>     breaking that.
>> Again as far as I can see we run into the same problem with explicit sync.
>>
>> So the question is: where could we block on user fences for atomic modesets
>> in general?
> Nah, I have an idea. But it only works if userspace is aware, because the
> rules are essentially:
>
> - when you supply a userspace in-fence, then you only get a userspace
>    out-fence
> - mixing in-fences between dma-fence and user fence is ok
> - mixing out-fences isn't
>
> And we currently do have a sync_file out-fence. So it's not possible to
> support implicit user fences in atomic in a way which doesn't break the
> uapi somewhere.
>
> Doing the explicit user fence support first will make that very obvious.
>
> And that's just the one ioctl I know is big trouble; I'm sure we'll find
> more funny corner cases when we roll out explicit user fencing.

I think we can just ignore sync_file. As far as I'm concerned, that UAPI
is pretty much dead.

What we should support is drm_syncobj, but even that only as an in-fence,
since that's what our hardware supports.
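
For the submit-thread side that would roughly look like the sketch below:
block on the drm_syncobj in-fence in userspace before pushing the actual
user fence based job. drmSyncobjWait() is the existing libdrm wrapper;
submit_user_fence_job() is made up and just stands in for whatever the
driver-specific submission ends up looking like.

#include <stdint.h>
#include <xf86drm.h>

/* Hypothetical, driver-specific submission which only deals in user fences. */
int submit_user_fence_job(int drm_fd);

int submit_with_syncobj_in_fence(int drm_fd, uint32_t syncobj_handle,
                                 int64_t timeout_nsec)
{
    uint32_t first_signaled;
    int ret;

    /* Stall in the userspace submit thread until the in-fence is signaled,
     * instead of stalling in the kernel while holding locks. */
    ret = drmSyncobjWait(drm_fd, &syncobj_handle, 1, timeout_nsec,
                         DRM_SYNCOBJ_WAIT_FLAGS_WAIT_FOR_SUBMIT,
                         &first_signaled);
    if (ret)
        return ret;

    return submit_user_fence_job(drm_fd);
}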

> Another one that looks very sketchy right now is buffer sharing between
> different userspace drivers, like compute <-> media (if you have some
> fancy AI pipeline in your media workload, as an example).

Yeah, we are certainly going to get that. But only inside the same 
driver, so not much of a problem.

>
>>> - ... at this point I stopped pondering but there's definitely more
>>>
>>> Imo the only way we'll ever get this complete is if we do the following:
>>> 1. roll out implicit sync with userspace fences on a driver-by-driver basis
> 		s/implicit/explicit/
>
> But I think you got that.
>
>>>      1a. including all the winsys/modeset stuff
>> Completely agree, that's why I've split that up into individual patches.
>>
>> I'm also fine if drivers can just opt out of user fence based
>> synchronization and we return an error from dma_buf_dynamic_attach() if some
>> driver says it can't handle that.
> Yeah, but that boils down to us just breaking those use-cases, which is
> exactly what you're trying to avoid by rolling out implicit user fences, I
> think.

But we can add support to all drivers as necessary.
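
As a sketch of what I mean by opting out: at attach time the exporter could
check a capability of the importing driver and just fail the attach. The
supports_user_fences flag below is invented for illustration; only
dma_buf_dynamic_attach() itself exists today.

#include <errno.h>
#include <stdbool.h>

/* Toy, standalone model of the attach-time opt-out. */
struct importer_caps {
    bool supports_user_fences;  /* invented capability bit */
};

static int check_attach(bool exporter_uses_user_fences,
                        const struct importer_caps *importer)
{
    /* Refuse the attachment instead of silently breaking synchronization
     * when the exporter's buffers are synced with user fences but the
     * importing driver can't handle them. */
    if (exporter_uses_user_fences && !importer->supports_user_fences)
        return -EOPNOTSUPP;

    return 0;
}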

>
>>> 2. roll out support for userspace fences to drm_syncobj timeline for
>>>      interop, both across process/userspace and across drivers
>>>      2a. including all the winsys/modeset stuff, but hopefully that's
>>>          largely solved with 1. already.
>> Correct, but again we need this for explicit fencing as well.
>>
>>> 3. only then try to figure out how to retroshoehorn this into implicit
>>>      sync, and whether that even makes sense.
>>>
>>> Because doing 3 before we've done 1&2 for at least 2 drivers (2 because
>>> interop fun across drivers) is just praying that this time around we're
>>> not collectively idiots and can correctly predict the future. That never
>>> worked :-)
>>>
>>>> For this prototype this patch set doesn't implement any user fence
>>>> synchronization at all, but just assumes that faulting user pages is
>>>> sufficient to make sure that we can wait for user space to finish
>>>> submitting the work. If necessary this can be made even stricter; the
>>>> only use case I could find which blocks this is the radeon driver, and
>>>> that should be handleable.
>>>>
>>>> This of course doesn't give you the same semantics as the classic
>>>> implicit sync to guarantee that you have exclusive access to a buffer,
>>>> but this is also not necessary.
>>>>
>>>> So I think the conclusion should be that we don't need to concentrate on
>>>> implicit vs. explicit sync, but rather how to get the synchronization
>>>> and timeout signalling figured out in general.
>>> I'm not sure what exactly you're proving here aside from "it's possible to
>>> roll out a function with ill-defined semantics to all drivers". This
>>> really is a lot harder than just this one function and just this one patch
>>> set.
>> No it isn't. The hard part is getting the user sync stuff up in general.
>>
>> Adding implicit synchronization on top of that is then rather trivial.
> Well that's what I disagree with, since I already see some problems that I
> don't think we can overcome (the atomic ioctl is one). And that's with us
> only having a fairly theoretical understanding of the overall situation.

But how should we then ever support user fences with the atomic IOCTL?

We can't wait in user space since that will disable the support for 
waiting in the hardware.
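
Just to spell out what your rule above would mean for the atomic check
phase, roughly something like the sketch below. None of these structures or
checks exist; they are only meant to illustrate the restriction on mixing
out-fences.

#include <errno.h>
#include <stdbool.h>

struct atomic_commit_fences {
    bool has_user_in_fence;    /* some plane is synced by a user fence */
    bool wants_dma_fence_out;  /* userspace asked for a sync_file out-fence */
};

static int check_fence_mixing(const struct atomic_commit_fences *f)
{
    /* Mixing dma-fence and user fence *in*-fences is fine. */

    /* But once a user fence is an input, a dma-fence out-fence can no
     * longer be guaranteed to signal in bounded time, so reject it. */
    if (f->has_user_in_fence && f->wants_dma_fence_out)
        return -EINVAL;

    return 0;
}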

Regards,
Christian.

>
> Like here at Intel we have internal code for compute, and we're starting
> to hit some interesting cases with interop with media already, but that's
> it. Nothing even close to desktop/winsys/kms, and that's where I expect
> all the pain will be.
>
> Cheers, Daniel


