[RFC] Add DMA_RESV_USAGE flags

Christian König ckoenig.leichtzumerken at gmail.com
Thu May 20 10:50:37 UTC 2021


Am 20.05.21 um 09:55 schrieb Daniel Vetter:
> On Wed, May 19, 2021 at 5:48 PM Michel Dänzer <michel at daenzer.net> wrote:
>> On 2021-05-19 5:21 p.m., Jason Ekstrand wrote:
>>> On Wed, May 19, 2021 at 5:52 AM Michel Dänzer <michel at daenzer.net> wrote:
>>>> On 2021-05-19 12:06 a.m., Jason Ekstrand wrote:
>>>>> On Tue, May 18, 2021 at 4:17 PM Daniel Vetter <daniel at ffwll.ch> wrote:
>>>>>> On Tue, May 18, 2021 at 7:40 PM Christian König
>>>>>> <ckoenig.leichtzumerken at gmail.com> wrote:
>>>>>>> Am 18.05.21 um 18:48 schrieb Daniel Vetter:
>>>>>>>> On Tue, May 18, 2021 at 2:49 PM Christian König
>>>>>>>> <ckoenig.leichtzumerken at gmail.com> wrote:
>>>>>>>>
>>>>>>>>> And as long as we are all inside amdgpu we also don't have any oversync,
>>>>>>>>> the issue only happens when we share dma-bufs with i915 (radeon and
>>>>>>>>> AFAIK nouveau does the right thing as well).
>>>>>>>> Yeah because then you can't use the amdgpu dma_resv model anymore and
>>>>>>>> have to use the one atomic helpers use. Which is also the one that
>>>>>>>> e.g. Jason is threatening to bake in as uapi with his dma_buf ioctl,
>>>>>>>> so as soon as that lands and someone starts using it, something has to
>>>>>>>> adapt _anytime_ you have a dma-buf hanging around. Not just when it's
>>>>>>>> shared with another device.
>>>>>>> Yeah, and that is exactly the reason why I will NAK this uAPI change.
>>>>>>>
>>>>>>> This doesn't work for amdgpu at all, for the reasons outlined above.
>>>>>> Uh that's really not how uapi works. "my driver is right, everyone
>>>>>> else is wrong" is not how cross driver contracts are defined. If that
>>>>>> means a perf impact until you've fixed your rules, that's on you.
>>>>>>
>>>>>> Also you're a few years too late with nacking this, it's already uapi
>>>>>> in the form of the dma-buf poll() support.
>>>>> ^^  My fancy new ioctl doesn't expose anything that isn't already
>>>>> there.  It just lets you take a snap-shot of a wait instead of doing
>>>>> an active wait which might end up with more fences added depending on
>>>>> interrupts and retries.  The dma-buf poll waits on all fences for
>>>>> POLLOUT and only the exclusive fence for POLLIN.  It's already uAPI.
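[Editor's note: the poll() semantics Jason describes can be sketched from userspace roughly as follows. This is an illustrative sketch, not code from the thread; the helper name is mine, and the demo in the usage note uses an ordinary pipe because a real dma-buf fd requires a GPU buffer. On a dma-buf fd the kernel maps the events to fences: POLLIN blocks until the exclusive (write) fence signals, POLLOUT until all fences signal.]

```python
import select

def wait_for_fd(fd, events, timeout_ms):
    """Generic poll() wrapper.

    On a dma-buf file descriptor, the events map to fences:
    POLLIN waits only on the exclusive (write) fence, while
    POLLOUT waits on all fences, shared readers included.
    Returns True if the fd became ready, False on timeout.
    """
    p = select.poll()
    p.register(fd, events)
    return bool(p.poll(timeout_ms))
```

A compositor would call `wait_for_fd(dmabuf_fd, select.POLLIN, 0)` as a non-blocking "has the client finished rendering?" check; the caveat raised in this thread is that amdgpu's fence placement makes that answer differ from other drivers.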
>>>> Note that the dma-buf poll support could be useful to Wayland compositors for the same purpose as Jason's new ioctl (only using client buffers which have finished drawing for an output frame, to avoid missing a refresh cycle due to client drawing), *if* it didn't work differently with amdgpu.
>>>>
>>>> Am I understanding correctly that Jason's new ioctl would also work differently with amdgpu as things stand currently? If so, that would be a real bummer and might hinder adoption of the ioctl by Wayland compositors.
>>> My new ioctl has identical semantics to poll().  It just lets you take
>>> a snapshot in time to wait on later instead of waiting on whatever
>>> happens to be set right now.  IMO, having identical semantics to
>>> poll() isn't something we want to change.
>> Agreed.
>>
>> I'd argue then that making amdgpu poll semantics match those of other drivers is a pre-requisite for the new ioctl, otherwise it seems unlikely that the ioctl will be widely adopted.
> This seems backwards, because that means useful improvements in all
> other drivers are stalled until amdgpu is fixed.

Well, there is nothing to fix in amdgpu; what we need to do is come up 
with a DMA-buf implicit syncing model which works for everyone.

I pointed this problem out at FOSDEM roughly six years ago, before 
DMA-buf was even merged upstream and well before amdgpu existed. The 
response was: yeah, maybe we need to look at this as well.

Over the years I have mentioned at least five times that this isn't 
going to work in some situations, and I have proposed different 
approaches to fix it.

And you still have the nerve to tell me that this isn't a problem and 
we should fix amdgpu instead? Sorry, but I'm really running out of ways 
to explain why this doesn't work for everybody.

It's true that amdgpu wants to be special, but the fundamental problem 
is that we designed DMA-buf implicit sync only around the needs of the 
DRM drivers of that time, instead of stepping back and asking what 
approach would work for everyone.

You just need to apply my FOSDEM example of ring buffers in a single 
BO to the DMA-buf implicit sync model, and you immediately see how it 
falls apart.
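[Editor's note: the ring-buffer argument can be made concrete with a toy model. This is my construction for illustration, not kernel code: under the current implicit-sync rules a BO carries a single exclusive fence, so a second writer must wait on the first writer's fence even when the two writes touch disjoint regions of the same buffer.]

```python
class Resv:
    """Toy per-buffer reservation object: one exclusive write fence."""
    def __init__(self):
        self.exclusive = None  # fence of the last writer, if any

log = []  # records waits and writes in submission order

def submit_write(resv, engine, region):
    """Queue a write; implicit sync forces a wait on the previous
    exclusive fence regardless of whether the regions overlap."""
    if resv.exclusive is not None:
        log.append(("wait", engine, resv.exclusive))
    fence = (engine, region)
    log.append(("write", engine, region))
    resv.exclusive = fence

buf = Resv()
submit_write(buf, "engine_A", (0, 4096))     # first half of the BO
submit_write(buf, "engine_B", (4096, 8192))  # disjoint second half
# engine_B still stalls on engine_A's fence: oversynchronization
```

The two submissions touch non-overlapping byte ranges, yet the model serializes them, which is exactly the failure mode described for ring buffers packed into a single BO.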

> I think we need agreement on what the rules are, reasonable plan to
> get there, and then that should be enough to unblock work in the wider
> community. Holding the community at large hostage because one driver
> is different is really not great.

Well, forcing drivers into a synchronization model that isn't ideal 
for their hardware isn't great either.

The patches I provided at least clean up the naming convention and 
provide clean interfaces for iterating over the shared fence container. 
On top of that, use-case-driven APIs can be implemented.
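[Editor's note: the direction of the RFC in the subject line can be sketched as follows. This is a hypothetical Python model of the idea only; the names `Usage` and `Resv.for_each_fence`, the specific levels, and their ordering are my assumptions, not the actual kernel API. Fences are stored with a usage class, and a use-case-driven query iterates every fence at least as strong as the requested usage.]

```python
from enum import IntEnum

class Usage(IntEnum):
    # Hypothetical usage classes, ordered strongest to weakest;
    # waiting with usage U means waiting on every fence whose
    # usage value is <= U.
    KERNEL = 0   # kernel-internal work, e.g. memory management moves
    WRITE = 1    # implicit-sync writers
    READ = 2     # implicit-sync readers

class Resv:
    """Toy reservation object storing (usage, fence) pairs."""
    def __init__(self):
        self._fences = []

    def add_fence(self, usage, fence):
        self._fences.append((usage, fence))

    def for_each_fence(self, usage):
        # Use-case-driven iteration: yield every fence that a waiter
        # with the given usage must respect.
        for u, f in self._fences:
            if u <= usage:
                yield f
```

For example, a waiter asking for `Usage.WRITE` would see kernel and writer fences but skip reader fences, which is what lets resource management and implicit sync be untangled per use case.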

And yes, I'm perfectly aware that this means we need to touch all 
drivers and memory management handlers, but I'm pretty sure that 
untangling implicit synchronization from resource management is worth 
the effort, no matter what changes to the sync model we end up making.

Regards,
Christian.


> I've just finished the subsystem review of everything, and thus far
> only found some minor bugs without practical significance. I'll fix
> those and then send out a series.
> -Daniel
