[PATCH 1/6] dma-buf: add dynamic DMA-buf handling v13

Koenig, Christian Christian.Koenig at amd.com
Fri Jul 19 12:05:36 UTC 2019


Am 19.07.19 um 11:31 schrieb Daniel Vetter:
> On Fri, Jul 19, 2019 at 09:14:05AM +0000, Koenig, Christian wrote:
>> Am 19.07.19 um 10:57 schrieb Daniel Vetter:
>>> On Tue, Jul 16, 2019 at 04:21:53PM +0200, Christian König wrote:
>>>> Am 26.06.19 um 14:29 schrieb Daniel Vetter:
>>>> [SNIP]
>>> Well my mail here preceded the entire amdkfd eviction_fence discussion.
>>> With that I'm not sure anymore, since we don't really need two approaches
>>> to the same thing. And if the plan is to move amdkfd over from the
>>> eviction_fence trick to using the invalidate callback here, then I think
>>> we might need some clarifications on what exactly that means.
>> Mhm, I thought that this was orthogonal. I mean the invalidation
>> callbacks for a buffer are independent of how the driver is going to
>> use it in the end.
>>
>> Or do you mean that we could use fences and save ourselves from adding
>> yet another mechanism for the same signaling thing?
>>
>> That could of course work, but I had the impression that you are not
>> really in favor of that.
> It won't, since you can either use the fence as the invalidate callback,
> or as a fence (for implicit sync). But not both.

Why not both? I mean implicit sync is an artifact you need to handle 
separately anyway.
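
For reference, the eviction_fence trick boils down to overloading the 
fence interface: the fence sits in the reservation object like any 
other fence, but its real job is to notify the driver that somebody is 
about to touch the buffer. A minimal sketch of the idea, with made-up 
identifiers that do not match the actual amdkfd code:

#include <linux/dma-fence.h>
#include <linux/workqueue.h>

/* Hypothetical importer-private fence, loosely modeled on the amdkfd
 * eviction_fence: it is installed in the buffer's reservation object,
 * and the exporter "announces" an upcoming move simply by waiting on
 * it, which ends up in enable_signaling() below.
 */
struct my_eviction_fence {
	struct dma_fence base;
	spinlock_t lock;
	struct work_struct evict_work;	/* unmaps buffer, restarts queues */
};

static const char *my_fence_get_driver_name(struct dma_fence *f)
{
	return "my_driver";
}

static const char *my_fence_get_timeline_name(struct dma_fence *f)
{
	return "my_eviction";
}

/* The first waiter doubles as the invalidation notification. This is
 * also why such a fence can't additionally serve implicit sync: any
 * implicit-sync wait would kick off an eviction as a side effect.
 */
static bool my_fence_enable_signaling(struct dma_fence *f)
{
	struct my_eviction_fence *e =
		container_of(f, struct my_eviction_fence, base);

	schedule_work(&e->evict_work);
	return true;	/* signaled once the eviction work has run */
}

static const struct dma_fence_ops my_eviction_fence_ops = {
	.get_driver_name	= my_fence_get_driver_name,
	.get_timeline_name	= my_fence_get_timeline_name,
	.enable_signaling	= my_fence_enable_signaling,
};

The callback proposed in this patch carries the same notification 
explicitly instead of hijacking the wait path.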

> But I also don't think it's a good idea to have 2 invalidation mechanisms,
> and since we do have one merged in-tree already, it would be good to prove
> that the new one is up to the existing challenge.

Ok, how to proceed then? Should I fix up the implicit syncing of fences 
first? I've got a couple of ideas for that as well.

This way we won't have any driver-specific definition of what the fences 
in a reservation object mean anymore.
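
As a baseline, a sketch of what the common definition could look like, 
assuming the reservation_object interface as it stands today (the 
helper itself is made up for illustration, not an existing kernel 
function): one exclusive fence for writers, shared fences for readers, 
and nothing driver-specific beyond that:

#include <linux/reservation.h>

/* Illustrative helper: attach a fence with the nominal implicit-sync
 * meaning. Writers take the exclusive slot, readers add a shared
 * fence after reserving room for it.
 */
static int add_implicit_sync_fence(struct reservation_object *resv,
				   struct dma_fence *fence, bool is_write)
{
	int ret;

	ret = reservation_object_lock(resv, NULL);
	if (ret)
		return ret;

	if (is_write) {
		/* All previously added fences must complete first. */
		reservation_object_add_excl_fence(resv, fence);
	} else {
		/* Reads may run concurrently with each other. */
		ret = reservation_object_reserve_shared(resv, 1);
		if (!ret)
			reservation_object_add_shared_fence(resv, fence);
	}

	reservation_object_unlock(resv);
	return ret;
}

Today every driver attaches its own meaning on top of that, which is 
what I would like to get rid of first.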

> For context: I spent way too much time reading ttm, amdgpu/kfd and i915-gem
> code and my overall impression is that everyone's just running around in
> opposite directions and it's one huge hairball of a mess. With a pretty
> even distribution of equally "eek this is horrible" but also "wow this is
> much better than what the other driver does". So that's why I'm even more
> on the "are we sure this is the right thing" train.

Totally agree on that, but we should also not repeat the mistake we have 
seen on Windows of trying to force all drivers into a common memory 
management scheme.

That didn't work out that well in the end, and I would rather go down 
the route of moving logic into separate components and falling back to 
driver-specific logic where we find that the common code doesn't work.

Christian.

> -Daniel


