[RFC PATCH 01/12] dma-buf: Introduce dma_buf_get_pfn_unlocked() kAPI
Christian König
christian.koenig at amd.com
Wed Jan 15 14:30:47 UTC 2025
Sending it as text mail to the mailing lists once more :(
Christian.
Am 15.01.25 um 15:29 schrieb Christian König:
> Am 15.01.25 um 15:14 schrieb Jason Gunthorpe:
>> On Wed, Jan 15, 2025 at 02:46:56PM +0100, Christian König wrote:
>> [SNIP]
>>>> Yeah, but it's private to the exporter. And a very fundamental rule of
>>>> DMA-buf is that the exporter is the one in control of things.
>> I've said a few times now that I don't think we can build the kind of
>> buffer sharing framework we need to solve all the problems with this
>> philosophy. It is also inefficient with the new DMA API.
>>
>> I think it is backwards looking and we need to move forwards with
>> fixing the fundamental API issues which motivated that design.
>
> And that is something I see completely differently.
>
> Those rules are not something we came up with because of some
> limitation of the DMA-API, but rather from experience working with
> different device drivers and especially their developers.
>
> Applying and enforcing those restrictions is an absolute must-have
> for extending DMA-buf.
>
>>>> So, for example, it is illegal for an importer to set up CPU mappings
>>>> to a buffer. That's why we have dma_buf_mmap() which redirects mmap()
>>>> requests from the importer to the exporter.
>> Like this: in a future no-scatterlist world I would want to make this
>> safe. The importer will have enough information to know whether CPU
>> mappings exist and under what conditions they are safe to use.
>>
>> There is no reason the importer should not be able to CPU access
>> memory that is HW permitted to be CPU accessible.
>>
>> If the importer needs CPU access and the exporter cannot provide it
>> then the attachment simply fails.
>>
>> Saying CPU access is banned 100% of the time is not a helpful position
>> when we have use cases that need it.
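>>
>> Roughly, with today's attach path that refusal would just propagate
>> out of dma_buf_attach() - a sketch only, my_import() is a made-up
>> helper and the CPU-access capability check itself does not exist yet:
>>
>> #include <linux/dma-buf.h>
>> #include <linux/err.h>
>>
>> static int my_import(struct device *dev, struct dma_buf *dmabuf)
>> {
>>         struct dma_buf_attachment *attach;
>>
>>         /* The exporter can already refuse the attachment; a CPU-access
>>          * capability mismatch would simply be one more reason to do so. */
>>         attach = dma_buf_attach(dmabuf, dev);
>>         if (IS_ERR(attach))
>>                 return PTR_ERR(attach);
>>
>>         /* ... map and use the buffer via the usual dma-buf ops ... */
>>         return 0;
>> }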
>
> That approach is an absolute no-go from my side.
>
> We have very intentionally implemented the restriction that importers
> can't CPU-access a DMA-buf, for both kernel and userspace, without
> going through the exporter, because of design requirements and a lot
> of negative experience with exactly this approach.
>
> This is not something that is up for discussion in any way.
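>
> For reference, the existing redirection for CPU mappings looks roughly
> like this on the importer side (sketch only; my_obj and
> my_importer_mmap() are made-up names, dma_buf_mmap() is the existing
> kAPI mentioned above):
>
> #include <linux/dma-buf.h>
> #include <linux/fs.h>
>
> struct my_obj {                         /* hypothetical importer object */
>         struct dma_buf *dmabuf;         /* the imported buffer */
> };
>
> static int my_importer_mmap(struct file *file, struct vm_area_struct *vma)
> {
>         struct my_obj *obj = file->private_data;
>
>         /* Never set up the CPU mapping ourselves; dma_buf_mmap() hands
>          * the request to the exporter's mmap implementation. */
>         return dma_buf_mmap(obj->dmabuf, vma, 0);
> }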
>
>>>> As far as I can see that is really not a use case which fits DMA-buf
>>>> in any way.
>> I really don't want to make a dmabuf2 - everyone would have to
>> implement it, including all the GPU drivers if they want to work with
>> RDMA. I don't think this makes any sense compared to incrementally
>> evolving dmabuf with more optional capabilities.
>
> The point is that a dmabuf2 would most likely be rejected as well or
> otherwise run into the same issues we have seen before.
>
>>>>>> That sounds more like something for the TEE driver instead of anything
>>>>>> DMA-buf should be dealing with.
>>>>> Has nothing to do with TEE.
>>>> Why?
>> The Linux TEE framework is not used as part of confidential compute.
>>
>> CC already has guest memfd for holding its private CPU memory.
>
> Where is that coming from and how is it used?
>
>> This is about confidential MMIO memory.
>
> Who is the exporter and who is the importer of the DMA-buf in this use
> case?
>
>> This is also not just about the KVM side; the VM side also has issues
>> with DMABUF and CC - only co-operating devices can interact with the
>> VM side's "encrypted" memory, and there needs to be a negotiation as
>> part of all buffer setup about what the mutual capability is. :\
>> swiotlb hides some of this sometimes, but confidential P2P is
>> currently unsolved.
>
> Yes, and how that is supposed to happen with DMA-buf is documented
> by now.
>
> As far as I can see there is not much of a new approach here.
>
> Regards,
> Christian.
>
>> Jason
>