[RFC PATCH 01/12] dma-buf: Introduce dma_buf_get_pfn_unlocked() kAPI
Christian König
christian.koenig at amd.com
Wed Jan 15 13:46:56 UTC 2025
Explicitly replying as text mail once more.
I just love the AMD mail servers :(
Christian.
Am 15.01.25 um 14:45 schrieb Christian König:
> Am 15.01.25 um 14:38 schrieb Jason Gunthorpe:
>> On Wed, Jan 15, 2025 at 10:38:00AM +0100, Christian König wrote:
>>> Am 10.01.25 um 21:54 schrieb Jason Gunthorpe:
>>>> [SNIP]
>>>>>> I don't fully understand your use case, but I think it's quite likely
>>>>>> that we already have that working.
>>>> In Intel CC systems you cannot mmap secure memory or the system will
>>>> take a machine check.
>>>>
>>>> You have to convey secure memory inside a FD entirely within the
>>>> kernel, so that only an importer that understands how to handle secure
>>>> memory (such as KVM) ever uses it, avoiding machine checks.
>>>>
>>>> The patch series here should be thought of as the first part of this,
>>>> allowing PFNs to flow without VMAs. IMHO the second part of preventing
>>>> machine checks is not complete.
>>>>
>>>> In the approach I have been talking about, the secure memory would be
>>>> represented by a p2p_provider structure that is incompatible with
>>>> everything else. For instance, importers that can only do DMA would
>>>> simply fail cleanly when presented with this memory.
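>>>>
>>>> Very roughly, something like the sketch below. The p2p_provider layout
>>>> and the attach-time check are made up purely for illustration, this is
>>>> not the actual patch:
>>>>
>>>> #include <linux/dma-buf.h>
>>>> #include <linux/errno.h>
>>>>
>>>> /* Hypothetical: marks memory that only secure-memory-aware importers
>>>>  * (e.g. KVM) may touch. */
>>>> struct p2p_provider {
>>>>         bool secure_only;
>>>>         /* ... */
>>>> };
>>>>
>>>> /* Hypothetical helper on the importer side: a plain DMA importer just
>>>>  * refuses the attachment instead of machine checking later on. */
>>>> static int dma_only_importer_attach(struct dma_buf *dmabuf,
>>>>                                     struct p2p_provider *prov)
>>>> {
>>>>         if (prov && prov->secure_only)
>>>>                 return -EOPNOTSUPP;  /* fail cleanly, as described above */
>>>>
>>>>         /* ... otherwise the normal dma_buf_attach()/map path ... */
>>>>         return 0;
>>>> }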
>>> That's a rather interesting use case, but not something I consider a good
>>> fit for the DMA-buf interface.
>> To recast the problem statement, it is basically the same as your
>> device private interconnects. There are certain devices that
>> understand how to use this memory, and if they work together they can
>> access it.
>>
>>> See, DMA-buf is meant to be used between drivers to allow DMA access on
>>> shared buffers.
>> They are shared, just not with everyone :)
>>
>>> What you are trying to do here instead is to give memory, in the form of a
>>> file descriptor, to a client VM so it can do things like CPU mapping and
>>> hand it to drivers to do DMA etc...
>> How is this paragraph different from the first? It is a shared buffer
>> that we want real DMA and CPU "DMA" access to. It is "private" so
>> things that don't understand the interconnect rules cannot access it.
>
> Yeah, but it's private to the exporter. And a very fundamental rule of
> DMA-buf is that the exporter is the one in control of things.
>
> So, for example, it is illegal for an importer to set up CPU mappings to
> a buffer. That's why we have dma_buf_mmap(), which redirects mmap()
> requests from the importer to the exporter.
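>
> Roughly like this on the importer side (my_obj is a made-up example
> structure, only dma_buf_mmap() itself is the real interface):
>
> #include <linux/dma-buf.h>
> #include <linux/fs.h>
> #include <linux/mm.h>
>
> /* Hypothetical importer object holding a reference to the dma_buf. */
> struct my_obj {
>         struct dma_buf *dmabuf;
> };
>
> /* The importer never builds the CPU mapping itself, it forwards the
>  * request so that the exporter's vm_ops set up the actual mapping. */
> static int my_importer_mmap(struct file *file, struct vm_area_struct *vma)
> {
>         struct my_obj *obj = file->private_data;
>
>         return dma_buf_mmap(obj->dmabuf, vma, 0);
> }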
>
> In your use case here the importer wants to be in control and handle
> both CPU and DMA mappings itself.
>
> As far as I can see that is really not a use case which fits DMA-buf
> in any way.
>
>>> That sounds more like something for the TEE driver than anything DMA-buf
>>> should be dealing with.
>> Has nothing to do with TEE.
>
> Why?
>
> Regards,
> Christian.
>
>> Jason
>