[Linaro-mm-sig] [PATCH v3 1/2] habanalabs: define uAPI to export FD for DMA-BUF

Christian König christian.koenig at amd.com
Tue Jun 22 15:48:10 UTC 2021


Am 22.06.21 um 17:40 schrieb Jason Gunthorpe:
> On Tue, Jun 22, 2021 at 05:29:01PM +0200, Christian König wrote:
>> [SNIP]
>> No, absolutely not. NVidia GPUs work exactly the same way.
>>
>> And you have tons of similar cases in embedded and SoC systems where
>> intermediate memory between devices isn't directly addressable with the CPU.
> None of that is PCI P2P.
>
> It is all some specialty direct transfer.
>
> You can't reasonably call dma_map_resource() on non-CPU-mapped memory
> for instance, what address would you pass?
>
> Do not confuse "I am doing transfers between two HW blocks" with PCI
> Peer to Peer DMA transfers - the latter is a very narrow subcase.
>
>> No, just using the dma_map_resource() interface.
> Ick, but yes that does "work". Logan's series is better.

No it isn't. It makes devices depend on allocating struct pages for 
their BARs, which is neither necessary nor desired.

How do you prevent direct I/O on those pages, for example?

Allocating struct pages has its use cases, for example for exposing 
VRAM as memory for HMM. But that is something very specific and should 
not limit PCIe P2P DMA in general.
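
For illustration, a minimal sketch of such a struct-page-less mapping; 
the device pointers, the BAR index and the offset are hypothetical 
placeholders, not code from any real driver:

#include <linux/pci.h>
#include <linux/dma-mapping.h>

/*
 * Sketch only: map a window of a peer device's BAR for DMA without
 * any struct page backing. The caller gets back a bus address it can
 * program into dma_dev's DMA engine.
 */
static dma_addr_t map_peer_bar(struct device *dma_dev,
                               struct pci_dev *peer_pdev,
                               unsigned long offset, size_t size)
{
        /* Physical address of the peer BAR window; no CPU mapping needed. */
        phys_addr_t phys = pci_resource_start(peer_pdev, 0) + offset;
        dma_addr_t addr;

        addr = dma_map_resource(dma_dev, phys, size,
                                DMA_BIDIRECTIONAL, 0);
        if (dma_mapping_error(dma_dev, addr))
                return DMA_MAPPING_ERROR;

        return addr;
}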

>> [SNIP]
>> Well that is certainly not true. I'm just not sure whether that works
>> with all IOMMU drivers, though.
> Huh? All the IOMMU interfaces except dma_map_resource() are
> struct page based. dma_map_resource() is slow and limited in what it
> can do.

Yeah, but that is exactly the functionality we need. And as far as I can 
see that is also what Oded wants here.

Mapping stuff into userspace and then doing direct DMA to it is only a 
very limited use case, and we need to be more flexible here.
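
As a rough sketch (not the actual habanalabs code; the buffer structure 
and its fields are made up), the exporter side of that flexibility looks 
roughly like this: the dma-buf map callback hands the importer an 
sg_table carrying only DMA addresses from dma_map_resource(), so neither 
struct pages nor CPU mappings are ever involved:

#include <linux/dma-buf.h>
#include <linux/dma-mapping.h>
#include <linux/err.h>
#include <linux/scatterlist.h>
#include <linux/slab.h>

/* Hypothetical exporter state describing a device-local allocation. */
struct example_buf {
        phys_addr_t bar_phys;   /* device-local physical address */
        size_t size;
};

static struct sg_table *example_map_dma_buf(struct dma_buf_attachment *attach,
                                            enum dma_data_direction dir)
{
        struct example_buf *buf = attach->dmabuf->priv;
        struct sg_table *sgt;
        dma_addr_t addr;

        sgt = kzalloc(sizeof(*sgt), GFP_KERNEL);
        if (!sgt)
                return ERR_PTR(-ENOMEM);

        if (sg_alloc_table(sgt, 1, GFP_KERNEL)) {
                kfree(sgt);
                return ERR_PTR(-ENOMEM);
        }

        addr = dma_map_resource(attach->dev, buf->bar_phys, buf->size,
                                dir, 0);
        if (dma_mapping_error(attach->dev, addr)) {
                sg_free_table(sgt);
                kfree(sgt);
                return ERR_PTR(-ENOMEM);
        }

        /* Only the DMA address/length are filled in; there is no page. */
        sg_dma_address(sgt->sgl) = addr;
        sg_dma_len(sgt->sgl) = buf->size;

        return sgt;
}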

Christian.

>
> Jason


