[RFC PATCH 01/12] dma-buf: Introduce dma_buf_get_pfn_unlocked() kAPI
Simona Vetter
simona.vetter at ffwll.ch
Mon Jan 20 18:50:23 UTC 2025
On Mon, Jan 20, 2025 at 01:59:01PM -0400, Jason Gunthorpe wrote:
> On Mon, Jan 20, 2025 at 01:14:12PM +0100, Christian König wrote:
> What is going wrong with your email? You replied to Simona, but
> Simona Vetter <simona.vetter at ffwll.ch> is dropped from the To/CC
> list??? I added the address back, but seems like a weird thing to
> happen.
Might also be funny mailing list stuff, depending on how you get these. I
read mail over lore and pretty much ignore Cc (unless it's not also on
any list, since those tend to be security issues), because I get Cc'ed on
way too much stuff for that to be a useful signal.
> > Please take another look at what is proposed here. The function is called
> > dma_buf_get_pfn_*unlocked* !
>
> I don't think Simona and I are defending the implementation in this
> series. This series needs work.
Yeah this current series is good for kicking off the discussions, it's
defo not close to anything we can merge.
> We have been talking about what the implementation should be. I think
> we've all been clear on the idea that the DMA buf locking rules should
> apply to any description of the memory, regardless of if the address
> are CPU, DMA, or private.
>
> I agree that the idea of any "get unlocked" concept seems nonsensical
> and wrong within dmabuf.
>
> > Inserting PFNs into CPU (or probably also IOMMU) page tables has
> > different semantics than what DMA-buf usually does, because as soon as
> > the address is written into the page table it is made public.
>
> Not really.
>
> The KVM/CPU is fully compatible with move semantics, it has
> restartable page faults and can implement dmabuf's move locking
> scheme. It can use the resv lock, the fences, move_notify and so on to
> implement it. It is a bug if this series isn't doing that.
Yeah I'm not worried about cpu mmap locking semantics. drm/ttm is a pretty
clear example that you can implement dma-buf mmap with the rules we have,
except unmap_mapping_range might need a bit of fudging with a separate
address_space.
For cpu mmaps I'm more worried about the arch bits in the pte, stuff like
caching mode or encrypted memory bits and things like that. There's
vma->vm_page_prot, but it's a mess. But maybe this all is an incentive to
clean up that mess a bit.
> The iommu cannot support move semantics. It would need the existing
> pin semantics (ie we call dma_buf_pin() and don't support
> move_notify). To work with VFIO we would need to formalize the revoke
> semantics that Simona was discussing.
I thought iommuv2 (or whatever Linux calls these) has full fault support
and could support the current move semantics. But yeah, for iommus without
fault support we need some kind of pin or a newly formalized revoke model.
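[Again purely as an illustration, not from this series: an importer whose device cannot fault would use the existing pin path instead of move_notify, roughly as below. my_pin_and_map is a hypothetical helper; dma_buf_pin()/dma_buf_unpin() and the locking are the real kernel API.]

```c
#include <linux/dma-buf.h>
#include <linux/dma-resv.h>

/*
 * Pin the buffer so the exporter cannot move it, then map it. The
 * caller keeps the pin (and the mapping) until it is done; a future
 * revoke model would add a way for the exporter to yank this back.
 */
static int my_pin_and_map(struct dma_buf_attachment *attach,
			  struct sg_table **sgt_out)
{
	struct sg_table *sgt;
	int ret;

	dma_resv_lock(attach->dmabuf->resv, NULL);

	ret = dma_buf_pin(attach);	/* blocks moves from here on */
	if (ret)
		goto unlock;

	sgt = dma_buf_map_attachment(attach, DMA_BIDIRECTIONAL);
	if (IS_ERR(sgt)) {
		dma_buf_unpin(attach);
		ret = PTR_ERR(sgt);
		goto unlock;
	}

	*sgt_out = sgt;
	ret = 0;
unlock:
	dma_resv_unlock(attach->dmabuf->resv);
	return ret;
}
```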
> We already implement both of these modalities in rdma, the locking API
> is fine and workable with CPU pfns just as well.
>
> I've imagined a staged flow here:
>
> 1) The new DMA API lands
> 2) We improve the new DMA API to be fully struct page free, including
> setting up P2P
> 3) VFIO provides a dmabuf exporter using the new DMA API's P2P
> support. We'd have to continue with the scatterlist hacks for now.
> VFIO would be a move_notify exporter. This should work with RDMA
> 4) RDMA works to drop scatterlist from its internal flows and use the
> new DMA API instead.
> 5) VFIO/RDMA implement a new non-scatterlist DMABUF op to
> demonstrate the post-scatterlist world and deprecate the scatterlist
> hacks.
> 6) We add revoke semantics to dmabuf, and VFIO/RDMA implements them
> 7) iommufd can import a pinnable revokable dmabuf using CPU pfns
> through the non-scatterlist op.
> 8) Relevant GPU drivers implement the non-scatterlist op and RDMA
> removes support for the deprecated scatterlist hacks.
Sounds pretty reasonable as a first sketch of a proper plan. Of course
fully expecting that no plan ever survives implementation intact :-)
Cheers, Sima
>
> Xu's series has jumped ahead a bit and is missing infrastructure to
> build it properly.
>
> Jason
--
Simona Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch