[LSF/MM/BPF proposal]: Physr discussion

Jason Gunthorpe jgg at nvidia.com
Mon Jan 23 13:44:38 UTC 2023


On Mon, Jan 23, 2023 at 04:36:25AM +0000, Matthew Wilcox wrote:

> > I've been working on an implementation and hope to have something
> > draft to show on the lists in a few weeks. It is pretty clear there
> > are several interesting decisions to make that I think will benefit
> > from a live discussion.
> 
> Cool!  Here's my latest noodlings:
> https://git.infradead.org/users/willy/pagecache.git/shortlog/refs/heads/phyr
> 
> Just the top two commits; the other stuff is unrelated.  Shakeel has
> also been interested in this.

I've come at this from quite a different starting point - I've been
working from the DMA API upwards: what does dma_map_XX look like, what
APIs do the dma_map_ops implementations need in order to iterate over
the ranges, how do we form and return the DMA-mapped list, how does
P2P, with all of its checks, actually work, and so on. Those questions
help inform what we want from the "phyr" as an API.

The DMA API is the fundamental reason why everything has to use
scatterlist - it is the only way to efficiently DMA map anything more
than a few pages. If we can't solve that then everything else is
useless, IMHO.
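
For reference, the pattern everything is forced into today looks
roughly like this (the normal scatterlist API, error handling omitted;
dev, pages[] and npages come from the caller). Each struct scatterlist
entry carries both the CPU page and the DMA address, which is exactly
the CPU/DMA mixing I'd like to get away from:

    struct sg_table sgt;
    struct scatterlist *sg;
    int i, nents;

    sg_alloc_table(&sgt, npages, GFP_KERNEL);
    for_each_sgtable_sg(&sgt, sg, i)
            sg_set_page(sg, pages[i], PAGE_SIZE, 0);

    nents = dma_map_sg(dev, sgt.sgl, sgt.orig_nents, DMA_TO_DEVICE);
    /* sg_dma_address()/sg_dma_len() now describe the device-visible list */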

If we have agreement on the DMA API side, then things like converting
RDMA to use it and adding it to DMABUF are comparatively straightforward.

There are 24 implementations of dma_map_ops, so my approach is to try
to build a non-leaky 'phyr' API that doesn't actually care how the
physical ranges are stored, separates the CPU side from the DMA side,
and then use that to convert all 24 implementations.
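
One way to get that separation - and this is only a sketch, the
phyr_iter name and layout are just for illustration - is for the core
code to hand the dma_map_ops implementation an iterator over physical
ranges, so it never sees the container they live in:

    /*
     * Hypothetical iterator: dma_map_ops implementations only ever call
     * next(), they never touch the backing container.
     */
    struct phyr_iter {
            /* Returns false when there are no more ranges */
            bool (*next)(struct phyr_iter *iter, phys_addr_t *paddr,
                         size_t *len);
            void *priv;     /* backing store: bio_vec array, packed list, ... */
    };

    /* A dma_map_ops implementation then just walks the ranges: */
    static int foo_map_ranges(struct device *dev, struct phyr_iter *iter,
                              enum dma_data_direction dir)
    {
            phys_addr_t paddr;
            size_t len;

            while (iter->next(iter, &paddr, &len)) {
                    /* program the IOMMU / produce the DMA address for this range */
            }
            return 0;
    }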

With a good API we can fiddle with the exact nature of the phyr as we
like.

I've also been exploring the idea that with a non-leaking API we don't
actually need to settle on one phyr to rule them all. bio_vec can stay
as it is but become directly DMA-mappable, RDMA/DRM can use something
better suited to their page-list use cases (eg 8 bytes/entry rather
than 16), and a non-leaking API can multiplex these different memory
layouts and let a single dma_map_ops implementation work on both.
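
To illustrate the size point with a strawman (this particular encoding
is not something I'm proposing): struct bio_vec is 16 bytes on 64-bit,
while a page-list user that only needs page-aligned ranges could pack a
PFN plus a page count into a single u64, and the iterator-style API
above would hide which of the two actually backs the mapping:

    /* Strawman 8-byte entry: PFN in the high bits, page count in the low 12 */
    typedef u64 packed_phyr_t;

    static inline phys_addr_t packed_phyr_addr(packed_phyr_t e)
    {
            return (phys_addr_t)(e >> 12) << PAGE_SHIFT;
    }

    static inline size_t packed_phyr_len(packed_phyr_t e)
    {
            return (size_t)(e & 0xfff) << PAGE_SHIFT;
    }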

Thanks,
Jason

