[PATCH 0/4] cover-letter: Allow MMIO regions to be exported through dmabuf
Kasireddy, Vivek
vivek.kasireddy at intel.com
Wed Feb 26 07:55:07 UTC 2025
Hi Wei Lin,
[...]
>
> Yeah, the mmap handler is really needed as a debugging tool, given that the
> importer would not be able to provide access to the dmabuf's underlying
> memory via the CPU in any other way.
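>
> As a rough illustration (this is just the standard dma-buf CPU-access uapi,
> error handling omitted), such a debug dump would look something like:
>
>     #include <sys/mman.h>
>     #include <sys/ioctl.h>
>     #include <linux/dma-buf.h>
>
>     /* mmap() the dmabuf fd and bracket the CPU reads with DMA_BUF_IOCTL_SYNC */
>     static void debug_dump_dmabuf(int dmabuf_fd, size_t len)
>     {
>             struct dma_buf_sync sync = {
>                     .flags = DMA_BUF_SYNC_START | DMA_BUF_SYNC_READ,
>             };
>             void *p = mmap(NULL, len, PROT_READ, MAP_SHARED, dmabuf_fd, 0);
>
>             ioctl(dmabuf_fd, DMA_BUF_IOCTL_SYNC, &sync);
>             /* ... hexdump or otherwise inspect p[0..len) here ... */
>             sync.flags = DMA_BUF_SYNC_END | DMA_BUF_SYNC_READ;
>             ioctl(dmabuf_fd, DMA_BUF_IOCTL_SYNC, &sync);
>             munmap(p, len);
>     }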
>
> - Rather than handle different regions within a single dma-buf, would
>   having vfio-user open multiple distinct file descriptors work?
>   For our specific use case, we don't require multiple regions and prefer
>   Jason’s original patch.
>
> Restricting the dmabuf to a single region (or having to create multiple
> dmabufs to represent multiple regions/ranges associated with a single
> scattered buffer) would not be feasible or ideal in all cases. For instance,
> in my use-case, I am sharing a large framebuffer (FB) located in GPU's VRAM.
> And, allocating a large FB contiguously (nr_ranges = 1) in VRAM is not
> possible when there is memory pressure.
>
> Furthermore, since we are adding a new UAPI with this patch/feature, we
> cannot go back and tweak it later (to add support for nr_ranges > 1) should
> there be a need in the future, whereas you can always use nr_ranges = 1
> anytime. That is why it makes sense to be flexible in terms of the number
> of ranges/regions.
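>
> Just to make that concrete, the create request carries an array of ranges
> rather than a single (offset, length) pair, roughly along these lines (the
> exact layout is in the UAPI patch of this series):
>
>     struct vfio_region_dma_range {
>             __u32   region_index;
>             __u32   __pad;
>             __u64   offset;        /* byte offset into the region */
>             __u64   length;        /* length of this range in bytes */
>     };
>
>     struct vfio_device_feature_dma_buf {
>             __u32   open_flags;
>             __u32   nr_ranges;     /* nr_ranges = 1 still covers the simple case */
>             struct vfio_region_dma_range dma_ranges[];
>     };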
>
> Also, my understanding is that this patchset cannot proceed until Leon's
> series is merged:
>
> https://lore.kernel.org/kvm/cover.1733398913.git.leon@kernel.org/
>
> Thanks for the pointer.
> I will rebase my local patch series on top of that.
>
>
> AFAIK, Leon's work includes new mechanisms/APIs to do P2P DMA, which we
> should be using in this patchset. And, I think he is also planning to use
> the new APIs to augment dmabuf usage and not be forced to use the
> scatter-gather list, particularly in cases where the underlying memory is
> not backed by struct page.
>
> I was just waiting for all of this to happen before posting a v3.
>
> Is there any update or ETA for the v3? Are there any ways we can help?
I believe Leon's series is very close to getting merged. Once it lands, this series can
be revived.
>
> Additionally, do you have any repo that we can access to begin validating our
> user API changes? This would greatly aid us in our software development.
Sure, here is the branch associated with this series (v2):
https://gitlab.freedesktop.org/Vivek/drm-tip/-/commits/vfio_dmabuf_2
Note that the above branch is based off of kernel 6.10, but I think it shouldn't be
too hard to forklift the patches onto 6.14. Also, here is the Qemu branch that
includes patches that demonstrate and make use of this new feature:
https://gitlab.freedesktop.org/Vivek/qemu/-/commits/vfio_dmabuf_2
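
To give a rough idea of the userspace flow (the dmabuf feature struct/ioctl
names below are taken from the v2 branch above and may still change before
merging; the RDMA side uses the stock ibv_reg_dmabuf_mr() from rdma-core;
error handling omitted):

    #include <fcntl.h>
    #include <sys/ioctl.h>
    #include <linux/vfio.h>
    #include <infiniband/verbs.h>

    /* Ask vfio-pci to export one BAR range as a dmabuf. The
     * vfio_device_feature_dma_buf/vfio_region_dma_range definitions are
     * provided by the branch above (not in mainline headers yet). */
    static int bar_range_to_dmabuf(int device_fd, __u32 region_index,
                                   __u64 offset, __u64 length)
    {
            char buf[sizeof(struct vfio_device_feature) +
                     sizeof(struct vfio_device_feature_dma_buf) +
                     sizeof(struct vfio_region_dma_range)] = { 0 };
            struct vfio_device_feature *feature = (void *)buf;
            struct vfio_device_feature_dma_buf *dma_buf = (void *)feature->data;

            feature->argsz = sizeof(buf);
            feature->flags = VFIO_DEVICE_FEATURE_GET | VFIO_DEVICE_FEATURE_DMA_BUF;
            dma_buf->open_flags = O_RDWR;
            dma_buf->nr_ranges = 1;
            dma_buf->dma_ranges[0].region_index = region_index;
            dma_buf->dma_ranges[0].offset = offset;
            dma_buf->dma_ranges[0].length = length;

            /* On success, the ioctl returns the new dmabuf fd */
            return ioctl(device_fd, VFIO_DEVICE_FEATURE, feature);
    }

    /* An importer such as an RDMA application can then turn the fd into a MR */
    static struct ibv_mr *dmabuf_to_mr(struct ibv_pd *pd, int dmabuf_fd,
                                       size_t length)
    {
            return ibv_reg_dmabuf_mr(pd, 0 /* offset */, length, 0 /* iova */,
                                     dmabuf_fd,
                                     IBV_ACCESS_LOCAL_WRITE |
                                     IBV_ACCESS_REMOTE_READ |
                                     IBV_ACCESS_REMOTE_WRITE);
    }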

On a different note, if it is not too much trouble, could you please reply to emails
in plain text (the preferred format for mailing lists) instead of HTML?
Thanks,
Vivek
>
> Thanks,
> Wei Lin
>
> Thanks,
> Vivek
>
> In addition to the initial proposal by Jason, another promising application
> is exposing memory from an AI accelerator (bound to VFIO) to an RDMA device.
> This would allow the RDMA device to directly access the accelerator's
> memory, thereby facilitating direct data transactions between the RDMA
> device and the accelerator.
>
> Below is the text/motivation from the original cover letter.
>
> dma-buf has become a way to safely acquire a handle to non-struct page
> memory that can still have lifetime controlled by the exporter. Notably
> RDMA can now import dma-buf FDs and build them into MRs which allows for
> PCI P2P operations. Extend this to allow vfio-pci to export MMIO memory
> from PCI device BARs.
>
> This series supports a use case for SPDK where a NVMe device will be owned
> by SPDK through VFIO but interacting with a RDMA device. The RDMA device
> may directly access the NVMe CMB or directly manipulate the NVMe device's
> doorbell using PCI P2P.
>
> However, as a general mechanism, it can support many other scenarios with
> VFIO. I imagine this dmabuf approach to be usable by iommufd as well for
> generic and safe P2P mappings.
>
> This series goes after the "Break up ioctl dispatch functions to one
> function per ioctl" series.
>
> v2:
>  - Name the new file dma_buf.c
>  - Restore orig_nents before freeing
>  - Fix reversed logic around priv->revoked
>  - Set priv->index
>  - Rebased on v2 "Break up ioctl dispatch functions"
> v1: https://lore.kernel.org/r/0-v1-9e6e1739ed95+5fa-vfio_dma_buf_jgg at nvidia.com
> Cc: linux-rdma at vger.kernel.org
> Cc: Oded Gabbay <ogabbay at kernel.org>
> Cc: Christian König <christian.koenig at amd.com>
> Cc: Daniel Vetter <daniel.vetter at ffwll.ch>
> Cc: Leon Romanovsky <leon at kernel.org>
> Cc: Maor Gottlieb <maorg at nvidia.com>
> Cc: dri-devel at lists.freedesktop.org
> Signed-off-by: Jason Gunthorpe <jgg at nvidia.com>
>
> Jason Gunthorpe (3):
>   vfio: Add vfio_device_get()
>   dma-buf: Add dma_buf_try_get()
>   vfio/pci: Allow MMIO regions to be exported through dma-buf
>
> Wei Lin Guay (1):
>   vfio/pci: Allow export dmabuf without move_notify from importer
>
>  drivers/vfio/pci/Makefile          |   1 +
>  drivers/vfio/pci/dma_buf.c         | 291 +++++++++++++++++++++++++++++
>  drivers/vfio/pci/vfio_pci_config.c |   8 +-
>  drivers/vfio/pci/vfio_pci_core.c   |  44 ++++-
>  drivers/vfio/pci/vfio_pci_priv.h   |  30 +++
>  drivers/vfio/vfio_main.c           |   1 +
>  include/linux/dma-buf.h            |  13 ++
>  include/linux/vfio.h               |   6 +
>  include/linux/vfio_pci_core.h      |   1 +
>  include/uapi/linux/vfio.h          |  18 ++
>  10 files changed, 405 insertions(+), 8 deletions(-)
>  create mode 100644 drivers/vfio/pci/dma_buf.c
>
> --
> 2.43.5
>