[PATCH 0/4] cover-letter: Allow MMIO regions to be exported through dmabuf
Kasireddy, Vivek
vivek.kasireddy at intel.com
Mon Dec 16 17:34:50 UTC 2024
Hi Wei Lin,
> Subject: [PATCH 0/4] cover-letter: Allow MMIO regions to be exported
> through dmabuf
>
> From: Wei Lin Guay <wguay at meta.com>
>
> This is another attempt to revive the patches posted by Jason
> Gunthorpe and Vivek Kasireddy, at
> https://patchwork.kernel.org/project/linux-media/cover/0-v2-472615b3877e+28f7-vfio_dma_buf_jgg@nvidia.com/
> https://lwn.net/Articles/970751/
v2: https://lore.kernel.org/dri-devel/20240624065552.1572580-1-vivek.kasireddy@intel.com/
The v2 posting addresses review comments from Alex and Jason and also adds
the ability to create the dmabuf from multiple ranges, which is really
needed to future-proof the feature.
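As a rough sketch of the uapi direction (the struct and field names below
are illustrative only and may not match the final version), the multi-range
creation would go through the existing VFIO_DEVICE_FEATURE ioctl with
something like:

  /* Hypothetical uapi sketch -- names are illustrative only. */
  struct vfio_region_dma_range {
          __u64 offset;   /* offset into the region/BAR */
          __u64 length;   /* length of this range */
  };

  struct vfio_device_feature_dma_buf {
          __u32 region_index;     /* BAR/region to export from */
          __u32 open_flags;       /* flags for the returned dmabuf fd */
          __u32 nr_ranges;        /* entries in dma_ranges[] */
          struct vfio_region_dma_range dma_ranges[];
  };

The idea being that each entry in dma_ranges[] describes one chunk of the
region and the exporter stitches them into a single dmabuf.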
Also, my understanding is that this patchset cannot proceed until Leon's series is merged:
https://lore.kernel.org/kvm/cover.1733398913.git.leon@kernel.org/
Thanks,
Vivek
>
> In addition to the initial proposal by Jason, another promising
> application is exposing memory from an AI accelerator (bound to VFIO)
> to an RDMA device. This would allow the RDMA device to access the
> accelerator's memory directly, enabling direct data transfers between
> the two devices.
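To make that flow concrete: once userspace has a dmabuf fd for the
accelerator's memory (assumed here to come from the new VFIO export
ioctl), the RDMA side can register it with the existing dmabuf MR
support in libibverbs, e.g.:

  #include <infiniband/verbs.h>

  /* dmabuf_fd: assumed to be obtained from vfio-pci's dmabuf export;
   * pd: a protection domain on the RDMA device. Error handling trimmed. */
  struct ibv_mr *register_accel_mem(struct ibv_pd *pd, int dmabuf_fd,
                                    size_t len)
  {
          /* Builds an MR directly on top of the dmabuf so the HCA can
           * issue PCI P2P reads/writes to the exporter's memory. */
          return ibv_reg_dmabuf_mr(pd, 0 /* offset */, len, 0 /* iova */,
                                   dmabuf_fd,
                                   IBV_ACCESS_LOCAL_WRITE |
                                   IBV_ACCESS_REMOTE_READ |
                                   IBV_ACCESS_REMOTE_WRITE);
  }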
>
> Below is the motivation text from the original cover letter.
>
> dma-buf has become a way to safely acquire a handle to non-struct-page
> memory that can still have its lifetime controlled by the exporter.
> Notably, RDMA can now import dma-buf FDs and build them into MRs,
> which allows for PCI P2P operations. Extend this to allow vfio-pci to
> export MMIO memory from PCI device BARs.
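For reference, an in-kernel importer that wants such P2P mappings attaches
dynamically and provides a move_notify callback, so the exporter (e.g.
vfio-pci revoking access across a reset) can pull the mapping back.
Roughly (the my_* names are stand-ins for driver-local code):

  #include <linux/dma-buf.h>

  static void my_move_notify(struct dma_buf_attachment *attach)
  {
          /* Called with the dma_resv lock held when the exporter moves
           * or revokes the buffer; tear down our DMA mappings here. */
          my_dev_invalidate(attach->importer_priv);
  }

  static const struct dma_buf_attach_ops my_attach_ops = {
          .allow_peer2peer = true,        /* importer can DMA to peer MMIO */
          .move_notify = my_move_notify,
  };

  /* attach = dma_buf_dynamic_attach(dmabuf, dev, &my_attach_ops, priv); */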
>
> This series supports a use case for SPDK where an NVMe device will be
> owned by SPDK through VFIO but interact with an RDMA device. The RDMA
> device may directly access the NVMe CMB or directly manipulate the
> NVMe device's doorbell using PCI P2P.
>
> However, as a general mechanism, it can support many other scenarios
> with VFIO. I expect this dmabuf approach to be usable by iommufd as
> well for generic and safe P2P mappings.
>
> This series goes after the "Break up ioctl dispatch functions to one
> function per ioctl" series.
>
> v2:
> - Name the new file dma_buf.c
> - Restore orig_nents before freeing
> - Fix reversed logic around priv->revoked
> - Set priv->index
> - Rebased on v2 "Break up ioctl dispatch functions"
> v1: https://lore.kernel.org/r/0-v1-9e6e1739ed95+5fa-vfio_dma_buf_jgg@nvidia.com
> Cc: linux-rdma at vger.kernel.org
> Cc: Oded Gabbay <ogabbay at kernel.org>
> Cc: Christian König <christian.koenig at amd.com>
> Cc: Daniel Vetter <daniel.vetter at ffwll.ch>
> Cc: Leon Romanovsky <leon at kernel.org>
> Cc: Maor Gottlieb <maorg at nvidia.com>
> Cc: dri-devel at lists.freedesktop.org
> Signed-off-by: Jason Gunthorpe <jgg at nvidia.com>
>
> Jason Gunthorpe (3):
> vfio: Add vfio_device_get()
> dma-buf: Add dma_buf_try_get()
> vfio/pci: Allow MMIO regions to be exported through dma-buf
>
> Wei Lin Guay (1):
> vfio/pci: Allow export dmabuf without move_notify from importer
>
> drivers/vfio/pci/Makefile | 1 +
> drivers/vfio/pci/dma_buf.c | 291 +++++++++++++++++++++++++++++
> drivers/vfio/pci/vfio_pci_config.c | 8 +-
> drivers/vfio/pci/vfio_pci_core.c | 44 ++++-
> drivers/vfio/pci/vfio_pci_priv.h | 30 +++
> drivers/vfio/vfio_main.c | 1 +
> include/linux/dma-buf.h | 13 ++
> include/linux/vfio.h | 6 +
> include/linux/vfio_pci_core.h | 1 +
> include/uapi/linux/vfio.h | 18 ++
> 10 files changed, 405 insertions(+), 8 deletions(-)
> create mode 100644 drivers/vfio/pci/dma_buf.c
>
> --
> 2.43.5