[PATCH 0/4] cover-letter: Allow MMIO regions to be exported through dmabuf
Christian König
christian.koenig at amd.com
Mon Dec 16 10:21:39 UTC 2024
On 16.12.24 10:54, Wei Lin Guay wrote:
> From: Wei Lin Guay <wguay at meta.com>
>
> This is another attempt to revive the patches posted by Jason
> Gunthorpe and Vivek Kasireddy, at
> https://patchwork.kernel.org/project/linux-media/cover/0-v2-472615b3877e+28f7-vfio_dma_buf_jgg@nvidia.com/
> https://lwn.net/Articles/970751/
>
> In addition to the initial proposal by Jason, another promising
> application is exposing memory from an AI accelerator (bound to VFIO)
> to an RDMA device. This would allow the RDMA device to access the
> accelerator's memory directly, enabling direct data transfers between
> the RDMA device and the accelerator.
>
> Below is the text/motivation from the original cover letter.
>
> dma-buf has become a way to safely acquire a handle to non-struct-page
> memory whose lifetime is still controlled by the exporter. Notably,
> RDMA can now import dma-buf FDs and build them into MRs, which allows
> for PCI P2P operations. Extend this to allow vfio-pci to export MMIO
> memory from PCI device BARs.
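
For context, a minimal sketch of how userspace could consume such an exported
dma-buf on the import side, assuming the dma-buf fd has already been obtained
from the vfio-pci device (the exporting ioctl is defined by this series and is
not shown here). ibv_reg_dmabuf_mr() is the existing rdma-core entry point for
building an MR from a dma-buf fd:

    #include <infiniband/verbs.h>

    /* Register a vfio-pci exported BAR dma-buf as an RDMA MR.
     * dmabuf_fd and len come from the VFIO export step (not shown). */
    static struct ibv_mr *register_bar_as_mr(struct ibv_pd *pd,
                                             int dmabuf_fd, size_t len)
    {
            /* offset 0 into the dma-buf; iova 0 means the caller
             * addresses the MR relative to the start of the buffer */
            return ibv_reg_dmabuf_mr(pd, 0, len, 0, dmabuf_fd,
                                     IBV_ACCESS_LOCAL_WRITE |
                                     IBV_ACCESS_REMOTE_READ |
                                     IBV_ACCESS_REMOTE_WRITE);
    }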
>
> This series supports a use case for SPDK where an NVMe device is owned
> by SPDK through VFIO while interacting with an RDMA device. The RDMA
> device may directly access the NVMe CMB or directly manipulate the
> NVMe device's doorbell using PCI P2P.
>
> However, as a general mechanism, it can support many other scenarios
> with VFIO. I imagine this dma-buf approach will be usable by iommufd
> as well for generic and safe P2P mappings.
>
> This series goes after the "Break up ioctl dispatch functions to one
> function per ioctl" series.
Yeah, that sounds like it should work.
But where is the rest of the series? I only see the cover letter.
>
> v2:
> - Name the new file dma_buf.c
> - Restore orig_nents before freeing
> - Fix reversed logic around priv->revoked
> - Set priv->index
> - Rebased on v2 "Break up ioctl dispatch functions"
> v1: https://lore.kernel.org/r/0-v1-9e6e1739ed95+5fa-vfio_dma_buf_jgg@nvidia.com
> Cc: linux-rdma at vger.kernel.org
> Cc: Oded Gabbay <ogabbay at kernel.org>
> Cc: Christian König <christian.koenig at amd.com>
> Cc: Daniel Vetter <daniel.vetter at ffwll.ch>
> Cc: Leon Romanovsky <leon at kernel.org>
> Cc: Maor Gottlieb <maorg at nvidia.com>
> Cc: dri-devel at lists.freedesktop.org
> Signed-off-by: Jason Gunthorpe <jgg at nvidia.com>
>
> Jason Gunthorpe (3):
> vfio: Add vfio_device_get()
> dma-buf: Add dma_buf_try_get()
That is usually a no-go. We have rejected adding dma_buf_try_get()
multiple times.
Please explain *exactly* what you need that for and how you protect
against races with teardown.
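
The race in question is the classic lookup vs. final-put window. As a purely
illustrative sketch (made-up names, not the dma-buf code), the usual way a
"try get" is made safe is the kref_get_unless_zero()-under-RCU pattern, where
teardown frees the object only after an RCU grace period so a concurrent
lookup can never dereference freed memory:

    #include <linux/container_of.h>
    #include <linux/kref.h>
    #include <linux/rcupdate.h>

    struct foo {
            struct kref ref;
            struct rcu_head rcu;
    };

    /* Succeeds only if the refcount has not already dropped to zero. */
    static struct foo *foo_try_get(struct foo *f)
    {
            if (f && kref_get_unless_zero(&f->ref))
                    return f;
            return NULL;
    }

    static void foo_release(struct kref *ref)
    {
            struct foo *f = container_of(ref, struct foo, ref);

            /* Defer the free so RCU readers doing lookup + foo_try_get()
             * never touch freed memory. */
            kfree_rcu(f, rcu);
    }

The importer must hold rcu_read_lock() (or a lock that teardown also takes)
across the lookup and the try-get; without that, the "try" part alone does
not close the window.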
Regards,
Christian.
> vfio/pci: Allow MMIO regions to be exported through dma-buf
>
> Wei Lin Guay (1):
> vfio/pci: Allow export dmabuf without move_notify from importer
>
> drivers/vfio/pci/Makefile | 1 +
> drivers/vfio/pci/dma_buf.c | 291 +++++++++++++++++++++++++++++
> drivers/vfio/pci/vfio_pci_config.c | 8 +-
> drivers/vfio/pci/vfio_pci_core.c | 44 ++++-
> drivers/vfio/pci/vfio_pci_priv.h | 30 +++
> drivers/vfio/vfio_main.c | 1 +
> include/linux/dma-buf.h | 13 ++
> include/linux/vfio.h | 6 +
> include/linux/vfio_pci_core.h | 1 +
> include/uapi/linux/vfio.h | 18 ++
> 10 files changed, 405 insertions(+), 8 deletions(-)
> create mode 100644 drivers/vfio/pci/dma_buf.c
>
> --
> 2.43.5