[PATCH v2 4/4] vfio/pci: Allow MMIO regions to be exported through dma-buf
Jason Gunthorpe
jgg at nvidia.com
Tue Sep 6 11:48:28 UTC 2022
On Tue, Sep 06, 2022 at 12:38:44PM +0200, Christian König wrote:
> On 06.09.22 at 11:51, Christoph Hellwig wrote:
> > > +{
> > > +	struct vfio_pci_dma_buf *priv = dmabuf->priv;
> > > +	int rc;
> > > +
> > > +	rc = pci_p2pdma_distance_many(priv->vdev->pdev, &attachment->dev, 1,
> > > +				      true);
> > This should just use pci_p2pdma_distance.
OK
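The single-client version would be roughly this (a sketch against the
quoted hunk, same priv/attachment names; the error handling is a guess):

	rc = pci_p2pdma_distance(priv->vdev->pdev, attachment->dev, true);
	if (rc < 0)
		return -EINVAL;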
> > > +	/*
> > > +	 * Since the memory being mapped is a device memory it could never be in
> > > +	 * CPU caches.
> > > +	 */
> > DMA_ATTR_SKIP_CPU_SYNC doesn't even apply to dma_map_resource, not sure
> > where this wisdom comes from.
Habana driver
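So the attr can simply be dropped, i.e. pass 0 for attrs (again a
sketch against the quoted call, same names as in the patch):

	dma_addr = dma_map_resource(attachment->dev,
				    pci_resource_start(priv->vdev->pdev,
						       priv->index) +
					    priv->offset,
				    priv->dmabuf->size, dir, 0);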
> > > +	dma_addr = dma_map_resource(
> > > +		attachment->dev,
> > > +		pci_resource_start(priv->vdev->pdev, priv->index) +
> > > +			priv->offset,
> > > +		priv->dmabuf->size, dir, DMA_ATTR_SKIP_CPU_SYNC);
> > This is not how P2P addresses are mapped. You need to use
> > dma_map_sgtable and have the proper pgmap for it.
>
> The problem is once more that this is MMIO space, in other words register
> BARs which need to be exported/imported.
>
> Adding struct pages for it generally sounds like the wrong approach here.
> You can't even access this with the CPU, or you would trigger potentially
> unwanted hardware actions.
Right, this whole thing is the "standard" that dmabuf has adopted
instead of struct pages. Once the AMD GPU driver started doing this
some time ago, other drivers followed.

Now we have struct pages, almost, but I'm not sure if their limits are
compatible with VFIO? This has to work for small BARs as well.
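For context, the struct-page route being suggested would look roughly
like the below - a sketch only, nothing here is from the patch, and
pdev/bar/size/dir are placeholders:

	struct sg_table sgt;
	void *vaddr;
	int rc;

	/*
	 * The BAR must first be published as p2pmem so its pages get a
	 * P2PDMA pgmap:
	 */
	rc = pci_p2pdma_add_resource(pdev, bar, size, 0);

	/* the exporter then hands out real pages, not raw bus addresses */
	vaddr = pci_alloc_p2pmem(pdev, size);
	sg_alloc_table(&sgt, 1, GFP_KERNEL);
	sg_set_page(sgt.sgl, virt_to_page(vaddr), size, 0);

	/* and map_dma_buf becomes the usual dma_map_sgtable() call */
	rc = dma_map_sgtable(attachment->dev, &sgt, dir, 0);

As far as I can tell the pgmap side has size/alignment requirements
coming from ZONE_DEVICE, which is exactly my worry for the small BAR
case.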
Jason