[RFC PATCH] drm/virtio: Export resource handles via DMA-buf API
tfiga at chromium.org
Fri Sep 13 08:31:50 UTC 2019
On Fri, Sep 13, 2019 at 5:07 PM Gerd Hoffmann <kraxel at redhat.com> wrote:
> > > > To seamlessly enable buffer sharing with drivers using such frameworks,
> > > > make the virtio-gpu driver expose the resource handle as the DMA address
> > > > of the buffer returned from the DMA-buf mapping operation. Arguably, the
> > > > resource handle is a kind of DMA address already, as it is the buffer
> > > > identifier that the device needs to access the backing memory, which is
> > > > exactly the same role a DMA address provides for native devices.
> > First of all, thanks for taking a look at this.
> > > No. A scatter list has guest dma addresses, period. Stuffing something
> > > else into a scatterlist is asking for trouble, things will go seriously
> > > wrong when someone tries to use such a fake scatterlist as real scatterlist.
> > What is a "guest dma address"? The definition of a DMA address in the
> > Linux DMA API is an address internal to the DMA master address space.
> > For virtio, the resource handle namespace may be such an address
> > space.
> No. DMA master address space in virtual machines is pretty much the
> same as it is on physical machines. So, on x86 without iommu, identical
> to (guest) physical address space. You can't re-define it like that.
That's not true. Even on x86 without an IOMMU, the DMA address space can
differ from the physical address space. The mapping could still be just a
simple addition or subtraction of a constant, but the two address spaces
are explicitly defined without any guarantee that such a simple mapping
between them exists.
To quote the kernel DMA API documentation: "A CPU cannot reference a
dma_addr_t directly because there may be translation between its physical
address space and the DMA address space."
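To illustrate the point, the existing DMA API already treats dma_addr_t as an opaque, device-side token rather than a physical address. A minimal sketch (not part of the patch under discussion; the function name example_map is purely illustrative) of how a driver obtains and uses one:

```c
/*
 * Illustrative sketch only: a driver obtains a dma_addr_t via
 * dma_map_single() and programs it into the device.  The CPU never
 * dereferences the returned handle; it may differ from both the CPU
 * virtual address and the physical address of the buffer.
 */
#include <linux/dma-mapping.h>

static int example_map(struct device *dev, void *buf, size_t len)
{
	dma_addr_t dma_handle;

	/* Returns an address in the *device's* address space. */
	dma_handle = dma_map_single(dev, buf, len, DMA_TO_DEVICE);
	if (dma_mapping_error(dev, dma_handle))
		return -ENOMEM;

	/* ... hand dma_handle to the device, e.g. via its registers ... */

	dma_unmap_single(dev, dma_handle, len, DMA_TO_DEVICE);
	return 0;
}
```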
> > However, we could as well introduce a separate DMA address
> > space if resource handles are not the right way to refer to the memory
> > from other virtio devices.
> s/other virtio devices/other devices/
> dma-bufs are for buffer sharing between devices, not limited to virtio.
> You can't re-define that in some virtio-specific way.
We don't need to limit this to virtio devices only. In fact, I foresee
this having a use case with the emulated USB host controller, which is
not a virtio device.
That said, I deliberately referred to virtio to keep the scope of the
problem under control. If there is a solution that could work without
such an assumption, I'm of course more than open to discussing it.
> > > Also note that "the DMA address of the buffer" is bonkers in virtio-gpu
> > > context. virtio-gpu resources are not required to be physically
> > > contigous in memory, so typically you actually need a scatter list to
> > > describe them.
> > There is no such requirement even on a bare metal system, see any
> > system that has an IOMMU, which is typical on ARM/ARM64. The DMA
> > address space must be contiguous only from the DMA master point of
> > view.
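As context for the scatterlist discussion above, this is how a non-contiguous set of pages is typically described and mapped in the kernel; a hedged sketch (example_map_pages is an illustrative name, not code from the patch), showing that the number of device-visible DMA segments can be smaller than the number of pages when an IOMMU coalesces the mapping:

```c
/*
 * Illustrative sketch: describe a physically non-contiguous buffer
 * with a scatterlist and map it for DMA.  With an IOMMU present, the
 * device may see fewer (possibly just one) contiguous segments.
 */
#include <linux/scatterlist.h>
#include <linux/dma-mapping.h>

static int example_map_pages(struct device *dev, struct page **pages,
			     unsigned int npages)
{
	struct sg_table sgt;
	int nents, ret;

	ret = sg_alloc_table_from_pages(&sgt, pages, npages, 0,
					(size_t)npages << PAGE_SHIFT,
					GFP_KERNEL);
	if (ret)
		return ret;

	nents = dma_map_sg(dev, sgt.sgl, sgt.orig_nents, DMA_BIDIRECTIONAL);
	if (nents == 0) {
		sg_free_table(&sgt);
		return -ENOMEM;
	}

	/*
	 * nents is the number of DMA segments from the device's point of
	 * view; it can be smaller than orig_nents if an IOMMU merged
	 * adjacent IOVA ranges.  Walk them with for_each_sg() as needed.
	 */

	dma_unmap_sg(dev, sgt.sgl, sgt.orig_nents, DMA_BIDIRECTIONAL);
	sg_free_table(&sgt);
	return 0;
}
```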
> Yes, the iommu (if present) could remap your scatterlist that way. You
> can't depend on that though.
The IOMMU doesn't need to exist physically, though. After all, guest
memory may not be physically contiguous on the host already, yet by your
definition of a DMA address we would still refer to it as contiguous. As
I understand it, anything that lets the DMA master access the target
memory qualifies as a DMA address, and there is no need for an IOMMU in
between.
> What is the plan here? Host-side buffer sharing I guess? So you are
> looking for some way to pass buffer handles from the guest to the host,
> even in case those buffers are not created by your driver but imported
> from somewhere else?
Exactly. The very first scenario we want to start with is allocating
host memory through virtio-gpu and using that memory both as the output
of a video decoder and as an input (texture) to Virgil3D. The memory
needs to be allocated specifically by the host, because only the host
knows the memory allocation requirements of the video decode accelerator
hardware.