[virglrenderer-devel] coherent memory access for virgl

Gerd Hoffmann kraxel at redhat.com
Thu Mar 14 11:02:53 UTC 2019


> Hi Gerd,
> 
> Thanks for the comments.
> 
> > Ok, looking again.
> > 
> > I guess we should start with just the virtio protocol header changes and
> > the virtio-gpu ioctl changes.
> > 
> > On the virtio protocol:
> > 
> >   * I'd suggest to take virtio_vsock.h, then simplify the messages
> >     (addressing for example) and add the fields we need to pass gem
> >     buffer handles.
> 
> Sounds good, but I guess we would use resource IDs instead of GEM handles?
> That's what the existing protocol messages use (and I'm not sure QEMU ever
> knows about virtio-gpu GEM handles).

Yes, sure, we must map the gem handles to resource ids before sending
them to the host.

> What about credit negotiation, do you think we need that in this case?

Well, yes, I think we need some kind of flow control.  The sender needs
to know how much buffer space the receiver has, so it doesn't flood the
receiver with requests it can't handle.  Relying on the virtqueue alone
for that has the drawback that, with multiple connections sharing one
virtqueue, a single stalled connection will prevent all other
connections from sending data.

> >   * rx struct looks strange.  You can have protocol buffers following
> >     the header for both tx and rx.  data + pfns should not be needed.
> 
> Guess all this will be replaced by vsock-style payload messages?

Yes, I think we should do that.

> >   * Not sure winsrv is a great name for this.  I can imagine people
> >     find other use cases for this.
> 
> Yeah, another use case I have heard of since is that of applications in the
> guest acquiring camera frames from the host via some other IPC mechanism
> (PipeWire, Mojo, etc).
> 
> I think that the missing piece that we are trying to come up with is IPC
> with zero-copy of graphic buffers, so maybe we can just call it IPC within
> the virtio-gpu namespace?

Hmm, "ipc" is "inter-process", which I find confusing in that context too.
Maybe simply "stream"?

> >   * Do we actually need a RX ioctl?  We could support read() +
> >     write() on the file handle returned by connect.
> 
> Well, I guess we also want to be able to receive FDs that the host sends.

Hmm.  That'll be tricky.  It's the guest that creates resources, by
design.  Easiest for host -> guest data transfer would be for the guest
to still create the resource, and have the host fill it with data (from
the camera, for example).

> > Idea:
> > 
> >   * Can we hand out an socket file handle to userspace, so the sendmsg
> >     syscall works?  We don't need a TX ioctl then for file descriptor
> >     passing.  As far as I know netlink uses sockets for kernel <-> userspace
> >     communication too, so there shouldn't be fundamental roadblocks.
> >     Didn't investigate that in detail though.
> 
> From what I can see, only Unix domain sockets support FD passing:
> 
> https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=76dadd76
> 
> Are you proposing adding SCM_RIGHTS support to netlink sockets?

I'm mentioning netlink sockets because userspace has one end and kernel
space the other.  Maybe this is possible with unix sockets too.

Failing that, allowing SCM_RIGHTS on netlink sockets would be another
way to handle it.

The advantage I see is that it might be easier to add support to
existing userspace code.

But maybe it is easier with ioctls after all.

cheers,
  Gerd
