[virglrenderer-devel] coherent memory access for virgl

Tomeu Vizoso tomeu.vizoso at collabora.com
Thu Mar 14 09:21:36 UTC 2019


On 3/13/19 10:33 AM, Gerd Hoffmann wrote:
> On Wed, Mar 13, 2019 at 08:40:00AM +0100, Tomeu Vizoso wrote:
>> On 3/13/19 8:34 AM, Gerd Hoffmann wrote:
>>>     Hi,
>>>
>>>>>> That'll probably work best.  Also a virtio protocol extension.
>>>>>
>>>>> Ok, then if you think this is the correct approach, I will work on
>>>>> rebasing the series below:
>>>>>
>>>>> https://lkml.org/lkml/2018/1/26/311
>>>>
>>>> Hi Gerd,
>>>>
>>>> do you have any remaining concerns about this approach?
>>>
>>> Don't remember the details.  Can you just post (or mail privately) what
>>> you have right now, so I can have a look at it?
>>
>> I haven't done any further work since I last sent
>> https://lkml.org/lkml/2018/1/26/312 . Please tell me if anything isn't clear
>> from the cover letter and patches.

Hi Gerd,

Thanks for the comments.

> Ok, looking again.
> 
> I guess we should start with just the virtio protocol header changes and
> the virtio-gpu ioctl changes.
> 
> On the virtio protocol:
> 
>   * I'd suggest taking virtio_vsock.h, then simplifying the messages
>     (addressing, for example) and adding the fields we need to pass gem
>     buffer handles.

Sounds good, but I guess we would use resource IDs instead of GEM 
handles? That's what the existing protocol messages use (and I'm not sure 
QEMU ever knows about virtio-gpu GEM handles).
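
To make sure we mean the same thing: I'm picturing something roughly
like this, with resource IDs where vsock has its addressing. All names
and fields below are just a sketch on my side, not from the series:

/* Hypothetical wire format, loosely modelled on virtio_vsock.h: a
 * fixed header, optionally followed by resource IDs and payload. */
struct virtio_gpu_ipc_hdr {
	__le32 type;          /* CONNECT, DISCONNECT, DATA, ... */
	__le32 conn_id;       /* per-device connection identifier */
	__le32 num_resources; /* count of __le32 resource IDs that follow */
	__le32 len;           /* payload bytes after the resource IDs */
};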

What about credit negotiation, do you think we need that in this case?

>   * Using client_fd as a connection identifier isn't going to fly.  File
>     handles are not unique; each process has its own fd namespace.

Makes sense; I will generate unique connection IDs with an IDR instead.
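
A minimal sketch of that allocation, assuming a hypothetical struct
virtio_gpu_conn holding the per-connection state:

#include <linux/idr.h>
#include <linux/spinlock.h>

static DEFINE_IDR(conn_idr);
static DEFINE_SPINLOCK(conn_lock);

/* Hand out IDs that are unique per device rather than per process,
 * unlike file descriptors. */
static int virtio_gpu_conn_id_alloc(struct virtio_gpu_conn *conn)
{
	int id;

	spin_lock(&conn_lock);
	id = idr_alloc(&conn_idr, conn, 1, 0, GFP_NOWAIT);
	spin_unlock(&conn_lock);

	return id; /* >= 1 on success, negative errno on failure */
}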

>   * rx struct looks strange.  You can have protocol buffers following
>     the header for both tx and rx.  data + pfns should not be needed.

I guess all of this will be replaced by vsock-style payload messages?
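
If so, tx and rx could share one layout where the payload simply
follows the header in the same buffer, as vsock does. Continuing the
hypothetical sketch from above:

/* A DATA message: hdr.len gives the number of payload bytes that
 * immediately follow, for both tx and rx, so no separate data/pfns
 * fields are needed. */
struct virtio_gpu_ipc_data {
	struct virtio_gpu_ipc_hdr hdr; /* hdr.type = VIRTIO_GPU_IPC_DATA */
	__u8 payload[];                /* hdr.len bytes */
};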

>   * Not sure winsrv is a great name for this.  I can imagine people
>     finding other use cases for it.

Yeah, another use case I have heard of since then is applications in 
the guest acquiring camera frames from the host via some other IPC 
mechanism (PipeWire, Mojo, etc.).

I think the missing piece we are trying to come up with is IPC with 
zero-copy passing of graphics buffers, so maybe we can just call it IPC 
within the virtio-gpu namespace?

>   * On connect we probably want to allow indicating the protocol we
>     want to run, so messages are forwarded to the correct server/proxy
>     on the host side.

Definitely.
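
For example, the connect message could carry a protocol identifier
that the host uses to pick the server/proxy to forward to. Again just
a hypothetical sketch:

enum virtio_gpu_ipc_protocol {
	VIRTIO_GPU_IPC_PROTO_WAYLAND = 1,
	VIRTIO_GPU_IPC_PROTO_PIPEWIRE = 2,
	/* ... */
};

struct virtio_gpu_ipc_connect {
	struct virtio_gpu_ipc_hdr hdr; /* hdr.type = VIRTIO_GPU_IPC_CONNECT */
	__le32 protocol;               /* enum virtio_gpu_ipc_protocol */
};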

> On the ioctls:
> 
>   * The connect ioctl can just return the file handle.

Ack.
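
On the uapi side I imagine something along these lines, modelled on
the existing drm_virtgpu_* ioctls (the names and the ioctl number are
placeholders):

/* Connect returns a file descriptor for the new connection in fd;
 * protocol selects the host-side server/proxy as discussed above. */
struct drm_virtgpu_ipc_connect {
	__u32 protocol; /* in */
	__s32 fd;       /* out */
};

#define DRM_VIRTGPU_IPC_CONNECT 0x0a
#define DRM_IOCTL_VIRTGPU_IPC_CONNECT \
	DRM_IOWR(DRM_COMMAND_BASE + DRM_VIRTGPU_IPC_CONNECT, \
		 struct drm_virtgpu_ipc_connect)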

>   * Do we actually need an RX ioctl?  We could support read() +
>     write() on the file handle returned by connect.

Well, I guess we also want to be able to receive FDs that the host sends.
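
So an RX ioctl (or the socket idea below) would still be needed, so
that the kernel can install the received FDs into the caller's fd
table and report them. A hypothetical sketch:

#define VIRTGPU_IPC_MAX_FDS 8

struct drm_virtgpu_ipc_rx {
	__u64 data;                     /* in: userspace buffer */
	__u32 len;                      /* in: buffer size, out: bytes read */
	__u32 num_fds;                  /* out: valid entries in fds[] */
	__s32 fds[VIRTGPU_IPC_MAX_FDS]; /* out: installed file descriptors */
};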

> Idea:
> 
>   * Can we hand out a socket file handle to userspace, so the sendmsg
>     syscall works?  We don't need a TX ioctl then for file descriptor
>     passing.  As far as I know netlink uses sockets for kernel <->
>     userspace communication too, so there shouldn't be fundamental
>     roadblocks.  Didn't investigate that in detail though.

From what I can see, only Unix domain sockets support FD passing:

https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=76dadd76

Are you proposing adding SCM_RIGHTS support to netlink sockets?
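
For reference, this is the receive-side pattern we would need an
equivalent of; with AF_UNIX it takes a recvmsg() carrying an
SCM_RIGHTS control message:

#include <string.h>
#include <sys/socket.h>
#include <sys/uio.h>

/* Receive one byte of data plus one file descriptor over a Unix
 * domain socket; returns the fd, or -1 on error. */
static int recv_fd(int sock)
{
	char data;
	struct iovec iov = { .iov_base = &data, .iov_len = 1 };
	union {
		struct cmsghdr align;
		char buf[CMSG_SPACE(sizeof(int))];
	} u;
	struct msghdr msg = {
		.msg_iov = &iov,
		.msg_iovlen = 1,
		.msg_control = u.buf,
		.msg_controllen = sizeof(u.buf),
	};
	struct cmsghdr *cmsg;
	int fd = -1;

	if (recvmsg(sock, &msg, 0) < 0)
		return -1;

	cmsg = CMSG_FIRSTHDR(&msg);
	if (cmsg && cmsg->cmsg_level == SOL_SOCKET &&
	    cmsg->cmsg_type == SCM_RIGHTS)
		memcpy(&fd, CMSG_DATA(cmsg), sizeof(fd));

	return fd;
}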

Thanks,

Tomeu