Remote display with 3D acceleration using Wayland/Weston
DRC
dcommander at users.sourceforge.net
Tue Dec 13 20:39:31 UTC 2016
Greetings. I am the founder and principal developer of The VirtualGL
Project, which has (since 2004) produced a GLX interposer (VirtualGL)
and a high-speed X proxy (TurboVNC) that are widely used for running
Linux/Unix OpenGL applications remotely with hardware-accelerated
server-side 3D rendering. For those who aren't familiar with VirtualGL,
it basically works by:
-- Interposing (via LD_PRELOAD) GLX calls from the OpenGL application
-- Rewriting the GLX calls such that OpenGL contexts are created in
Pbuffers instead of windows
-- Redirecting the GLX calls to the server's local display (usually :0,
which presumably has a GPU attached) rather than the remote display or
the X proxy
-- Reading back the rendered 3D images from the server's local display
and transferring them to the remote display or X proxy when the
application swaps buffers or performs other "triggers" (such as calling
glFinish() when rendering to the front buffer)
There is more complexity to it than that, but that's at least the
general idea.
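
To make the interposition step concrete, here is a minimal sketch of
the LD_PRELOAD pattern in C. It intercepts only glXSwapBuffers and
locates the real entry point with dlsym(RTLD_NEXT, ...); the readback
and transport work hinted at in the comment is where the real effort
lies, so treat this as an illustration of the mechanism rather than
VirtualGL's actual code:

    /* interpose.c -- build with:
       gcc -shared -fPIC -o libinterpose.so interpose.c -ldl */
    #define _GNU_SOURCE
    #include <dlfcn.h>
    #include <stdio.h>
    #include <GL/glx.h>

    void glXSwapBuffers(Display *dpy, GLXDrawable drawable)
    {
        /* Find the "real" glXSwapBuffers in the next library on the
           link chain (normally libGL). */
        static void (*real_swap)(Display *, GLXDrawable);
        if (!real_swap)
            real_swap = (void (*)(Display *, GLXDrawable))
                dlsym(RTLD_NEXT, "glXSwapBuffers");

        /* This is one of the "triggers" mentioned above: a real
           interposer would read back the rendered frame here and
           send it to the remote display or X proxy. */
        fprintf(stderr, "frame boundary on drawable 0x%lx\n",
                (unsigned long)drawable);

        real_swap(dpy, drawable);
    }

Running an application as "LD_PRELOAD=./libinterpose.so glxgears"
then prints a line at every buffer swap, which is exactly the hook
point at which VirtualGL performs its readback.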
At the moment, I'm investigating how best to accomplish a similar feat
in a Wayland/Weston environment. I'm given to understand that building
a VNC server on top of Weston is straightforward and has already been
done as a proof of concept, so really my main question is how to do the
OpenGL stuff. My (very limited) understanding of the architecture
suggests that I have two options:
(1) Implement an interposer similar in concept to VirtualGL, except that
this interposer would rewrite EGL calls to redirect them from the
Wayland display to a low-level EGL device that supports off-screen
rendering (such as the devices provided through the
EGL_PLATFORM_DEVICE_EXT extension, which is currently supported by
NVIDIA's drivers). How to get the images from that low-level device
into the Weston compositor when it is using a remote display back-end is
an open question, but I assume I'd have to ask the compositor for a
surface (which presumably would be allocated from main memory) and
handle the transfer of the pixels from the GPU to that surface. That is
similar in concept to how VirtualGL currently works, namely using
glReadPixels to transfer the rendered OpenGL pixels into an MIT-SHM image.
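
For reference, here is a rough sketch of what that device-based EGL
setup might look like, assuming a driver that exposes
EGL_EXT_device_enumeration and EGL_EXT_platform_device (all error
handling omitted). The readback at the end mirrors the glReadPixels
step just mentioned; how the resulting pixels would reach the
compositor's surface remains the open question:

    #include <stdlib.h>
    #include <EGL/egl.h>
    #include <EGL/eglext.h>
    #include <GL/gl.h>

    /* Open an EGLDisplay on the first enumerated GPU -- no X server
       or Wayland display required. */
    static EGLDisplay open_device_display(void)
    {
        PFNEGLQUERYDEVICESEXTPROC queryDevices =
            (PFNEGLQUERYDEVICESEXTPROC)
            eglGetProcAddress("eglQueryDevicesEXT");
        PFNEGLGETPLATFORMDISPLAYEXTPROC getPlatformDisplay =
            (PFNEGLGETPLATFORMDISPLAYEXTPROC)
            eglGetProcAddress("eglGetPlatformDisplayEXT");

        EGLDeviceEXT devices[8];
        EGLint n = 0;
        queryDevices(8, devices, &n);
        EGLDisplay dpy = getPlatformDisplay(EGL_PLATFORM_DEVICE_EXT,
                                            devices[0], NULL);
        eglInitialize(dpy, NULL, NULL);
        return dpy;
    }

    /* Render off-screen into a Pbuffer, then pull the frame back
       into main memory, where it could be copied into whatever
       surface the compositor hands out. */
    static void *render_and_read_back(EGLDisplay dpy, int w, int h)
    {
        static const EGLint cfg_attribs[] = {
            EGL_SURFACE_TYPE, EGL_PBUFFER_BIT,
            EGL_RENDERABLE_TYPE, EGL_OPENGL_BIT,
            EGL_RED_SIZE, 8, EGL_GREEN_SIZE, 8, EGL_BLUE_SIZE, 8,
            EGL_NONE
        };
        EGLConfig cfg;
        EGLint ncfg;
        eglChooseConfig(dpy, cfg_attribs, &cfg, 1, &ncfg);

        const EGLint pb_attribs[] = { EGL_WIDTH, w, EGL_HEIGHT, h,
                                      EGL_NONE };
        EGLSurface pb = eglCreatePbufferSurface(dpy, cfg, pb_attribs);

        eglBindAPI(EGL_OPENGL_API);
        EGLContext ctx = eglCreateContext(dpy, cfg, EGL_NO_CONTEXT,
                                          NULL);
        eglMakeCurrent(dpy, pb, pb, ctx);

        /* ... application rendering would happen here ... */

        void *pixels = malloc((size_t)w * h * 4);
        glReadPixels(0, 0, w, h, GL_RGBA, GL_UNSIGNED_BYTE, pixels);
        return pixels;
    }

An interposer along these lines would create the application's
contexts against the device display instead of the Wayland display,
much as VirtualGL redirects GLX contexts into Pbuffers today.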
(2) Figure out some way of redirecting the OpenGL rendering within
Weston itself, rather than using an interposer. This is where I'm fuzzy
on the details. Is this even possible with a remote display back-end?
Maybe it's as straightforward as writing a back-end that allows Weston
to use the aforementioned low-level EGL device to obtain all of the
rendering surfaces that it passes to applications, but I don't have a
good enough understanding of the architecture to know whether or not
that idea is nonsense. I know that X proxies, such as Xvnc, allocate a
"virtual framebuffer" that is used by the X.org code for performing X11
rendering. Because this virtual framebuffer is located in main memory,
you can't do hardware-accelerated OpenGL with it unless you use a
solution like VirtualGL. It would be impractical to allocate the X
proxy's virtual framebuffer in GPU memory because of the fine-grained
nature of X11, but since Wayland is all image-based, perhaps that is no
longer a limitation.
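
As a side note on the "image-based" point: a Wayland client hands the
compositor complete frames as buffers rather than issuing fine-grained
drawing commands, which is why the X11 objection may not apply. Below
is a rough sketch of the client-side submission path through wl_shm
(display/registry/surface setup omitted; create_shm_file is a
hypothetical helper, e.g. built on memfd_create(2)):

    #include <string.h>
    #include <sys/mman.h>
    #include <unistd.h>
    #include <wayland-client.h>

    /* Hypothetical helper: returns an fd backing 'size' bytes of
       anonymous shared memory. */
    extern int create_shm_file(int size);

    static void submit_frame(struct wl_shm *shm,
                             struct wl_surface *surface,
                             const void *pixels, int width, int height)
    {
        int stride = width * 4;            /* XRGB8888 */
        int size = stride * height;

        int fd = create_shm_file(size);
        void *map = mmap(NULL, size, PROT_READ | PROT_WRITE,
                         MAP_SHARED, fd, 0);
        memcpy(map, pixels, size);         /* e.g. glReadPixels output */

        struct wl_shm_pool *pool = wl_shm_create_pool(shm, fd, size);
        struct wl_buffer *buffer = wl_shm_pool_create_buffer(
            pool, 0, width, height, stride, WL_SHM_FORMAT_XRGB8888);
        wl_shm_pool_destroy(pool);
        close(fd);

        /* The compositor receives the finished frame in one step.
           (The buffer should be destroyed once the compositor
           releases it; omitted here.) */
        wl_surface_attach(surface, buffer, 0, 0);
        wl_surface_damage(surface, 0, 0, width, height);
        wl_surface_commit(surface);
        munmap(map, size);
    }

Whether Weston could instead hand out GPU-resident buffers under a
remote back-end, and thereby skip the copy entirely, is exactly the
question raised above.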
Any advice is greatly appreciated. Thanks for your time.
DRC