Remote display with 3D acceleration using Wayland/Weston

Pekka Paalanen ppaalanen at
Wed Dec 14 09:27:22 UTC 2016

On Tue, 13 Dec 2016 14:39:31 -0600
DRC <dcommander at> wrote:

> Greetings.  I am the founder and principal developer for The VirtualGL
> Project, which has (since 2004) produced a GLX interposer (VirtualGL)
> and a high-speed X proxy (TurboVNC) that are widely used for running
> Linux/Unix OpenGL applications remotely with hardware-accelerated
> server-side 3D rendering.  For those who aren't familiar with VirtualGL,
> it basically works by:


could you be more specific on what you mean by "server-side", please?
Are you referring to the machine where the X server runs, or the
machine that is remote from a user perspective where the app runs?

My confusion is caused by the difference in the X11 vs. Wayland models.
The display server the app connects to is not on the same side in one
model as in the other model.

With X11 (traditional indirect rendering with X11 over network):

Machine A                |                 Machine B
App -> libs --------(X11, GLX)--------> X server -> display
                         |                       -> GPU B

With Wayland apps remoted:

Machine A                         |           Machine B
App                               |
  -> EGL and GL libs -> GPU A     |
  --(wayland)--> Weston ------(VNC/RDP)-------> VNC/RDP viewer -> window system -> display

Wayland apps handle all rendering themselves; there is no facility for
sending rendering commands to another process like the X11 protocol has.

What a Wayland compositor needs to do is advertise support for the EGL
Wayland platform to clients. It does that by using the
EGL_WL_bind_wayland_display extension.

If you want all GL rendering to happen on the machine where the app
runs, then you don't have to do much of anything, as it already works
like that. You only need to make sure the compositor initializes EGL, which
in Weston's case means using the gl-renderer. The renderer does not
have to actually composite anything if you want to remote windows
separately, but it is needed to gain access to the window contents. In
Weston, only the renderer knows how to access the contents of all
windows (wl_surfaces).

If OTOH you want to send GL rendering commands to a machine other than
the one where the app is running, that will require a great deal of work,
since you have to implement serialization and de-serialization of
OpenGL (and EGL) yourself. (It has been done before, do ask me if you
want details.)

> -- Interposing (via LD_PRELOAD) GLX calls from the OpenGL application
> -- Rewriting the GLX calls such that OpenGL contexts are created in
> Pbuffers instead of windows
> -- Redirecting the GLX calls to the server's local display (usually :0,
> which presumably has a GPU attached) rather than the remote display or
> the X proxy
> -- Reading back the rendered 3D images from the server's local display
> and transferring them to the remote display or X proxy when the
> application swaps buffers or performs other "triggers" (such as calling
> glFinish() when rendering to the front buffer)
> There is more complexity to it than that, but that's at least the
> general idea.

Ok, so that sounds like you want the GL execution to happen in the
app-side machine. That's the easy case. :-)

> At the moment, I'm investigating how best to accomplish a similar feat
> in a Wayland/Weston environment.  I'm given to understand that building
> a VNC server on top of Weston is straightforward and has already been
> done as a proof of concept, so really my main question is how to do the
> OpenGL stuff.  At the moment, my (very limited) understanding of the
> architecture seems to suggest that I have two options:

Weston has the RDP backend already, indeed.

> (1) Implement an interposer similar in concept to VirtualGL, except that
> this interposer would rewrite EGL calls to redirect them from the
> Wayland display to a low-level EGL device that supports off-screen
> rendering (such as the devices provided through the
> EGL_PLATFORM_DEVICE_EXT extension, which is currently supported by
> nVidia's drivers.)  How to get the images from that low-level device
> into the Weston compositor when it is using a remote display back-end is
> an open question, but I assume I'd have to ask the compositor for a
> surface (which presumably would be allocated from main memory) and
> handle the transfer of the pixels from the GPU to that surface.  That is
> similar in concept to how VirtualGL currently works, vis-a-vis using
> glReadPixels to transfer the rendered OpenGL pixels into an MIT-SHM image.

I think you have an underlying assumption that EGL and GL would somehow
automatically be carried over the network, and you need to undo it.
That does not happen, as the display server always runs on the same
machine as the application. The Wayland display is always local; it can
never be remote, simply because Wayland can never go over a network.

Furthermore, all GL rendering is always local to the application
process. The application always uses the local GPU "directly", there is
no provision to redirect rendering commands to another process with the
normal EGL and OpenGL libraries. All the Wayland compositor can do is
tell which local GPU device to use.

The problem of "get the images from that low-level device into the
Weston compositor" is already solved, because it is a fundamental part
of the normal operation of any Wayland stack. The real question is what
you will do with the images in the Wayland compositor that has the
remoting backend. The backend already has as direct access as possible
to the buffers a Wayland client is rendering.

> (2) Figure out some way of redirecting the OpenGL rendering within
> Weston itself, rather than using an interposer.  This is where I'm fuzzy
> on the details.  Is this even possible with a remote display back-end?
> Maybe it's as straightforward as writing a back-end that allows Weston
> to use the aforementioned low-level EGL device to obtain all of the
> rendering surfaces that it passes to applications, but I don't have a
> good enough understanding of the architecture to know whether or not
> that idea is nonsense.  I know that X proxies, such as Xvnc, allocate a
> "virtual framebuffer" that is used by the code for performing X11
> rendering.  Because this virtual framebuffer is located in main memory,
> you can't do hardware-accelerated OpenGL with it unless you use a
> solution like VirtualGL.  It would be impractical to allocate the X
> proxy's virtual framebuffer in GPU memory because of the fine-grained
> nature of X11, but since Wayland is all image-based, perhaps that is no
> longer a limitation.

Weston does not execute any OpenGL commands on behalf of apps, so no
problem. :-)

How it works is that the Wayland compositor tells the client side which
GPU to use (if even necessary), and the client side (usually by itself)
allocates the necessary buffers, uses the GPU to render into them, and
sends buffer handles to the compositor when the app calls
eglSwapBuffers(). (This is all hidden inside the EGL implementation;
you will not find any code for it in apps, toolkits, Wayland, or
compositors.)

> Any advice is greatly appreciated.  Thanks for your time.

Christian mentioned Waltham, but Waltham does no good if you are
already going to use VNC protocol or RDP or any other existing
protocol. Waltham is only an IPC library: a function call here will
cause a function to be called there.

Waltham will be the control channel for that design: the plan there is
to render on the app side and send complete frames over the network per
window.

In summary, if you want to keep all GL execution in the same machine as
the application, you don't really have to do anything with Wayland, and
you do not have to mess with EGL or GL libraries. What you want already
happens anyway. The part you have to care about is the Wayland
compositor with the remoting backend. Weston core implements all of the
Wayland protocols; you just need to take the images and send them out
any way you want, and receive input any way you want and feed it into
Weston core.

Well, that's the theory. It's pretty easy if you remote a full desktop
and let the app-side machine run the whole desktop. OTOH, if you want
to remote individual windows, you also need to translate and remote
what we call "the shell protocol extensions", i.e. everything that is
related to window management. That might be complicated, depending on
your display-side machine's window system.


More information about the wayland-devel mailing list