Remote display with 3D acceleration using Wayland/Weston

DRC dcommander at users.sourceforge.net
Wed Dec 14 17:42:54 UTC 2016


On 12/14/16 3:27 AM, Pekka Paalanen wrote:
> could you be more specific on what you mean by "server-side", please?
> Are you referring to the machine where the X server runs, or the
> machine that is remote from a user perspective where the app runs?

Few people use remote X anymore in my industry, so the reality of most
VirtualGL deployments (and all of the commercial VGL deployments of
which I'm aware) is that the X servers and the GPU are all on the
application host, the machine where the applications are actually
executed.  Typically people allocate beefy server hardware with multiple
GPUs, hundreds of gigabytes of memory, and as many as 32-64 CPU cores to
act as VirtualGL servers for 50 or 100 users.  We use the terms "3D X
server" and "2D X server" to indicate where the 3D and 2D rendering is
actually occurring.  The 3D X server is located on the application host
and is usually headless, since it only needs to be used by VirtualGL for
obtaining Pbuffer contexts from the GPU-accelerated OpenGL
implementation (usually nVidia or AMD/ATI.)  There is typically one 3D X
server shared by all users of the machine (VirtualGL allows this
sharing, since it rewrites all of the GLX calls from applications and
automatically converts all of them for off-screen rendering), and the 3D
X server has a separate screen for each GPU.  The 2D X server is usually
an X proxy such as TurboVNC, and there are multiple instances of it (one
or more per user.)  These 2D X server instances are usually located on
the application host but don't necessarily have to be.  The client
machine simply runs a VNC viewer.
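
To make that concrete: a user logged in to their TurboVNC session (the
2D X server, e.g. :1) typically launches 3D applications through
VirtualGL with something along the lines of

  vglrun -d :0.1 ./some_3d_app

where -d (equivalent to setting VGL_DISPLAY) points VirtualGL at the 3D
X server display/screen, and hence the GPU, to use for off-screen
rendering.  (The display numbers are just examples, and "some_3d_app"
is a placeholder.)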

X proxies such as Xvnc do not support hardware-accelerated OpenGL,
because they are implemented on top of a virtual framebuffer stored in
main memory.  The only way to implement hardware-accelerated OpenGL in
that environment is to use "split rendering", which is what VirtualGL
does.  It splits off the 3D rendering to another X server that has a GPU
attached.
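
That split is easy to observe from within a TurboVNC session:

  glxinfo | grep "OpenGL renderer"

reports a software renderer (assuming the X proxy provides software GLX
at all), whereas

  vglrun glxinfo | grep "OpenGL renderer"

should report the GPU attached to the 3D X server, since the GLX/OpenGL
portion of glxinfo has been redirected there.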


> Wayland apps handle all rendering themselves, there is nothing for
> sending rendering commands to another process like the Wayland
> compositor.
> 
> What a Wayland compositor needs to do is to advertise support for EGL
> Wayland platform for clients. That it does by using the
> EGL_WL_bind_wayland_display extension.
> 
> If you want all GL rendering to happen in the machine where the app
> runs, then you don't have to do much anything, it already works like
> that. You only need to make sure the compositor initializes EGL, which
> in Weston's case means using the gl-renderer. The renderer does not
> have to actually composite anything if you want to remote windows
> separately, but it is needed to gain access to the window contents. In
> Weston, only the renderer knows how to access the contents of all
> windows (wl_surfaces).
> 
> If OTOH you want to send GL rendering commands to the other machine
> than where the app is running, that will require a great deal of work,
> since you have to implement serialization and de-serialization of
> OpenGL (and EGL) yourself. (It has been done before, do ask me if you
> want details.)
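
If I understand the compositor side of that correctly, the hookup
amounts to something like the following rough, untested sketch
(egl_display and wl_display are assumed to have been created elsewhere,
and the function pointer type is declared locally just for
illustration):

  #include <EGL/egl.h>
  #include <wayland-server.h>

  typedef EGLBoolean (*BindWaylandDisplayWL_t)(EGLDisplay,
                                               struct wl_display *);

  static void
  advertise_egl_wayland_platform(EGLDisplay egl_display,
                                 struct wl_display *wl_display)
  {
      /* A real compositor would first verify that
         EGL_WL_bind_wayland_display appears in
         eglQueryString(egl_display, EGL_EXTENSIONS). */
      BindWaylandDisplayWL_t bind_display = (BindWaylandDisplayWL_t)
          eglGetProcAddress("eglBindWaylandDisplayWL");

      if (!bind_display || !bind_display(egl_display, wl_display))
          return;  /* clients fall back to wl_shm buffers */

      /* From here on, clients can hand the compositor GPU-rendered
         wl_buffers, which the gl-renderer can import and read back. */
  }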

But if you run OpenGL applications in Weston, as it is currently
implemented, then the OpenGL applications are either GPU-accelerated or
not, depending on the back end used.  If you run Weston nested in a
Wayland compositor that is already GPU-accelerated, then OpenGL
applications run in the Weston session will be GPU-accelerated as well.
If you run Weston with the RDP back end, then OpenGL applications run in
the Weston session will use Mesa llvmpipe instead.  I'm trying to
understand, quite simply, whether it's possible for unmodified Wayland
OpenGL applications-- such as the example OpenGL applications in the
Weston source-- to take advantage of OpenGL GPU acceleration when they
are running with the RDP back end.  (I'm assuming that whatever
restrictions there are on the RDP back end would exist for the TurboVNC
back end I intend to develop.)  My testing thus far indicates that this
is not currently possible, but I need to understand the source of the
limitation so that I can figure out how to work around it.  Instead, you seem
to be telling me that the limitation doesn't exist, but I can assure you
that it does.  Please test Weston with the RDP back end and confirm that
OpenGL applications run in that environment are not GPU-accelerated.
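
For reference, what I'm testing is essentially this (option names from
memory, so check 'weston --help'):

  weston --backend=rdp-backend.so --rdp-tls-cert=... --rdp-tls-key=...

then connecting with an RDP client such as xfreerdp and running one of
the example GL clients (e.g. weston-simple-egl) inside that session.
Adding a glGetString(GL_RENDERER) printout to such a client (or see the
sketch further down) shows Mesa llvmpipe rather than the GPU.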


> I think you have an underlying assumption that EGL and GL would somehow
> automatically be carried over the network, and you need to undo it.
> That does not happen, as the display server always runs in the same
> machine as the application. The Wayland display is always local, it can
> never be remote simply because Wayland can never go over a network.

No I don't have that assumption at all, because that does not currently
occur with VirtualGL.  VirtualGL is designed precisely to avoid that
situation.  The problem is quite simply:  In Weston, as it is currently
implemented, OpenGL applications are not GPU-accelerated when using the
RDP back end.  I'm trying to figure out if it is possible to make them
GPU-accelerated when using the RDP back end.


> Furthermore, all GL rendering is always local to the application
> process. The application always uses the local GPU "directly", there is
> no provision to redirect rendering commands to another process with the
> normal EGL and OpenGL libraries. All the Wayland compositor can do is
> tell which local GPU device to use.
> 
> The problem of "get the images from that low-level device into the
> Weston compositor" is already solved, because it is a fundamental part
> of the normal operation of any Wayland stack. The real question is what
> you will do with the images in the Wayland compositor that has the
> remoting backend. The backend already has as direct access as possible
> to the buffers a Wayland client is rendering.

But that's not how it currently works.  If you use Weston with the RDP
back end, the application does not use the GPU.  When it obtains an EGL
context from the Weston compositor, it is given a software OpenGL renderer.
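
To be specific about what I'm observing: a trivial client along these
lines (a rough sketch, not code from the Weston tree; error handling
omitted; build with something like
'cc client.c $(pkg-config --cflags --libs wayland-client wayland-egl egl glesv2)')
reports the GPU when run under an accelerated Wayland session, but
reports llvmpipe under the RDP back end:

  #include <stdio.h>
  #include <string.h>
  #include <wayland-client.h>
  #include <wayland-egl.h>
  #include <EGL/egl.h>
  #include <GLES2/gl2.h>

  static struct wl_compositor *compositor;

  static void
  handle_global(void *data, struct wl_registry *registry, uint32_t name,
                const char *interface, uint32_t version)
  {
      if (strcmp(interface, "wl_compositor") == 0)
          compositor = wl_registry_bind(registry, name,
                                        &wl_compositor_interface, 1);
  }

  static void
  handle_global_remove(void *data, struct wl_registry *registry,
                       uint32_t name)
  {
  }

  static const struct wl_registry_listener registry_listener = {
      handle_global, handle_global_remove
  };

  int
  main(void)
  {
      struct wl_display *display = wl_display_connect(NULL);
      struct wl_registry *registry = wl_display_get_registry(display);

      wl_registry_add_listener(registry, &registry_listener, NULL);
      wl_display_roundtrip(display);

      /* A throwaway surface/window, only so that we can create an EGL
         context and see which renderer libEGL gives us. */
      struct wl_surface *surface = wl_compositor_create_surface(compositor);
      struct wl_egl_window *native = wl_egl_window_create(surface, 64, 64);

      EGLDisplay egl_display = eglGetDisplay((EGLNativeDisplayType)display);
      eglInitialize(egl_display, NULL, NULL);
      eglBindAPI(EGL_OPENGL_ES_API);

      static const EGLint config_attribs[] = {
          EGL_SURFACE_TYPE, EGL_WINDOW_BIT,
          EGL_RENDERABLE_TYPE, EGL_OPENGL_ES2_BIT,
          EGL_NONE
      };
      EGLConfig config;
      EGLint n;
      eglChooseConfig(egl_display, config_attribs, &config, 1, &n);

      static const EGLint context_attribs[] = {
          EGL_CONTEXT_CLIENT_VERSION, 2, EGL_NONE
      };
      EGLContext context = eglCreateContext(egl_display, config,
                                            EGL_NO_CONTEXT, context_attribs);
      EGLSurface egl_surface = eglCreateWindowSurface(egl_display, config,
          (EGLNativeWindowType)native, NULL);

      eglMakeCurrent(egl_display, egl_surface, egl_surface, context);

      /* Under the RDP back end this prints "llvmpipe ..." rather than
         the name of the GPU. */
      printf("GL_RENDERER: %s\n", (const char *)glGetString(GL_RENDERER));

      /* In a real client, eglSwapBuffers() is the point at which the
         finished buffer is handed off to the compositor. */

      wl_display_disconnect(display);
      return 0;
  }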


> Weston does not execute any OpenGL commands on behalf of apps, so no
> problem. :-)
> 
> How it works is that (and this is all hidden inside libEGL.so and you
> will not find any code for it in apps, toolkits, Wayland, or compositors)
> the Wayland compositor tells the client-side which GPU to use (if even
> necessary), and the client-side (usually by itself) allocates the
> necessary buffers, uses GPU to render into them, and sends buffer
> handles to the compositor when the app calls eglSwapBuffers().

Again, not how it currently works when using Weston with the RDP back end.

