Remote display with 3D acceleration using Wayland/Weston

Emil Velikov emil.l.velikov at gmail.com
Fri Feb 24 16:25:03 UTC 2017


On 24 February 2017 at 09:36, Pekka Paalanen <ppaalanen at gmail.com> wrote:
> On Thu, 23 Feb 2017 17:51:24 -0600
> DRC <dcommander at users.sourceforge.net> wrote:
>
>> On 12/15/16 3:01 AM, Pekka Paalanen wrote:
>> > The current RDP-backend is written to set up and use only the Pixman
>> > renderer. The Pixman renderer is a software renderer, and will not
>> > initialize EGL in the compositor. Therefore no support for hardware
>> > accelerated OpenGL gets advertised to clients, and clients fall back
>> > to software GL.
>> >
>> > You can fix this purely by modifying the libweston/compositor-rdp.c
>> > file, adding support for initializing the GL-renderer. Then you get
>> > hardware accelerated GL support for all Wayland clients without any
>> > other modifications anywhere.
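
For concreteness, a rough sketch of what that initialization could look
like, following the pattern compositor-drm.c uses. Treat it as a sketch
rather than working code: the gl_renderer_interface and the
display_create() arguments vary between Weston versions, and the b->gbm
field is an assumption here (a gbm_device opened on a render node, as
discussed further down).

static struct gl_renderer_interface *gl_renderer;

static int
rdp_init_gl_renderer(struct rdp_backend *b)
{
        gl_renderer = weston_load_module("gl-renderer.so",
                                         "gl_renderer_interface");
        if (!gl_renderer)
                return -1;

        /* b->gbm is assumed: a gbm_device opened on a render node.
         * display_create() arguments differ between Weston versions,
         * so check the tree you build against. */
        return gl_renderer->display_create(b->compositor,
                                           EGL_PLATFORM_GBM_KHR, b->gbm,
                                           NULL,
                                           gl_renderer->opaque_attribs,
                                           NULL, 0);
}
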
>> >
>> > The reason this has not been done already is that it was thought that
>> > having clients use hardware OpenGL while the compositor does not could
>> > not be performant enough to justify the effort. It also pulls in a
>> > dependency on the EGL and GL libraries, which are huge. Obviously your
>> > use case is different, and this rationale does not apply.
>> >
>> > The hardest part in adding the support to the RDP-backend is
>> > implementing the buffer content access efficiently. RDP requires pixel
>> > data in system memory so the CPU can read it, but GL-renderer has all
>> > pixel data in graphics memory which often cannot be directly read by
>> > the CPU. Accessing that pixel data requires a copy (glReadPixels), and
>> > there is nowadays a helper, weston_surface_copy_content(); however,
>> > the function is not efficient and is so far meant only for debugging
>> > and testing.
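
For reference, the readback itself is conceptually just a
glReadPixels() at the end of each repaint. A minimal sketch; note that
GL_BGRA_EXT assumes GL_EXT_read_format_bgra on GLES (otherwise read
GL_RGBA and swizzle), and GL returns rows bottom-up, so they need
flipping before handing them to RDP:

static void
read_back_frame(int width, int height, uint32_t *dst)
{
        /* dst must hold width * height * 4 bytes; rows arrive
         * bottom-up, so flip them while handing off to the encoder */
        glPixelStorei(GL_PACK_ALIGNMENT, 4);
        glReadPixels(0, 0, width, height,
                     GL_BGRA_EXT, GL_UNSIGNED_BYTE, dst);
}
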
>>
>> I am attempting to modify the RDP backend to prove the concept that
>> hardware-accelerated OpenGL is possible with a remote display backend,
>> but my lack of familiarity with the code is making this very
>> challenging.  It seems that the RDP backend uses Pixman both for GL
>> rendering and also to maintain its framebuffer in main memory
>> (shadow_surface.)  Is that correct?  If so, then it seems that I would
>> need to continue using the shadow surface but use gl_renderer instead of
>> the Pixman renderer, then implement my own method of transferring pixels
>> from the GL renderer to the shadow surface at the end of every frame (?)
>
> That is pretty much the case, yes. I suppose you could also just let
> the GL-renderer maintain the framebuffer and only read it out for
> transmission rather than maintaining a shadow copy, but the difference
> is mostly just conceptual.
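
Roughly, the place this would hook in is the output's repaint handler.
rdp_output_repaint() already exists in compositor-rdp.c; the
readback/encode step is the hypothetical part, the rest follows what
the backends of this era do:

static int
rdp_output_repaint(struct weston_output *output,
                   pixman_region32_t *damage)
{
        struct weston_compositor *ec = output->compositor;

        /* the GL renderer draws the frame into GPU memory */
        ec->renderer->repaint_output(output, damage);

        /* hypothetical step: read the frame back (see the
         * glReadPixels sketch above) and encode the damaged
         * region for the RDP peer */

        pixman_region32_subtract(&ec->primary_plane.damage,
                                 &ec->primary_plane.damage, damage);
        return 0;
}
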
>
>>  I've been trying to work from compositor-wayland.c as a template, but
>> it's unclear how everything connects, which parts of that code I need in
>> order to implement hardware acceleration, and which parts are
>> unnecessary.  I would appreciate it if someone who has familiarity with
>> the RDP backend could give me some targeted advice.
>
> I cannot help with the RDP-specifics.
>
> Since this compositor is essentially headless on the local machine, you
> would want to use DRM render nodes instead of KMS nodes for accessing
> the GPU. The KMS node would be reserved by any display server running
> for the local monitors.
>
> You would initialize EGL somehow to use a render node. I can't really
> provide a good suggestion for an architecture off-hand, but maybe these
> could help:
> https://www.khronos.org/registry/EGL/extensions/EXT/EGL_EXT_platform_device.txt
> https://www.khronos.org/registry/EGL/extensions/KHR/EGL_KHR_platform_gbm.txt
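
A minimal sketch of the GBM route, assuming a fixed render node path
for brevity (a real backend should enumerate the render nodes, e.g.
via libdrm, instead of hard-coding one):

#include <fcntl.h>
#include <gbm.h>
#include <EGL/egl.h>
#include <EGL/eglext.h>

static EGLDisplay
egl_display_from_render_node(void)
{
        PFNEGLGETPLATFORMDISPLAYEXTPROC get_platform_display =
                (void *) eglGetProcAddress("eglGetPlatformDisplayEXT");
        struct gbm_device *gbm;
        EGLDisplay dpy;
        int fd;

        /* example path only; enumerate render nodes in real code */
        fd = open("/dev/dri/renderD128", O_RDWR | O_CLOEXEC);
        if (fd < 0 || !get_platform_display)
                return EGL_NO_DISPLAY;

        gbm = gbm_create_device(fd);
        if (!gbm)
                return EGL_NO_DISPLAY;

        dpy = get_platform_display(EGL_PLATFORM_GBM_KHR, gbm, NULL);
        if (dpy == EGL_NO_DISPLAY ||
            !eglInitialize(dpy, NULL, NULL))
                return EGL_NO_DISPLAY;

        return dpy;
}
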
>
FYI:

One can use EGL_EXT_device_drm to get the master fd, but we need
another extension for the render node.
I've got some work in progress on the topic - both EGL Device support
in Mesa and the new extension - and need to see if I can finish it in
the coming days.
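
For reference, the existing query looks roughly like this
(eglQueryDevicesEXT and eglQueryDeviceStringEXT come from
EGL_EXT_device_enumeration/EGL_EXT_device_query;
EGL_DRM_DEVICE_FILE_EXT names the master node, which is why the render
node needs the extra extension):

PFNEGLQUERYDEVICESEXTPROC query_devices =
        (void *) eglGetProcAddress("eglQueryDevicesEXT");
PFNEGLQUERYDEVICESTRINGEXTPROC query_device_string =
        (void *) eglGetProcAddress("eglQueryDeviceStringEXT");
EGLDeviceEXT devices[8];
EGLint i, count = 0;

if (query_devices && query_device_string &&
    query_devices(8, devices, &count)) {
        for (i = 0; i < count; i++) {
                const char *node =
                        query_device_string(devices[i],
                                            EGL_DRM_DEVICE_FILE_EXT);
                /* e.g. "/dev/dri/card0" - the master node */
        }
}
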

-Emil

