egl streams, trying to gain some knowledge
daniel at fooishbar.org
Thu Apr 2 09:39:24 PDT 2015
My two cents, which largely parallels Jason's ...
On 2 April 2015 at 08:35, Dave Airlie <airlied at gmail.com> wrote:
> So nvidia have indicated they would like to use an EGLstreams based
> solution to enable wayland on their binary driver stack at some point.
No real surprise, I guess. NVIDIA have long pushed EGLStreams, but we
already _have_ an EGL API for Wayland (wl_display + wl_surface +
wl_egl_surface), and we're not going to change it.
> I'm just trying to gauge how people in mesa/wayland feel about this as
> a solution, is it a solution looking for a problem, when you have
> EGLstreams everything looks like a nail type situation etc
I can see the attraction of the idea, and I can see why you'd want to
pick a display API to extend to avoid the pain of interfacing clients,
display servers, media content, etc, but EGL is really just not that API.
For the 3D client <-> server usecase, we already have native APIs for
those, and I don't really see that it adds anything over
eglSwapBuffers in that case, aside from perhaps helping in some corner
cases.
For the display server <-> display hardware usecase, firstly we
already have KMS, and secondly, the only thing Streams would buy us is
the ability to transparently pipe client content into a hardware
display pipeline (think overlay). But we've spent so long getting to
atomic and only just made it: why on earth would we throw that away
now? How is it even supposed to fit into an atomic display
configuration pipeline? Even if we decide to throw atomicity away,
what happens when we need to do mutually-dependent reconfiguration -
how do I know when an EGLOutput has actually been released by its
stream? Does it block until the relevant kernel/hw reconfiguration has
been completed (thus destroying atomicity), or does it return
immediately and just make you guess/poll at its status (which, tbh,
isn't really any better)?
For the media client <-> anything usecase, EGLStreams is useless as it
offers nothing in the way of timing other than a fixed latency, which
isn't good enough. What happens when I have to switch between doing a
blit pass and flipping directly to an overlay? What happens when I
move from my local laptop panel output to a HDMI output, which has an
intermediate sink which introduces 100ms of latency (something
queryable through CEA)? Saying 'the client should insert the frame at
the exact appropriate point' isn't good enough for media.
The biggest problem though, is that EGL just totally abrogates event
handling. For things like fence handling (and even buffer
release/handover), all it offers is query and wait-until-complete
APIs, rather than any kind of events or signalling. This is pretty
horrendous for both power usage and stable timing, and that alone
rules it out entirely from my point of view.
I've seen quite a few issues with the presentation side of things, but
I've never found myself wishing for more, rather than less, EGL. I
think gbm offers a good (if imperfect) model, and would like to see
more platforms allowing the user to control buffer allocation and
presentation.
> Also if anyone has any idea if any other EGL vendors are heading down
> this road, or if this is a one-company extension, ratified to KHR
> because nobody objected.
I've never seen it implemented, no. Maybe comparing it with OpenWF
is unfair, but maybe not. Either way, it's not going to happen for
Wayland, or at least if it does, it's not going to displace the
standard existing stable API/ABI.