Introduction and updates from NVIDIA
James Jones
jajones at nvidia.com
Mon May 16 18:12:35 UTC 2016
On 05/16/2016 02:36 AM, Daniel Vetter wrote:
> On Sat, May 14, 2016 at 05:46:51PM +0100, Daniel Stone wrote:
>> On 12 May 2016 at 00:08, James Jones <jajones at nvidia.com> wrote:
>>> The EGLStream encapsulation takes into consideration the new use cases
>>> EGLImage, GBM, etc. were intended to address, and restores what I believe to
>>> be the minimal amount of the traditional GL+GLX/EGL/etc. model, while still
>>> allowing as much of the flexibility of the "a bunch of buffers" mental model
>>> as possible. We can re-invent that with GBM API adjustments, a set of
>>> restrictions on how the buffers it allocates can be used, and another layer
>>> of metadata being pumped into drivers on top of that, but I suspect we'd
>>> wind up with something that looks very similar to streams.
>>
>> The only allocation GBM does is for buffers produced by the compositor
>> and used for scanout, so in this regard it's quite straightforward.
>> Client buffers are a separate topic, and I don't buy that the
>> non-Streams model precludes things like render compression. In fact,
>> Ben Widawsky, Dan Vetter, and some others are as we speak working on
>> support for render compression within both Wayland EGL and GBM itself
>> (for direct scanout from compressed buffers with an auxiliary plane).
>> So far, the only external impact has been a very small extension to
>> the GBM API to allow use of multiple planes and FB modifiers: a far
>> smaller change than implementing the whole of Streams and all its
>> future extensions (Switch et al).
>
> Just a quick correction: For render compression we also do need some
> allocation hinting interface, since on intel gpus you can't always scan
> out render compressed buffers. So exactly what EGLstreams tries to also
> solve (at least if my understanding is correct). So we need a bit more in
> gbm than just being able to pass fb modifiers around.
Yes, this, and it goes beyond just hinting at allocation time for us if
you intend to reconfigure the output without reallocating the surface
(e.g., switch to a different plane, start rotating the output, etc.).
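For concreteness, here's roughly what I understand the modifier-plumbing
side of that to look like on the GBM/KMS path, using modifier-aware entry
points along the lines of what's being proposed (treat the names as a
sketch of that in-progress work, not a settled API, and note it only
covers allocation-time selection, not reconfiguring after the fact):

    /* Sketch only: assumes modifier-aware GBM entry points along the lines
     * of the proposal Daniel mentions, plus drmModeAddFB2WithModifiers() on
     * the KMS side.  Error handling trimmed. */
    #include <gbm.h>
    #include <xf86drm.h>
    #include <xf86drmMode.h>

    static uint32_t create_scanout_fb(int drm_fd, struct gbm_device *gbm,
                                      uint32_t width, uint32_t height,
                                      const uint64_t *mods, unsigned num_mods)
    {
        /* Let the GBM implementation pick the best layout from the set of
         * modifiers the display side advertised it can scan out from. */
        struct gbm_bo *bo = gbm_bo_create_with_modifiers(gbm, width, height,
                                                         GBM_FORMAT_XRGB8888,
                                                         mods, num_mods);
        if (!bo)
            return 0;

        uint32_t handles[4] = { 0 }, strides[4] = { 0 }, offsets[4] = { 0 };
        uint64_t modifiers[4] = { 0 };
        uint64_t mod = gbm_bo_get_modifier(bo);
        int planes = gbm_bo_get_plane_count(bo);

        /* A compressed buffer may come back with more than one plane
         * (e.g., an auxiliary compression plane). */
        for (int i = 0; i < planes; i++) {
            handles[i]   = gbm_bo_get_handle_for_plane(bo, i).u32;
            strides[i]   = gbm_bo_get_stride_for_plane(bo, i);
            offsets[i]   = gbm_bo_get_offset(bo, i);
            modifiers[i] = mod;
        }

        uint32_t fb_id = 0;
        drmModeAddFB2WithModifiers(drm_fd, width, height, GBM_FORMAT_XRGB8888,
                                   handles, strides, offsets, modifiers,
                                   &fb_id, DRM_MODE_FB_MODIFIERS);
        return fb_id;
    }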
> I still think it's the better approach though since it's still fairly
> incremental. And exposing the allocation hints and making them explicit
> will avoid the need to teach everything in the world about EGLstreams (vk,
> v4l, drm, ...). Which as Daniel Stone pointed out, doesn't really work
> well if you have IP blocks from multiple vendors on your SoC.
> -Daniel
Yeah, IP blocks from multiple vendors are hard. I don't see how they're
any harder with streams, though, vs. the alternative GBM-based proposals
that have been suggested thus far. We're not entirely immune to this at
NVIDIA. Sometimes we want to present to an Intel display engine, for
example. An EGL-based solution doesn't necessarily mean a single
vendor's EGL driver (GLVND is coming, slowly), and even if it does, it
only requires explicit cooperation if both vendors share some layout
more optimal than the baseline: pitch-linear with minimal alignment
requirements, no compression, and either fully-coherent caches or no
caching.
However, there are two ways to solve this:
- Always resort to the lowest common denominator when the
  producer/consumer aren't from the same vendor, as mentioned above.
- Have some sort of coordination, either handled by the application and a
  bunch of capability bits (see the sketch below), or handled by a
  driver<->driver API below the level of the application API.
Neither of these seems specific to a streams-based or a GBM-based
solution to me. The important part is to standardize the interfaces
exposed to applications or drivers to coordinate the right formats.
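To make the capability-bits option above a bit more concrete: in the
application-coordinated variant, each side would report the layouts it
can produce or consume for a given format, and the allocator would pick
from the intersection, falling back to the lowest common denominator when
the sets don't overlap. A trivial sketch (using DRM format modifiers as
the "capability bits"; purely illustrative, not a proposed API):

    /* Illustrative only: producer and consumer each report the layouts
     * (here, DRM format modifiers) they support for a format; the
     * application allocates from the intersection, falling back to
     * pitch-linear when there is no overlap. */
    #include <stdint.h>
    #include <stddef.h>
    #include <drm_fourcc.h>

    static size_t intersect_layouts(const uint64_t *producer, size_t n_prod,
                                    const uint64_t *consumer, size_t n_cons,
                                    uint64_t *out, size_t out_len)
    {
        size_t n = 0;

        for (size_t i = 0; i < n_prod && n < out_len; i++) {
            for (size_t j = 0; j < n_cons; j++) {
                if (producer[i] == consumer[j]) {
                    out[n++] = producer[i];
                    break;
                }
            }
        }

        /* Lowest common denominator: pitch-linear, no compression. */
        if (n == 0 && out_len > 0)
            out[n++] = DRM_FORMAT_MOD_LINEAR;

        return n;
    }

The intersection itself is trivial; the hard part is standardizing where
those lists come from and what they mean across vendors, and that problem
looks the same to me whether it's solved in EGL, in GBM, or below both.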
As to needing to teach everything about EGLStreams, I think there's a
misconception that this means every component vendor needs to get on the
EGL bandwagon and start writing a bunch of no-op eglGetConfigs() entry
points and whatnot. Even with all our in-house IP, that's not the case
at NVIDIA. Our media codecs aren't baked into the same driver module as
our OpenGL drivers, for example, and the drivers and engineers
maintaining them know very little about each other. Our EGL driver
allows stream producers/consumers to plug into it using some
internal-standard interfaces and a relatively minimal amount of code,
and without even including any Khronos EGL headers.
The current Khronos EGL API doesn't need to be the only interface
through which drivers plug in to a libEGL or vendor EGL implementation.
The proposal to expose a vendor-agnostic set of hooks to allow writing
EGL platform implementations without EGL vendor involvement is one
example of a non-application-facing EGL API. EGLStream producer and
consumer hooks could be handled with another non-application-facing API.
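To illustrate (and only to illustrate; this is not our internal
interface, and every name below is made up for the example), such a
producer/consumer hook API could be as simple as a vendor registering a
small table of callbacks with the EGL implementation, without ever
including a Khronos header:

    /* Hypothetical, non-application-facing consumer hook table.  All names
     * are invented for illustration; this is not NVIDIA's internal API. */
    #include <stdint.h>

    struct stream_frame {
        int      dmabuf_fd;     /* or a vendor-private handle */
        uint32_t width, height;
        uint32_t format;        /* fourcc */
        uint64_t layout;        /* opaque layout/compression token */
    };

    struct stream_consumer_hooks {
        /* Called when a producer attaches, so the consumer can report which
         * layouts it accepts and the producer can allocate accordingly. */
        int (*query_layouts)(void *ctx, uint32_t format,
                             uint64_t *layouts, int max, int *count);

        /* Called per frame; the consumer latches the frame and releases it
         * back to the producer when it is done reading. */
        int  (*acquire_frame)(void *ctx, const struct stream_frame *frame);
        void (*release_frame)(void *ctx, const struct stream_frame *frame);
    };

    /* A media codec, camera stack, etc. registers its hooks once and then
     * never touches EGL entry points directly. */
    int egl_internal_register_consumer(const struct stream_consumer_hooks *hooks,
                                       void *ctx);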
> As Kristian says, I really don't see where the existing non-Streams
> solutions, being GBM on the compositor side and private frame-based
> protocols between compositor and client, leave you unable to reach
> full performance potential. Do you have any concrete usecases that you
> can point to in as much detail as possible, outlining exactly how the
> GBM/private-Wayland-protocol model forces you to compromise
> performance?
Unfortunately, the only realistic way to get to the full patchset is
incrementally. We haven't even finished the EGLSwitch extension, let
alone writing Weston code to use it. This is why I believe temporary
co-existence of the two paths is a reasonable approach for now. Not all
the benefits of streams are demonstrable yet, nor is GBM in its final form.
Daniel Stone, I'd like to hear more about how you envision a GBM library
communicating with an EGL producer in a remote process. Would GBM be
sending Wayland protocol directly? If so, this is really starting to
sound like streams-rewritten-using-wayland-protocol, and I don't think
Wayland is the right domain to solve these non-Wayland-specific issues
in. If, on the other hand, GBM is going to gain its own set of
per-vendor cross-process communication mechanisms, that really sounds
like a re-invention of EGLStreams.
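For contrast, my understanding of the per-frame path in the non-streams
model today is roughly: the client's EGL hands a dmabuf plus
format/stride/modifier metadata to the compositor over (currently
private) Wayland protocol, and the compositor wraps it for texturing with
EGL_EXT_image_dma_buf_import. Something like the following, give or take
the metadata actually carried:

    /* Compositor-side import in the non-streams model, as I understand it.
     * Single-plane case, error handling omitted. */
    #include <EGL/egl.h>
    #include <EGL/eglext.h>

    static EGLImageKHR import_client_buffer(EGLDisplay dpy, int dmabuf_fd,
                                            EGLint width, EGLint height,
                                            EGLint fourcc, EGLint stride,
                                            EGLint offset)
    {
        const EGLint attribs[] = {
            EGL_WIDTH,                     width,
            EGL_HEIGHT,                    height,
            EGL_LINUX_DRM_FOURCC_EXT,      fourcc,
            EGL_DMA_BUF_PLANE0_FD_EXT,     dmabuf_fd,
            EGL_DMA_BUF_PLANE0_OFFSET_EXT, offset,
            EGL_DMA_BUF_PLANE0_PITCH_EXT,  stride,
            EGL_NONE
        };

        PFNEGLCREATEIMAGEKHRPROC create_image = (PFNEGLCREATEIMAGEKHRPROC)
            eglGetProcAddress("eglCreateImageKHR");

        /* The resulting EGLImage is then bound as a GL texture (e.g. via
         * glEGLImageTargetTexture2DOES()) for compositing. */
        return create_image(dpy, EGL_NO_CONTEXT, EGL_LINUX_DMA_BUF_EXT,
                            (EGLClientBuffer)NULL, attribs);
    }

That covers the per-frame import fine; the part I don't yet see is how
the consumer's constraints get back to the producer ahead of allocation,
which is the hinting problem Daniel describes above.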
Perhaps both of my assumed solutions above are way off the mark, and it
does seem like we're talking past each other at times. It sounds like you
have a pretty strong understanding of how this would all work in GBM,
even if it's not there in the code yet. I understand you're quite busy,
but perhaps we could have a brief real-time communication session (IRC?
Phone?) where we can talk through some of your ideas for GBM, so we
can at least start from the same basic understanding when talking about
this stuff. Let me know if you want to schedule something.
Thanks,
-James