[PATCH weston 07/11] gl-renderer: Always setup gl-renderer
Armin Krezović
krezovic.armin at gmail.com
Wed Jun 22 10:05:01 UTC 2016
On 21.06.2016 11:13, Pekka Paalanen wrote:
> On Sat, 18 Jun 2016 19:15:22 +0200
> Armin Krezović <krezovic.armin at gmail.com> wrote:
>
>> Currently, the gl-renderer setup is being done on per-output
>> basis. This isn't desirable when trying to make weston run
>> with zero outputs.
>>
>> When there are no outputs present, there is no EGLContext or
>> EGLCurrent available, and trying to use an EGL extension will
>> result in a crash.
>
> Hi Armin,
>
> what is that EGLCurrent you also mentioned in your blog?
>
> I'm not aware of any such object. If you refer to eglMakeCurrent(), it
> means "make the given EGLContext and the read and write EGLSurfaces
> current in this thread", which will affect all context-sensitive
> function calls, including everything of GL and several EGL functions,
> most notably eglSwapBuffers.
>
This was a misinterpretation on my side. That's what happens when you
spend too much time looking at weston's surface/view mapping code :).
After your brief explanation on IRC, I've updated my blog post and
will update this description too for the next round.
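To make sure I have the model right now: there is no separate object,
just the eglMakeCurrent() call that gl_renderer_setup() ends up making
with whatever surface it is given, roughly (illustration only):

    eglMakeCurrent(gr->egl_display, gr->dummy_surface,
                   gr->dummy_surface, gr->egl_context);

i.e. the dummy surface gets bound as both the draw and the read
surface for this thread.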
>> The problem is solved by creating a dummy surface at startup,
>> either by using EGL_KHR_surfaceless_context or PbufferSurface
>> to setup a context, and creating an EGLContext and EGLCurrent
>> with the help of dummy surface.
>
> Surfaceless is not really creating a dummy surface. ;-)
>
Heh, yeah. I'll reword that to "EGL_NO_SURFACE or PbufferSurface".
Does that make more sense?
>> When an output gets plugged in, its configuration will replace
>> dummy buffer usage as the setup will be run again.
>
> Hmm, re-running setup does not sound right, but let's see below.
>
This was again a misunderstanding and an oversight on my side.
gl_renderer_setup() is only run once; gl_renderer_output_create()
won't run it if there's already a context present.
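The existing guard in gl_renderer_output_create() (roughly, from
memory) already short-circuits it:

    if (gr->egl_context == NULL)
        if (gl_renderer_setup(ec, egl_surface) < 0)
            return -1;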
>> Signed-off-by: Armin Krezović <krezovic.armin at gmail.com>
>> ---
>> src/gl-renderer.c | 32 ++++++++++++++++++++++++++++++++
>> 1 file changed, 32 insertions(+)
>>
>> diff --git a/src/gl-renderer.c b/src/gl-renderer.c
>> index 23c0cd7..7e2a787 100644
>> --- a/src/gl-renderer.c
>> +++ b/src/gl-renderer.c
>> @@ -175,6 +175,8 @@ struct gl_renderer {
>> EGLContext egl_context;
>> EGLConfig egl_config;
>>
>> + EGLSurface dummy_surface;
>> +
>> struct wl_array vertices;
>> struct wl_array vtxcnt;
>>
>> @@ -201,6 +203,8 @@ struct gl_renderer {
>>
>> int has_configless_context;
>>
>> + int has_surfaceless_context;
>> +
>> int has_dmabuf_import;
>> struct wl_list dmabuf_images;
>>
>> @@ -2654,6 +2658,8 @@ gl_renderer_destroy(struct weston_compositor *ec)
>> wl_list_for_each_safe(image, next, &gr->dmabuf_images, link)
>> dmabuf_image_destroy(image);
>>
>> + eglDestroySurface(gr->egl_display, gr->dummy_surface);
>
> I think this call needs to be conditional, since you are not guaranteed
> to create the dummy surface. EGL 1.4 says for eglDestroySurface: "An
> EGL_BAD_SURFACE error is generated if surface is not a valid rendering
> surface."
>
I didn't see the spec mention anything about the behavior when a
surfaceless context is available and EGL_NO_SURFACE is what gets
passed to eglDestroySurface. In the pbuffer case, the renderer would
terminate anyway if it got an invalid surface back from
eglCreatePbufferSurface.
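If it does turn out to matter, the guard is simple enough (just a
sketch, assuming dummy_surface stays EGL_NO_SURFACE in the
surfaceless case):

    if (gr->dummy_surface != EGL_NO_SURFACE)
        eglDestroySurface(gr->egl_display, gr->dummy_surface);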
>> +
>> eglTerminate(gr->egl_display);
>> eglReleaseThread();
>>
>> @@ -2765,6 +2771,9 @@ gl_renderer_setup_egl_extensions(struct weston_compositor *ec)
>> gr->has_configless_context = 1;
>> #endif
>>
>> + if (check_extension(extensions, "EGL_KHR_surfaceless_context"))
>> + gr->has_surfaceless_context = 1;
>> +
>
> Good.
>
>> #ifdef EGL_EXT_image_dma_buf_import
>> if (check_extension(extensions, "EGL_EXT_image_dma_buf_import"))
>> gr->has_dmabuf_import = 1;
>> @@ -2881,6 +2890,12 @@ gl_renderer_create(struct weston_compositor *ec, EGLenum platform,
>> EGLint major, minor;
>> int supports = 0;
>>
>> + const EGLint pbuffer_attribs[] = {
>> + EGL_WIDTH, 10,
>> + EGL_HEIGHT, 10,
>> + EGL_NONE
>> + };
>
> This attrib list is fine. Could be static, too.
>
Is it necessary? If so, I can add it, no big deal.
>> +
>> if (platform) {
>> supports = gl_renderer_supports(
>> ec, platform_to_extension(platform));
>> @@ -2956,6 +2971,23 @@ gl_renderer_create(struct weston_compositor *ec, EGLenum platform,
>> if (gr->has_dmabuf_import)
>> gr->base.import_dmabuf = gl_renderer_import_dmabuf;
>>
>> + if (gr->has_surfaceless_context) {
>> + weston_log("EGL_KHR_surfaceless_context available\n");
>> + gr->dummy_surface = EGL_NO_SURFACE;
>> + } else {
>> + weston_log("EGL_KHR_surfaceless_context unavailable "
>> + "trying PbufferSurface\n");
>> + gr->dummy_surface = eglCreatePbufferSurface(gr->egl_display,
>> + gr->egl_config,
>> + pbuffer_attribs);
>
> You will need to find a new EGLConfig for this call, one that is
> indicated suitable with EGL_PBUFFER_BIT. IOW, you need the same set as
> gl_renderer_opaque_attribs except EGL_SURFACE_TYPE has to be
> EGL_PBUFFER_BIT. You can call egl_choose_config() with visual_id NULL
> to find one, or eglChooseConfig() directly to just get the first one.
> We don't really care what the config looks like, as long as we can make
> a pbuffer with it.
>
> The gr->egl_config is only suitable the kinds of outputs the backend
> will be using, and I don't think that ever includes EGL_PBUFFER_BIT.
> You can get lucky, of course. Some configs work for both windows and
> pbuffers.
Okay, that makes sense.
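Something along these lines, then (just a rough sketch; I'm calling
eglChooseConfig() directly and taking the first config it returns,
rather than reusing egl_choose_config()):

    static const EGLint pbuffer_config_attribs[] = {
        EGL_SURFACE_TYPE, EGL_PBUFFER_BIT,
        EGL_RED_SIZE, 1,
        EGL_GREEN_SIZE, 1,
        EGL_BLUE_SIZE, 1,
        EGL_RENDERABLE_TYPE, EGL_OPENGL_ES2_BIT,
        EGL_NONE
    };
    EGLConfig pbuffer_config;
    EGLint n;

    /* Any config that can back a pbuffer will do here. */
    if (!eglChooseConfig(gr->egl_display, pbuffer_config_attribs,
                         &pbuffer_config, 1, &n) || n < 1) {
        weston_log("failed to find an EGLConfig for the dummy pbuffer\n");
        goto fail_with_error;
    }

    gr->dummy_surface = eglCreatePbufferSurface(gr->egl_display,
                                                pbuffer_config,
                                                pbuffer_attribs);

I can move this into a small helper as you suggest below, so the
attribs stay local to it.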
>
>> + if (gr->dummy_surface == EGL_NO_SURFACE) {
>> + weston_log("failed to create dummy pbuffer\n");
>> + goto fail_terminate;
>
> Why not fail_with_error?
>
Good question :). I'm not sure. I'll change it to fail_with_error.
>> + }
>
> It might make sense to split this path out into a new function, as you
> will need a temporary EGLConfig and then the pbuffer_attribs can be
> local there too.
>
>> + }
>> +
>> + gl_renderer_setup(ec, gr->dummy_surface);
>> +
>
> This would be better just before the return statement, so gr is fully
> initialized.
>
Alright, makes sense.
> You forgot to check it succeeds.
>
... yeah. What a silly mistake. Thanks for pointing that out.
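With the call moved down next to the return statement, something
like (sketch):

    if (gl_renderer_setup(ec, gr->dummy_surface) < 0)
        goto fail_with_error;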
>> wl_display_add_shm_format(ec->wl_display, WL_SHM_FORMAT_RGB565);
>>
>> wl_signal_init(&gr->destroy_signal);
>
> Finally, you forgot to remove the gl_renderer_setup() call from
> gl_renderer_output_create(). That call is now dead, because
> gr->egl_context is guaranteed to be set.
>
Yes, that makes sense.
> If you want extra fun, you could check if dynamic renderer switching
> still works. ;-)
>
> (Start DRM-backend with --use-pixman, then pressing Mod+Shift+Space,
> 'w' should make Weston switch to the GL-renderer. If it doesn't work
> even before your patch, then don't worry about it.)
I'll try that when I get around to testing the DRM backend. Currently
all my work is based on the X11 backend, as I have no sane way to
unplug all outputs on the DRM backend.
>
> The overall logic in your patch looks good.
>
>
> Thanks,
> pq
>
Thank you for the review!
Armin