[PATCH 7/7] compositor-drm: Add support for EGLDevice+EGLOutput

Daniel Stone daniel at fooishbar.org
Tue Mar 22 11:20:47 UTC 2016


Hi,

On 21 March 2016 at 16:41, Miguel A. Vico <mvicomoya at nvidia.com> wrote:
> As previously stated, EGLDevice and EGLOutput will provide means
> to access native device objects and different portions of display
> control hardware respectively.
>
> Whenever EGL_EXT_device_drm extension is present, EGLDevice can
> be used to enumerate and access DRM KMS devices, and EGLOutputLayer
> to enumerate and access DRM KMS crtcs and planes.

EGLDevice isn't enumerating devices though: you still use udev to
enumerate DRM devices, and then walk the list of EGLDevices to find
the device you've already settled on. EGL_DRM_MASTER_FD_EXT then gets
used to inject an open connection back in to replace the device you've
already opened. Messy. :\
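The walk Daniel describes boils down to a string match against EGL_DRM_DEVICE_FILE_EXT for each device returned by eglQueryDevicesEXT(). A minimal sketch of just the matching step, with plain strings standing in for the results of eglQueryDeviceStringEXT() (the real calls need an EGL implementation, and the helper name is made up here):

```c
#include <string.h>

/* Hypothetical helper: given the device file reported for each
 * EGLDevice (via eglQueryDeviceStringEXT(dev, EGL_DRM_DEVICE_FILE_EXT)
 * in the real code), return the index of the device matching the DRM
 * node udev already chose, or -1 if none matches. */
int
find_egldevice_for_node(const char *device_files[], int n_devices,
                        const char *drm_node)
{
	int i;

	for (i = 0; i < n_devices; i++) {
		if (device_files[i] &&
		    strcmp(device_files[i], drm_node) == 0)
			return i;
	}

	return -1;
}
```

In the patch this lookup runs after udev enumeration has already settled on a node, which is why it reads as messy: the device is effectively chosen twice.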

> By using EGLStreams and attaching an EGLOutputLayer consumer
> (representing a DRM KMS crtc or plane) to it, compositor-drm can
> produce final composition frames and present them on a DRM device.

It's gl-renderer which produces the frames, really.

> This change adds required logic to support presentation through
> EGLDevice+EGLOutput+EGLStream. Whether GBM or EGLDevice should be
> used can be controlled by --use-egldevice backend argument.

Should this not be automatic, i.e. use gbm if platform_gbm is present
and EGLDevice if gbm isn't present but the other extensions are?
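The suggested policy could be a small pure function over the two extension checks; in practice the booleans would come from scanning the client extension string returned by eglQueryString(EGL_NO_DISPLAY, EGL_EXTENSIONS) for EGL_MESA_platform_gbm and EGL_EXT_platform_device. The enum and function names below are illustrative, not from the patch:

```c
#include <stdbool.h>

enum drm_egl_backend {
	BACKEND_NONE,
	BACKEND_GBM,
	BACKEND_EGLDEVICE,
};

/* Hypothetical selection policy: prefer gbm when its platform is
 * available, fall back to EGLDevice when it is not, and fail when
 * neither is usable. */
enum drm_egl_backend
choose_egl_backend(bool have_gbm_platform, bool have_device_platform)
{
	if (have_gbm_platform)
		return BACKEND_GBM;
	if (have_device_platform)
		return BACKEND_EGLDEVICE;
	return BACKEND_NONE;
}
```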

> @@ -531,17 +540,21 @@ drm_output_render_gl(struct drm_output *output, pixman_region32_t *damage)
>         output->base.compositor->renderer->repaint_output(&output->base,
>                                                           damage);
>
> -       bo = gbm_surface_lock_front_buffer(output->gbm_surface);
> -       if (!bo) {
> -               weston_log("failed to lock front buffer: %m\n");
> -               return;
> -       }
> +       if (b->use_egldevice)
> +               output->next = output->dumb[0];

Huh?

I'm assuming you use two (never actually used) dumb BOs as
output->{current,next} to fit in with compositor-drm's bookkeeping,
whilst Streams handles the actual buffer management for you, right?
That seems like quite a hack.

>  /* When initializing EGL, if the preferred buffer format isn't available
>   * we may be able to substitute an ARGB format for an XRGB one.

This seems like it doesn't apply any more now that the surrounding logic has changed.
> @@ -1594,38 +1642,59 @@ fallback_format_for(uint32_t format)
>  static int
>  drm_backend_create_gl_renderer(struct drm_backend *b)
>  {
> -       EGLint format[3] = {
> +       EGLint platform_attribs[] = {
> +               EGL_DRM_MASTER_FD_EXT, b->drm.fd,
> +               EGL_NONE
> +       };

Seems like this should be called device_platform_attribs, since
they're only used in the EGLDevice case, not gbm.

> +       EGLint format[] = {
>                 b->gbm_format,
>                 fallback_format_for(b->gbm_format),
> -               0,
> +               0

Unrelated changes to the format array here.

> +       if (b->use_egldevice) {
> +               b->egldevice = create_egldevice(b->drm.filename);
> +               if (b->egldevice == EGL_NO_DEVICE_EXT)
> +                       return -1;
> +       } else {
> +               b->gbm = create_gbm_device(b->drm.fd);
> +               if (!b->gbm)
> +                       return -1;
> +       }
> +
>         if (drm_backend_create_gl_renderer(b) < 0) {
> -               gbm_device_destroy(b->gbm);
> +               if (b->gbm)
> +                       gbm_device_destroy(b->gbm);

Shouldn't the egldevice be destroyed here too?
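The error path above releases the gbm device but not whatever the EGLDevice path acquired. One way to keep the teardown symmetric is to release whichever resource was actually created; sketched here as a toy model where boolean flags stand in for gbm_device_destroy() and the (assumed) release the EGLDevice path would need:

```c
#include <stdbool.h>

/* Toy model of the init/error path: whichever resource was created
 * must be released again when renderer creation fails. */
struct backend_model {
	bool use_egldevice;
	bool have_gbm;
	bool have_egldevice;
	bool gbm_destroyed;
	bool device_released;
};

int
init_renderer_model(struct backend_model *b, bool renderer_ok)
{
	if (b->use_egldevice)
		b->have_egldevice = true;
	else
		b->have_gbm = true;

	if (!renderer_ok) {
		/* Release whichever path we took, not just gbm. */
		if (b->have_gbm)
			b->gbm_destroyed = true;
		if (b->have_egldevice)
			b->device_released = true;
		return -1;
	}

	return 0;
}
```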

> +       if (b->use_egldevice) {
> +               int w = output->base.current_mode->width;
> +               int h = output->base.current_mode->height;
>
> -       if (format[1])
> -               n_formats = 2;
> -       if (gl_renderer->output_window_create(&output->base,
> -                                      (EGLNativeWindowType)output->gbm_surface,
> -                                      output->gbm_surface,
> -                                      gl_renderer->opaque_attribs,
> -                                      format,
> -                                      n_formats) < 0) {
> -               weston_log("failed to create gl renderer output state\n");
> -               gbm_surface_destroy(output->gbm_surface);
> -               return -1;
> -       }
> +               /* Create a dumb fb for modesetting */
> +               output->dumb[0] = drm_fb_create_dumb(b, w, h);
> +               if (!output->dumb[0]) {
> +                       weston_log("failed to create dumb framebuffer\n");
> +                       return -1;
> +               }

Seems I was correct earlier - if it's being passed to drmModeSetCrtc
though, this will mean that we flash up some completely unknown
content when starting. Not pretty.
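One way to avoid scanning out garbage would be to zero the dumb buffer before handing it to drmModeSetCrtc(). In the real code the BO would be mapped via DRM_IOCTL_MODE_MAP_DUMB plus mmap(); a plain pointer stands in for that mapping here, and the function name is made up:

```c
#include <stdint.h>
#include <string.h>

/* Zero an XRGB8888 dumb buffer so the first modeset scans out black
 * instead of whatever the allocator left behind.  'map' stands in for
 * the pointer returned by mmap()ing the dumb BO; 'stride' is the
 * per-row pitch in bytes reported by DRM_IOCTL_MODE_CREATE_DUMB. */
void
clear_dumb_fb(uint8_t *map, int height, int stride)
{
	memset(map, 0, (size_t)height * (size_t)stride);
}
```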

> +       } else {
> +               EGLint format[2] = {
> +                       output->gbm_format,
> +                       fallback_format_for(output->gbm_format),
> +               };
> +               int i, flags, n_formats = 1;
> +
> +               output->gbm_surface = gbm_surface_create(b->gbm,
> +                                         output->base.current_mode->width,
> +                                         output->base.current_mode->height,
> +                                         format[0],
> +                                         GBM_BO_USE_SCANOUT |
> +                                         GBM_BO_USE_RENDERING);
> +               if (!output->gbm_surface) {
> +                       weston_log("failed to create gbm surface\n");
> +                       return -1;
> +               }
>
> -               output->gbm_cursor_bo[i] =
> -                       gbm_bo_create(b->gbm, b->cursor_width, b->cursor_height,
> -                               GBM_FORMAT_ARGB8888, flags);
> -       }
> +               if (format[1])
> +                       n_formats = 2;

This needs to be rebased; 6d556374 changed the format-selection logic.

Cheers,
Daniel

