How to create pixmap or pixmap surface in wayland?

Pekka Paalanen ppaalanen at gmail.com
Fri Feb 8 02:33:41 PST 2013


On Fri, 8 Feb 2013 02:16:39 +0000
"Wang, Quanxian" <quanxian.wang at intel.com> wrote:

> Sorry Pq. I confused you so much.
> 
> Just one requirement, not related to architecture. Imagine writing a simple program, for example glxgears. No compositor manager, no architecture, no window manager. There is just an X server, or a compositor server (Weston).
> 
> I want to call eglCreatePixmapSurface to create a pixmap surface. If the backend is an X server, I need to connect to the X server, create a pixmap, and then create the pixmap surface using the EGL interface provided by Mesa.
> If the backend is the Weston compositor, I need to connect to the Weston compositor and use EGL to create the pixmap and its surface. Before that, the context is created, just like in the code in simple-egl.c of Weston.
> ===
> -- function init_egl()
> ...
>         display->egl.dpy = eglGetDisplay(display->display);
>         assert(display->egl.dpy);
> 
>         ret = eglInitialize(display->egl.dpy, &major, &minor);
>         assert(ret == EGL_TRUE);
>         ret = eglBindAPI(EGL_OPENGL_ES_API);
>         assert(ret == EGL_TRUE);
> 
>         ret = eglChooseConfig(display->egl.dpy, config_attribs,
>                               &display->egl.conf, 1, &n);
>         assert(ret && n == 1); 
> 
>         display->egl.ctx = eglCreateContext(display->egl.dpy,
>                                             display->egl.conf,
>                                             EGL_NO_CONTEXT, context_attribs);
>         assert(display->egl.ctx);
> ...
> ===
> 
> The current issue is that pixmaps are not supported in the Wayland protocol or in Mesa, so I cannot do this the way I would on X.
> 
> Sorry that my title and description confused you; I have changed it to 'how to create pixmap or pixmap surface in wayland'.

Alright, thanks. Let's see.

First you need an EGL display. I would recommend asking EFL, or whatever
is creating the "main" Wayland connection, for its struct wl_display
pointer, and passing that to eglGetDisplay(). This way you are not
creating a new Wayland connection, and your program will behave as a
single Wayland client from the compositor's perspective. This will let
you take advantage of sub-surfaces[1] in the future, once we get them
into a usable form.
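
For illustration, a minimal sketch of that first step, following the
same pattern as init_egl() in simple-egl.c; get_main_wl_display() is a
hypothetical placeholder for however your toolkit exposes its existing
connection:

===
#include <wayland-client.h>
#include <wayland-egl.h>
#include <EGL/egl.h>
#include <assert.h>

/* Hypothetical accessor standing in for whatever EFL or your toolkit
 * provides to expose its existing struct wl_display; do not open a
 * second Wayland connection yourself. */
extern struct wl_display *get_main_wl_display(void);

static EGLDisplay
init_egl_display(void)
{
        struct wl_display *wl_dpy = get_main_wl_display();
        EGLDisplay dpy;
        EGLBoolean ret;
        EGLint major, minor;

        dpy = eglGetDisplay((EGLNativeDisplayType)wl_dpy);
        assert(dpy != EGL_NO_DISPLAY);

        ret = eglInitialize(dpy, &major, &minor);
        assert(ret == EGL_TRUE);

        return dpy;
}
===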

I also assume there is no way to create an EGLDisplay without any
window system connection. At least on a standard Linux system, I think
you need to be authenticated in DRM, so ignoring the window system would
not work.

Next you have to choose how to get a render target. There are a couple
of options:
A. use the surfaceless EGL extension, and use an FBO for the rendering
B. create a dummy wl_surface, use it as the EGLSurface, and use an FBO
   for the real rendering
C. create a wl_subsurface, and just use it for rendering

In case A, depending on your EGL stack, you might not have the
extension available. I think Mesa always has it, but others might not.
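
As a rough sketch, you could probe for it with something like the
following; the extension name EGL_KHR_surfaceless_context is my
assumption here, so check what your EGL stack actually advertises:

===
#include <EGL/egl.h>
#include <string.h>

/* Probe for the surfaceless extension; the exact name is an
 * assumption, verify against your EGL stack. With it, eglMakeCurrent()
 * accepts EGL_NO_SURFACE, and all rendering then has to target an
 * FBO. */
static int
have_surfaceless(EGLDisplay dpy)
{
        const char *exts = eglQueryString(dpy, EGL_EXTENSIONS);

        return exts && strstr(exts, "EGL_KHR_surfaceless_context");
}

/* later, with a context created as in init_egl():
 * eglMakeCurrent(dpy, EGL_NO_SURFACE, EGL_NO_SURFACE, ctx);
 */
===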

Both A and B require you to use FBOs. That may or may not be a problem,
depending on how and what kind of an API you offer to e.g. WebGL. If
the WebGL code does glBindFramebuffer(GL_FRAMEBUFFER, 0), can you
intercept it and change it to activate your FBO?
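
Conceptually, the interception could be as simple as the sketch below;
my_glBindFramebuffer and app_fbo are made-up names, and how you
actually route the WebGL code's GL calls through such a wrapper depends
entirely on your browser's architecture:

===
#include <GLES2/gl2.h>

/* The wrapper name and the global are made up for illustration; the
 * point is only that a bind of FBO 0 (the "default framebuffer") from
 * WebGL code gets redirected to the browser's own FBO. */
static GLuint app_fbo;

static void
my_glBindFramebuffer(GLenum target, GLuint framebuffer)
{
        if (target == GL_FRAMEBUFFER && framebuffer == 0)
                framebuffer = app_fbo;

        glBindFramebuffer(target, framebuffer);
}
===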

The immediate downside of case C is that it's not available just yet.
However, it would have considerable benefits (a rough sketch of what it
could look like follows the list):
- You get a real EGLSurface for rendering, no need for tricks with FBOs.
- No need to read back the rendering in the client, i.e. no
  glReadPixels or copying into the browser window image on the client
  side. The rendering will go directly to the compositor into the
  sub-surface, and the compositor will combine it with the rest of your
  browser graphics.
- The sub-surface can use a different renderer than the main surface,
  i.e. the sub-surface can be drawn with GL and the main surface in
  software, or vice versa.
- Even the compositor may avoid copying the sub-surface contents, as
  the sub-surface might be assigned to a hardware overlay that will
  scan it out as is, without compositing.
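
Purely as a sketch of what case C could look like once sub-surfaces
land, using the interface names from the proposal in [1]
(wl_subcompositor, wl_subsurface) and the usual wl_egl_window path;
none of this is usable today, and the names may still change:

===
#include <wayland-client.h>
#include <wayland-egl.h>
#include <EGL/egl.h>

/* Sketch only: wl_subcompositor/wl_subsurface follow the proposal in
 * [1] and are not available yet. The wl_compositor, wl_subcompositor,
 * parent wl_surface, EGLDisplay and EGLConfig are assumed to exist
 * already, shared with the main Wayland connection. */
static EGLSurface
create_subsurface_target(struct wl_compositor *compositor,
                         struct wl_subcompositor *subcompositor,
                         struct wl_surface *parent,
                         EGLDisplay dpy, EGLConfig conf,
                         int width, int height)
{
        struct wl_surface *surface;
        struct wl_subsurface *sub;
        struct wl_egl_window *native;

        surface = wl_compositor_create_surface(compositor);
        sub = wl_subcompositor_get_subsurface(subcompositor,
                                              surface, parent);
        wl_subsurface_set_position(sub, 0, 0);

        native = wl_egl_window_create(surface, width, height);

        return eglCreateWindowSurface(dpy, conf,
                                      (EGLNativeWindowType)native, NULL);
}
===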

With cases A and B, that is, with an FBO, you will render into a
temporary buffer, and then you probably want to use that rendered image
for something, most likely as part of a web page your browser is
rendering. So you either use glReadPixels (very slow) and draw the web
page in software, or you use the FBO render target as a texture and
draw the browser window in GL. After that, you send the image to the
compositor for display.

As you can see, the context and architecture information is quite
essential for a useful reply. You don't only want to create an
off-screen rendering target, you also want to use the result for
something. How you want to use the result matters a great deal,
especially for performance.


Thanks,
pq

[1] http://lists.freedesktop.org/archives/wayland-devel/2012-December/006844.html
