concerns over wayland and need for wayland app display to X Server

Bill Spitzak spitzak at gmail.com
Mon Feb 27 12:02:19 PST 2012


David Jackson wrote:

> How does a wayland app do that? How does a wayland app tell the GPU to 
> render a square 30x30 at coordinate 40x40? Keep in mind, there are many 
> different manufacturers of GPUs and each may have different bugs and 
> differences in the interface to the hardware, if not a completely 
> different hardware interface, and an application is not going to know 
> about every GPU that exists.

I hope this is reasonably accurate (a rough code sketch of the whole 
sequence follows the list):

1. The Wayland client app uses a special memory allocator to get a 
buffer that the GPU can render into (I believe modern GPUs are moving 
toward removing this memory distinction, as it is a pain to handle).

2. The Wayland app uses the GL API to create a "context" and sets it up 
to draw into the buffer.

3. The Wayland app makes the context "current". This is where the real 
trick gets done: at this point the OS/driver locks things so that no 
other task can use the GPU (or some subset of the machine's GPUs). The 
need to support this is the primary reason low-level graphics support 
is moving into the kernel, as it may be much more efficient to swap 
ownership of the GPU at the same time as the CPU, instead of relying 
on these contexts.

4. The Wayland app makes OpenGL calls, which linked libraries (often 
called "drivers"), running in the client's process, translate into 
actions on the GPU. These drivers are what hide the differences between 
GPUs. The app is free to mess with the GPU directly (though, just as 
with the CPU, some instructions will not work for non-privileged 
applications).

5. The client waits for the GPU to finish drawing, and may release the 
context (otherwise it is released when the client blocks and another 
process wants the GPU).

6. The client sends a request to the Wayland compositor telling it to 
use the new image.
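
To make this concrete, here is a rough sketch in C of the whole 
sequence, using the public Wayland and EGL client APIs. This is my own 
illustration rather than code from any real project: the shell surface 
setup needed to actually map the window on screen, and all error 
checking, are omitted, and it draws David's 30x30 square at (40,40) 
with a scissored clear just to keep the GL part trivial.

/* cc sketch.c -lwayland-client -lwayland-egl -lEGL -lGLESv2 */
#include <string.h>
#include <wayland-client.h>
#include <wayland-egl.h>
#include <EGL/egl.h>
#include <GLES2/gl2.h>

static struct wl_compositor *compositor;

static void global_added(void *data, struct wl_registry *reg,
                         uint32_t name, const char *iface, uint32_t ver)
{
    if (strcmp(iface, "wl_compositor") == 0)
        compositor = wl_registry_bind(reg, name,
                                      &wl_compositor_interface, 1);
}

static void global_removed(void *data, struct wl_registry *reg,
                           uint32_t name)
{
}

static const struct wl_registry_listener reg_listener = {
    global_added, global_removed
};

int main(void)
{
    struct wl_display *dpy = wl_display_connect(NULL);
    struct wl_registry *reg = wl_display_get_registry(dpy);
    wl_registry_add_listener(reg, &reg_listener, NULL);
    wl_display_roundtrip(dpy);   /* wait until globals are bound */

    struct wl_surface *surface = wl_compositor_create_surface(compositor);

    /* step 1: a buffer the GPU can render into, tied to the surface */
    struct wl_egl_window *win = wl_egl_window_create(surface, 640, 480);

    /* step 2: create a GL "context" set up to draw into that buffer */
    EGLDisplay edpy = eglGetDisplay((EGLNativeDisplayType)dpy);
    eglInitialize(edpy, NULL, NULL);
    EGLint cfg_attribs[] = {
        EGL_SURFACE_TYPE, EGL_WINDOW_BIT,
        EGL_RENDERABLE_TYPE, EGL_OPENGL_ES2_BIT,
        EGL_RED_SIZE, 8, EGL_GREEN_SIZE, 8, EGL_BLUE_SIZE, 8,
        EGL_NONE
    };
    EGLConfig cfg;
    EGLint n;
    eglChooseConfig(edpy, cfg_attribs, &cfg, 1, &n);
    EGLint ctx_attribs[] = { EGL_CONTEXT_CLIENT_VERSION, 2, EGL_NONE };
    EGLContext ctx = eglCreateContext(edpy, cfg, EGL_NO_CONTEXT,
                                      ctx_attribs);
    EGLSurface esurf = eglCreateWindowSurface(edpy, cfg,
                                              (EGLNativeWindowType)win,
                                              NULL);

    /* step 3: make the context current */
    eglMakeCurrent(edpy, esurf, esurf, ctx);

    /* step 4: plain GL calls; the driver loaded into this process
       translates them for whatever GPU happens to be installed */
    glClearColor(0.2f, 0.2f, 0.2f, 1.0f);          /* background */
    glClear(GL_COLOR_BUFFER_BIT);
    glEnable(GL_SCISSOR_TEST);
    glScissor(40, 40, 30, 30);                     /* 30x30 at (40,40) */
    glClearColor(1.0f, 0.0f, 0.0f, 1.0f);          /* red square */
    glClear(GL_COLOR_BUFFER_BIT);
    glDisable(GL_SCISSOR_TEST);

    /* steps 5 and 6: eglSwapBuffers waits for rendering as needed,
       then attaches and commits the finished buffer to the surface,
       which is the "use the new image" request to the compositor */
    eglSwapBuffers(edpy, esurf);
    wl_display_roundtrip(dpy);
    return 0;
}

Note that the application never touches the hardware itself here; 
everything GPU-specific lives inside libEGL/libGLESv2 (Mesa or a 
vendor driver) loaded into the client process.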

