Unresponsive applications

William Swanson swansontec at gmail.com
Thu Sep 22 16:55:56 PDT 2011


On Thu, Sep 22, 2011 at 3:22 PM, Bill Spitzak <spitzak at gmail.com> wrote:
> That copied the buffer, thus all the drawing was done by the compositor, so
> of course it works. What is needed is a method such that either both draw
> into the same buffer or the compositor composites the two images together.

I think we are misunderstanding each other. The pseudo-code
"copy_buffer" method performs the final compositing to screen; there
are no additional copies. Specifically, it alpha-blends the
client-provided shared-memory RGBA window buffer into the final RGB
desktop buffer that gets swapped directly to screen. Clients aren't
directly rendering to screen, so this blending is the whole point of
even having a compositor.
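
To make that concrete, here is roughly the per-pixel work I mean by
"copy_buffer" (plain C with straight alpha and made-up struct names,
just as a sketch; a real compositor would do the equivalent blend on
the GPU):

    /* Sketch of the "copy_buffer" blend step: alpha-blend a client
     * RGBA pixel over a desktop RGB pixel (straight alpha). The
     * struct names and layout are illustrative only. */

    #include <stdint.h>

    struct rgba { uint8_t r, g, b, a; };
    struct rgb  { uint8_t r, g, b; };

    void blend_pixel(struct rgb *dst, const struct rgba *src)
    {
        unsigned a = src->a; /* 0..255 */
        dst->r = (src->r * a + dst->r * (255 - a)) / 255;
        dst->g = (src->g * a + dst->g * (255 - a)) / 255;
        dst->b = (src->b * a + dst->b * (255 - a)) / 255;
    }

    /* Blend a whole client window buffer into the desktop buffer at
     * position (x, y). Clipping is omitted; strides are in pixels. */
    void copy_buffer(struct rgb *desktop, int desktop_stride,
                     const struct rgba *window, int win_w, int win_h,
                     int x, int y)
    {
        for (int j = 0; j < win_h; j++)
            for (int i = 0; i < win_w; i++)
                blend_pixel(&desktop[(y + j) * desktop_stride + x + i],
                            &window[j * win_w + i]);
    }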

In my pseudocode, the compositor draws the window border directly into
the final composited desktop without "storing" it anywhere. In other
words, the compositor might have to re-draw the window borders up to
60 times per second if that's how fast the desktop is changing. This
should be no more resource-intensive than normal compositing,
especially if the various elements like widgets and title text are
pre-cached as textures and simply blended together on the GPU. Compiz
and Emerald prove that compositors are quite capable of rendering
complex effects on every frame.
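
In pseudo-C, the per-frame work I am describing is roughly the
following. All the helper names here are hypothetical stand-ins for
whatever GL calls the compositor really makes:

    /* Hypothetical helpers standing in for the compositor's real GL
     * calls; these are placeholders, not actual API. */
    struct texture;
    void clear_desktop(void);
    void draw_texture(struct texture *tex, int x, int y, int w, int h);
    void swap_buffers(void);

    #define BORDER   4   /* border width, illustrative */
    #define TITLEBAR 24  /* titlebar height, illustrative */

    struct window {
        struct texture *contents;   /* latest client buffer, as a texture */
        struct texture *border;     /* pre-rendered border decoration */
        struct texture *title_text; /* pre-rendered title string */
        int x, y, w, h;
        struct window *next;
    };

    /* Per-frame compositing: the border is redrawn every frame, but
     * it is just two more cached textures blended on the GPU, so it
     * costs about the same as blending the contents themselves. */
    void composite_frame(struct window *windows)
    {
        clear_desktop();
        for (struct window *win = windows; win; win = win->next) {
            draw_texture(win->border, win->x - BORDER, win->y - TITLEBAR,
                         win->w + 2 * BORDER,
                         win->h + TITLEBAR + BORDER);
            draw_texture(win->title_text, win->x, win->y - TITLEBAR,
                         win->w, TITLEBAR);
            draw_texture(win->contents, win->x, win->y, win->w, win->h);
        }
        swap_buffers();
    }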

> There is additional latency in that the compositor cannot start drawing the
> border until after the client has drawn the contents. A client could draw
> the border and contents at the same time (for instance in multiple threads,
> but also just the setup and teardown of rendering contexts can be shared in
> a single-threaded client).

Yes, letting the client draw its own border would save the effort of
re-drawing the border every frame. But then again, re-drawing the
border every frame should be no more effort than normal compositing.
Maybe the additional texture loads in the border-drawing code add a
slight performance hit.

> The blurry pixels do not have to be in the buffer, therefore they can be
> generated by the compositor and the client can still draw the
> partially-transparent border. This is in fact how Windows works. On the
> modern machines the graphics processor can directly blur sections of the
> composited image before another image is composited atop.

I agree that this works, but see how the compositor is now involved in
doing border-esque stuff (blurring pixels)?
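
For what it's worth, that kind of region blur is also easy to express
compositor-side. Here is a crude CPU sketch of blurring a rectangle
of the composited desktop before the translucent border is blended
over it; a real compositor would do this in a GPU shader, and the
packed-RGB layout is just for illustration:

    #include <stdint.h>
    #include <stdlib.h>
    #include <string.h>

    /* Box-blur a w-by-h rectangle at (x, y) of a packed-RGB desktop
     * buffer in place. Error handling and sampling outside the
     * rectangle are omitted to keep the sketch short. */
    void blur_rect(uint8_t *rgb, int stride /* bytes per row */,
                   int x, int y, int w, int h, int radius)
    {
        uint8_t *copy = malloc((size_t)h * stride);
        memcpy(copy, rgb + y * stride, (size_t)h * stride);

        for (int j = 0; j < h; j++)
            for (int i = 0; i < w; i++)
                for (int c = 0; c < 3; c++) {
                    int sum = 0, count = 0;
                    for (int dj = -radius; dj <= radius; dj++)
                        for (int di = -radius; di <= radius; di++) {
                            int sj = j + dj, si = i + di;
                            if (sj < 0 || sj >= h || si < 0 || si >= w)
                                continue;
                            sum += copy[sj * stride + (x + si) * 3 + c];
                            count++;
                        }
                    rgb[(y + j) * stride + (x + i) * 3 + c] =
                        sum / count;
                }
        free(copy);
    }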

As I said in the last email, I still don't know whether or not letting
the compositor render the borders is a good idea. One way lets you
have client innovations like Google Chrome, and the other way allows
you to have compositor innovations like XMonad. The performance and
latency will be roughly the same either way, so there is no technical
argument for one vs the other.

For example, even if the compositor draws the window borders, a
non-responsive window still won't resize properly. The compositor sees
the drag and informs the client, "You should resize yourself." Until
the client responds with a new buffer, however, the compositor is
stuck drawing the window border in the same non-resized location
(otherwise it would tear). Thus, compositor-side borders behave no
better than client-side borders, and a lot of the arguments in favor
of compositor-side borders still make no sense.
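
To spell out the sequence I mean, in hypothetical pseudo-C (these are
placeholders, not real protocol calls):

    /* Why compositor-side borders do not help a stuck resize. */
    struct window;
    void send_configure(struct window *win, int w, int h);
    void draw_border(int x, int y, int w, int h);
    void draw_contents(struct window *win);

    struct window {
        int x, y, w, h;           /* size of the buffer we have */
        int pending_w, pending_h; /* size we asked the client for */
    };

    /* Pointer drag: ask the client to resize, change nothing yet. */
    void on_resize_drag(struct window *win, int new_w, int new_h)
    {
        win->pending_w = new_w;
        win->pending_h = new_h;
        send_configure(win, new_w, new_h); /* "resize yourself" */
    }

    /* Every frame, the border can only be drawn around the buffer
     * the client has actually attached; until a new one arrives,
     * that is the old, non-resized buffer (anything else would
     * tear). */
    void draw_window(struct window *win)
    {
        draw_border(win->x, win->y, win->w, win->h);
        draw_contents(win);
    }

    /* Only when the client attaches a new buffer can the border
     * finally move. */
    void on_new_buffer(struct window *win, int buf_w, int buf_h)
    {
        win->w = buf_w;
        win->h = buf_h;
    }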

-William Swanson

