Top-most windows
Deron Johnson
Deron.Johnson at Sun.COM
Tue Jan 10 09:37:30 PST 2006
Keith Packard wrote on 01/09/06 17:34:
> On Mon, 2006-01-09 at 13:35 -0800, Deron Johnson wrote:
>
> There's already a giant lock around the server rendering operations; the
> graphics hardware is single threaded, and contexts are expensive to
> swap.
This is true only if multiple clients are rendering to the same surface,
such as the visible portion of the frame buffer. In the current LG
prototype, OpenGL renders to the screen while X renders to backing
pixmaps. Whether these pixmaps are in system memory or in device
VRAM, they are distinct from the screen, so no locking occurs.
And even if VRAM pixmaps are used, and the graphics hardware is
single-threaded (which is not necessarily the case for multipipe
configurations), the driver is usually capable of thread switching
at a very fine granularity, thus providing a measure of load
balancing on the pipeline. So it's not really a "giant lock."
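(To be concrete about the kind of redirection involved, here is a
minimal Xlib sketch using the Composite extension. It is illustrative
only, with error handling omitted; it is not the LG prototype's
actual code.)

    #include <X11/Xlib.h>
    #include <X11/extensions/Xcomposite.h>

    /* Redirect a top-level window's output into an offscreen
     * backing pixmap. X rendering to the window then lands in
     * the pixmap rather than on the visible framebuffer, so it
     * never contends with OpenGL rendering to the screen. */
    Pixmap redirect_window(Display *dpy, Window w)
    {
        XCompositeRedirectWindow(dpy, w, CompositeRedirectManual);

        /* Name the backing pixmap so the compositor (here, an
         * OpenGL scene) can use its contents as a texture. */
        return XCompositeNameWindowPixmap(dpy, w);
    }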
Another problem with constantly raising the window is that you
are constantly thrashing, redoing DID preparation on all of the
windows. In addition, you've got normal X windows continually popping
above the compositing window, and they will be visible as artifacts.
The grab/raise/ungrab approach is a complete non-starter.
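(For reference, the sequence I am calling a non-starter amounts to
roughly the following Xlib sketch, repeated every time some other
window is raised:)

    #include <X11/Xlib.h>

    /* The rejected approach: freeze the server, force the
     * compositing window back to the top of the stack, then
     * release the server. Every round trip through this path
     * re-triggers DID preparation on all of the windows. */
    void force_to_top(Display *dpy, Window compositing_win)
    {
        XGrabServer(dpy);
        XRaiseWindow(dpy, compositing_win);
        XUngrabServer(dpy);
        XFlush(dpy);
    }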
> If the compositing manager really is spending 1/2 the frame time
> to repaint the screen, then you only get 1/2 for all other application
> drawing; there are no visible changes on the screen until painted by the
> compositing manager. I would hope that any reasonable compositing
> manager would consume far less of the system resources than this.
My point is that the portion of the frame consumed by the
compositing manager's rendering is completely user configurable. The
user controls the scene complexity and can also tune the desired
frame rate.
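(Roughly the kind of loop I have in mind; a minimal sketch in C,
where target_fps is the user-visible knob and render_scene() stands
in for the compositor's drawing. Both names are placeholders, not
actual LG code.)

    #include <time.h>

    extern void render_scene(void);  /* placeholder for the
                                        compositor's drawing */

    /* Illustrative frame loop: the user-chosen target_fps
     * bounds how much of each frame period the compositing
     * manager spends rendering; the remainder is slept away,
     * leaving the hardware to other clients. */
    void frame_loop(double target_fps)
    {
        long period_ns = (long)(1e9 / target_fps);

        for (;;) {
            struct timespec start, end, left;
            clock_gettime(CLOCK_MONOTONIC, &start);
            render_scene();
            clock_gettime(CLOCK_MONOTONIC, &end);

            long spent_ns = (end.tv_sec - start.tv_sec) * 1000000000L
                          + (end.tv_nsec - start.tv_nsec);
            if (spent_ns < period_ns) {
                left.tv_sec  = 0;
                left.tv_nsec = period_ns - spent_ns;
                nanosleep(&left, NULL);
            }
        }
    }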
> This doesn't rule out a pseudo root
> extension, which has already been proven to work with existing X
> applications just fine. The issue with the existing pseudo root
> extension is more that it requires the compositing manager be started
> before all other applications, and creates a separate X connection (:1)
> to distinguish between the 'real' and 'pseudo' root connection
> information blocks.
But it still doesn't solve the DID rendering problem. Even if you
have a double-buffered pseudo-root, its children (which are typically
single-buffered) will still be rendered with a different DID than
the pseudo-root parent. This results in weird-looking "holes"
on the screen.
> We already have a 'I rule the screen real estate mode', it's called
> OverrideRedirect rendering to the Root window. We're really trying to
> discover some GLX-specific kludge-around here, it's not a general
> problem with the current architecture. With this in mind, fixes to GLX
> should be considered more in-scope than general changes to the window
> system.
I think you mean "IncludeInferiors rendering," don't you?
Unfortunately, this only "rules the screen" spatially; it doesn't
rule it visually. The DIDs of the child windows are still painted,
resulting in visual artifacts.
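(What IncludeInferiors rendering buys you, in sketch form: a GC
whose drawing ignores child-window clipping. Again, illustrative
only.)

    #include <X11/Xlib.h>

    /* IncludeInferiors makes a GC on the root window ignore
     * child-window clipping, so drawing covers the whole
     * screen area spatially. It does nothing about the DIDs
     * of the children, which are still painted. */
    GC make_screen_gc(Display *dpy)
    {
        Window root = DefaultRootWindow(dpy);
        GC gc = XCreateGC(dpy, root, 0, NULL);
        XSetSubwindowMode(dpy, gc, IncludeInferiors);
        return gc;
    }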