DRI2 Design Wiki Page
krh at bitplanet.net
Thu Oct 4 10:45:11 PDT 2007
On 10/4/07, Keith Packard <keithp at keithp.com> wrote:
> On Thu, 2007-10-04 at 01:27 -0400, Kristian Høgsberg wrote:
> > There is an issue with the design, though, related to how and when the
> > DRI driver discovers that the front buffer has changed (typically
> > resizing).
> Why would the rendering application even need to know the size of the
> front buffer? The swap should effectively be under the control of the
> front buffer owner, not the rendering application.
Ok, I phrased that wrong: what the DRI driver needs to look out for is
when the size of the rendering buffers changes. For a redirected
window, this does involve resizing the front buffer, but that's not
the case for a non-redirected window. The important part, though, is
that the drawable size changes, and before submitting rendering, the
DRI driver has to allocate new private back buffers that are big
enough to hold the new size.
> As far as figuring out how big to make the rendering buffers, that's
> outside the scope of DRM in my book. The GLX interface can watch for
> ConfigureNotify events on the associated window and resize the back
> buffers as appropriate.
I guess you're proposing libGL should transparently listen for
ConfigureNotify events? I don't see how that can work; there is no
guarantee that an OpenGL application handles events. For example,
glxgears just renders, without an event loop. If the rendering
extends outside the window bounds and you increase the window size,
the next frame should include those parts that were clipped by the
window in previous frames. X events aren't reliable for this kind of
notification.
And regardless, the issue isn't so much how to get the resize
notification from the X server to the direct rendering client, but
rather that the Gallium design doesn't expect these kinds of
interruptions while rendering a frame. So while libGL (or AIGLX) may
be able to notice that the window size changed, what I'm missing is a
mechanism to ask the DRI driver to reallocate its back buffers.
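For the sake of discussion, the reallocation step could look roughly like
this. A minimal sketch only: the `DriDrawable` struct and the
`dri_check_resize` helper are hypothetical names, not actual DRI driver
API; the point is just that the check has to run before each frame is
submitted:

```c
#include <stdlib.h>

/* Hypothetical per-drawable state kept by the DRI driver. */
typedef struct {
    int width, height;          /* size the back buffers were allocated at */
    unsigned char *back_buffer; /* private back buffer storage */
} DriDrawable;

/* Called before submitting rendering for a frame: if the drawable size
 * reported by the server no longer matches the allocated back buffers,
 * throw them away and allocate new ones at the new size. */
static int dri_check_resize(DriDrawable *d, int cur_width, int cur_height)
{
    if (d->width == cur_width && d->height == cur_height)
        return 0;   /* no resize; keep existing buffers */

    free(d->back_buffer);
    d->back_buffer = malloc((size_t)cur_width * cur_height * 4); /* 4 bpp */
    d->width = cur_width;
    d->height = cur_height;
    return 1;   /* buffers were reallocated */
}
```

The open question in the thread is who calls such a helper and when,
given that the driver may be in the middle of building a frame.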
> With Composite, we never resize pixmaps, we leave the old ones around
> and create new ones in the new size. When the last use of the old object
> is finished, the old pixmap is cleaned up. This means that applications
> don't have to synchronize their use of the pixmap to potential window
> size changes.
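The replace-rather-than-resize scheme described above can be sketched with
a refcounted pixmap object. Names here are illustrative, not actual X
server code:

```c
#include <stdlib.h>

/* Illustrative refcounted pixmap, in the spirit of how Composite
 * replaces rather than resizes window pixmaps. */
typedef struct {
    int width, height;
    int refcount;
} Pixmap_;

static Pixmap_ *pixmap_create(int w, int h)
{
    Pixmap_ *p = calloc(1, sizeof *p);
    p->width = w;
    p->height = h;
    p->refcount = 1;
    return p;
}

static void pixmap_unref(Pixmap_ *p)
{
    if (--p->refcount == 0)
        free(p);   /* old pixmap cleaned up when the last user is done */
}

/* On resize: create a new pixmap at the new size and drop the window's
 * reference to the old one. Clients still rendering to the old pixmap
 * keep it alive via their own references, so they never have to
 * synchronize with window size changes. */
static Pixmap_ *window_resize(Pixmap_ *old, int w, int h)
{
    Pixmap_ *new_p = pixmap_create(w, h);
    pixmap_unref(old);
    return new_p;
}
```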
Sure, I understand.
> Moving cliprects and buffer tracking into the kernel eliminates
> the need for an SAREA. The clip rects are only needed on swap
> buffers; this is done by the kernel, which always has the latest
> clip rects.
> You'll still need to lock the swap out while the X server is busy
> recomputing the clip lists and repainting the rest of the screen. Which
> means we'll need some kind of lock on the front buffer that the vsync
> thread grabs before looking at the clip lists.
Hmm, true. But that's at least isolated to the DRI module in the X
server. Nothing else in userspace will need to take this lock.
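The lock arrangement being described might look something like this. A
sketch under stated assumptions: `front_lock`, `server_update_clip_lists`,
and `swap_snapshot_clips` are made-up names, and the real clip list is a
rect array rather than a count; the shape is just the X server holding the
lock while clip lists are in flux, and the swap path taking it before
reading them:

```c
#include <pthread.h>

/* Illustrative front-buffer lock shared between the X server's DRI
 * module and the vsync/swap path. */
static pthread_mutex_t front_lock = PTHREAD_MUTEX_INITIALIZER;

static int clip_rect_count;   /* stand-in for the real clip list */

/* X server side: held while recomputing clip lists after a configure,
 * so swaps see either the old list or the new one, never a mix. */
void server_update_clip_lists(int new_count)
{
    pthread_mutex_lock(&front_lock);
    clip_rect_count = new_count;
    pthread_mutex_unlock(&front_lock);
}

/* Swap side: grab the lock before looking at the clip lists, snapshot
 * what is needed, then release before doing the actual blit. */
int swap_snapshot_clips(void)
{
    pthread_mutex_lock(&front_lock);
    int count = clip_rect_count;
    pthread_mutex_unlock(&front_lock);
    return count;
}
```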