[Mesa-dev] [PATCH 3/4] dri2: Don't call the dri2 flush hook for swapbuffers unless we have a context.

Ian Romanick idr at freedesktop.org
Tue Feb 22 15:04:07 PST 2011


On 02/22/2011 01:55 PM, Eric Anholt wrote:
> On Tue, 22 Feb 2011 15:07:45 -0500, Kristian Høgsberg <krh at bitplanet.net> wrote:
>> On Tue, Feb 22, 2011 at 2:07 PM, Ian Romanick <idr at freedesktop.org> wrote:
>>> On 02/21/2011 02:41 PM, Eric Anholt wrote:
>>>> The driver only has one reasonable place to look for its context to
>>>> flush anything, which is the current context.  Don't bother it with
>>>> having to check.
>>> There are some odd interactions here, but I don't completely recall the
>>> details.  Kristian implemented this function in this way for a specific
>>> reason.  It was either to deal with glXSwapBuffers when no context was
>>> current or to deal with glXSwapBuffers on a drawable that isn't bound to
>>> a context.  Otherwise the flush method would have been associated with
>>> the context (instead of with the screen).
>> The background is that tiling/deferred rendering architectures would
>> like to ignore glFlush() and only really flush when the user hits
>> *SwapBuffers.  Since the driver doesn't actually see the swapbuffer
>> call (it goes to the X server, Wayland or the pageflip ioctl), there
>> has to be a way for the loader to tell the driver "ok, really flush
>> now".  That's what this entry point is for.
> The glFlush() skipping hack never seemed appropriate to me.  Sure,
> glFlush() is expensive (it's massively expensive for us too, just not

Flushes are a complete disaster for tiled and deferred renderers because
they prevent all of the optimizations those architectures provide.  Of
course, so do occlusion queries and conditional rendering.
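To make the cost concrete, here is a toy model (plain Python, not a real
driver; the class and the tile count are hypothetical) of why honoring
every glFlush hurts a tiled renderer: draws are binned per tile and
resolved to memory once at swap time, so a mid-frame flush forces an
extra resolve pass across every tile.

```python
class TiledRenderer:
    """Toy model: draws are binned per tile, resolved to memory at swap."""

    def __init__(self, tiles):
        self.tiles = tiles
        self.binned = []          # draws binned but not yet resolved
        self.resolves = 0         # counts tile writes to memory

    def draw(self, cmd):
        self.binned.append(cmd)

    def flush(self):
        # Honoring glFlush: every tile must be resolved to memory now,
        # then effectively reloaded when rendering continues.
        if self.binned:
            self.resolves += self.tiles
            self.binned.clear()

    def swap(self):
        self.flush()              # the one unavoidable resolve per frame

ideal = TiledRenderer(tiles=16)
ideal.draw("a"); ideal.draw("b")
ideal.swap()
print(ideal.resolves)             # 16: one resolve pass for the frame

eager = TiledRenderer(tiles=16)
eager.draw("a")
eager.flush()                     # application calls glFlush mid-frame
eager.draw("b")
eager.swap()
print(eager.resolves)             # 32: double the memory traffic
```

This is the memory-bandwidth round trip the glFlush-skipping hack is
trying to avoid by deferring the real flush to SwapBuffers.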

> quite as bad apparently), but the spec is awfully clear to my reading:
> "all commands that have previously been sent to the GL must complete in
> finite time."  So, for example, if I glFlush() then wait just shy of
> infinite time and remap a BO that was in use for that rendering, I'd
> better not block waiting for the rendering to complete.

This is one of those cases where I wish the GL spec would say what it
means instead of just describing the observable behavior.  glFlush really
only exists for frontbuffer rendering and indirect rendering.

In indirect rendering, the client-side library buffers a big pile of
commands, and they might sit in that buffer forever.  When you call
glFlush, those commands get sent to the server and stuff happens...
eventually.  This is especially important for multicontext.  If a
glTexParameterf command is still sitting in the buffer of context A, it
is impossible for context B to observe that state change.  Once the
command is flushed to the server, the other context can observe it.
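The multicontext visibility problem can be sketched with a small model
(plain Python, not the real GLX protocol or API; all names here are
invented for illustration): each context buffers commands client-side,
and only a flush publishes them to the shared server-side state.

```python
class Server:
    """Shared server-side state, as in indirect rendering."""

    def __init__(self):
        self.state = {}

class Context:
    """A client-side context with its own command buffer."""

    def __init__(self, server):
        self.server = server
        self.buffer = []           # buffered protocol commands

    def tex_parameter(self, name, value):
        # Buffered locally, like glTexParameterf over the wire would be.
        self.buffer.append((name, value))

    def flush(self):
        # glFlush: push buffered commands to the server.
        for name, value in self.buffer:
            self.server.state[name] = value
        self.buffer.clear()

server = Server()
ctx_a = Context(server)
ctx_b = Context(server)

ctx_a.tex_parameter("GL_TEXTURE_MIN_FILTER", "GL_LINEAR")
# Context B cannot observe the change while it sits in A's buffer...
print("GL_TEXTURE_MIN_FILTER" in server.state)   # False

ctx_a.flush()
# ...but once A flushes, the shared state is visible to B.
print(server.state["GL_TEXTURE_MIN_FILTER"])     # GL_LINEAR
```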

For direct rendering, there's little point.  You don't have the same
multicontext problem, and the only way you can observe that rendering
happened is via a query or swapbuffers (ignoring frontbuffer rendering).

> Could the glFlush() skipping affect real applications?  I'm not sure if
> ARB_sync or OQ users do glFlush()es after OQs they know they're going to
> use soon, but it's something I wouldn't be surprised by.  Right now in
> our driver we're flushing immediately after a sync or OQ end to get
> results sooner, but I sometimes wonder if that's the right thing to be
> doing.

I believe that all queries have an implicit flush.  I think that's the
only way it could work with indirect rendering.  Right?
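A sketch of that implicit-flush behavior (a toy model in plain Python,
not real GL; the class and method names are invented): if reading a
query result did not flush, the buffered commands the query depends on
could sit unexecuted forever and the result would never be available.

```python
class QueryContext:
    """Toy model: a context whose query reads implicitly flush."""

    def __init__(self):
        self.buffer = []          # buffered draw commands
        self.samples_passed = 0   # accumulated query counter

    def draw(self, samples):
        self.buffer.append(samples)

    def flush(self):
        # glFlush: execute everything buffered so far.
        self.samples_passed += sum(self.buffer)
        self.buffer.clear()

    def get_query_result(self):
        # Like reading a query object: must implicitly flush, otherwise
        # the buffered draws might never execute and the result would
        # never become available in finite time.
        self.flush()
        return self.samples_passed

ctx = QueryContext()
ctx.draw(10)
ctx.draw(32)
print(ctx.get_query_result())     # 42 - the read forced the flush
```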