[Mesa-dev] [PATCH] mesa/st: Don't modify the context draw/read buffers.
jfonseca at vmware.com
Fri Dec 9 00:58:48 PST 2011
----- Original Message -----
> On Fri, Dec 9, 2011 at 2:02 PM, Chia-I Wu <olv at lunarg.com> wrote:
> > On Thu, Dec 8, 2011 at 10:00 PM, <jfonseca at vmware.com> wrote:
> >> From: José Fonseca <jfonseca at vmware.com>
> >> It sets the wrong values (GL_XXX_LEFT instead of GL_XXX), and no
> >> other Mesa driver does this: Mesa core already sets the right
> >> draw/read buffers, provided the Mesa visual has its doublebuffer
> >> flag filled in correctly, which is the case.
> > In EGL, when an EGLSurface is created, users can specify whether
> > the
> > front or back buffer will be rendered to. The function is used to
> > make a double-buffered context work with an EGLSurface whose front
> > buffer is supposed to be rendered to. But I admit that the
> > function
> > is hacky.
> and may be wrong for GL. It is OK for GLES because GLES does not have
> GL_DRAW_BUFFER, and thus the value can be modified.
> > Since this is brought up, I did this experiment some time ago:
> It was done using GLX and GL.
> > 1. create a single-buffered drawable
> > 2. create a context with a GLX_DOUBLEBUFFER visual
> > 3. make the context and drawable current
> > 4. query GL_DRAW_BUFFER
> > Mesa returned GL_BACK and nVidia's proprietary driver returned
> > GL_FRONT. This difference, IMHO, comes from the fact that Mesa uses
> > the visual of the context to determine whether the context is
> > double-buffered or single-buffered, while nVidia uses the visual of
> > the drawable to make the decision (and at the time when the context
> > is first made current).
> > What I want to argue here is that maybe there should be no
> > single-buffered or double-buffered contexts, only single-buffered or
> > double-buffered drawables. Or, more precisely, the type of the
> > context should be determined by the type of the current drawable. I
> > checked the GLX spec and it seems that GLX_DOUBLEBUFFER applies to
> > drawables. Since GL 3.0, GL_DOUBLEBUFFER is also listed as one of
> > the framebuffer-dependent values, which implies the state may change
> > when the current drawable changes. So it is still correct behavior
> > for the drawable to determine the type of the context.
> > I did not have a chance to look deeper into this due to the lack of
> > time. So I may be terribly wrong here...
Thanks for the explanation.
You make a valid point: intuitively, double-buffering is a property of drawables and not contexts. But the specs seem to maintain the view that it is also a property of contexts:
- The glDrawBuffer man page talks about "single-buffered contexts" and "double-buffered contexts", not drawables.
- glXMakeCurrent says that "BadMatch is generated if drawable was not created with the same X screen and visual as ctx"; therefore mixing a single-buffered drawable with a double-buffered context as you did is non-standard behavior -- glXMakeCurrent should have returned BadMatch. I know that on Windows this is enforced by the MS OpenGL runtime: when apps want to mix double- and single-buffered rendering they need to use double-buffered pixel formats, and use GL_FRONT for the single-buffered case.
However, eglMakeCurrent is indeed a bit more lenient, as the spec says "If draw or read are not compatible with ctx, then an EGL_BAD_MATCH error is generated.", where the definition of "compatible" is not really spelled out.
But at the end of the day I feel that:
a) mixing single- and double-buffered drawables/visuals is non-standard, probably seldom used, and not worth spending much time on; if we do, it is better to find a solution in Mesa core for all drivers
b) GL_(FRONT|BACK)_LEFT is definitely wrong -- it should be either GL_FRONT or GL_BACK -- especially when the drawable and context are consistent in this regard, which is the common case