[Mesa-dev] [PATCH] mesa/st: Don't modify the context draw/read buffers.

Jose Fonseca jfonseca at vmware.com
Fri Dec 9 07:25:18 PST 2011



----- Original Message -----
> On 12/09/2011 01:58 AM, Jose Fonseca wrote:
> > ----- Original Message -----
> >> On Fri, Dec 9, 2011 at 2:02 PM, Chia-I Wu<olv at lunarg.com>  wrote:
> >>> On Thu, Dec 8, 2011 at 10:00 PM,<jfonseca at vmware.com>  wrote:
> >>>> From: José Fonseca<jfonseca at vmware.com>
> >>>>
> >>>> It sets the wrong values (GL_XXX_LEFT instead of GL_XXX), and no
> >>>> other Mesa driver does this, given that Mesa sets the right
> >>>> draw/read buffers provided the Mesa visual has the doublebuffer
> >>>> flag filled in correctly, which is the case.
> >>> In EGL, when an EGLSurface is created, users can specify whether
> >>> the front or back buffer will be rendered to.  The function is
> >>> used to make a double-buffered context work with an EGLSurface
> >>> whose front buffer is supposed to be rendered to.  But I admit
> >>> that the function is hacky.
> >> and it may be wrong for GL.  It is OK for GLES because GLES does
> >> not have GL_DRAW_BUFFER, thus the value can be modified.
> >>
> >>> Since this is brought up, I did this experiment some time ago:
> >> It was done using GLX and GL.
> >>>   1. create a single-buffered drawable
> >>>   2. create a context with a GLX_DOUBLEBUFFER visual
> >>>   3. make the context and drawable current
> >>>   4. query GL_DRAW_BUFFER
> >>>
> >>> Mesa returned GL_BACK and nVidia's proprietary driver returned
> >>> GL_FRONT.  This difference, IMHO, comes from the fact that Mesa
> >>> uses the visual of the context to determine whether the context
> >>> is double-buffered or single-buffered, while nVidia uses the
> >>> visual of the drawable to make the decision (and at the time
> >>> when the context is first made current).
> >>>
> >>> What I want to argue here is that maybe there should be no
> >>> single-buffered or double-buffered contexts, only single-buffered
> >>> or double-buffered drawables.  Or, more precisely, the type of
> >>> the context should be determined by the type of the current
> >>> drawable.  I checked the GLX spec and it seemed that
> >>> GLX_DOUBLEBUFFER applies to drawables.  Since GL 3.0,
> >>> GL_DOUBLEBUFFER is also listed as one of the
> >>> framebuffer-dependent values.  That implies the state may change
> >>> when the current drawable changes.  So it is still correct
> >>> behavior for the drawable to determine the type of the context.
> >>>
> >>> I did not have a chance to look deeper into this due to lack of
> >>> time, so I may be terribly wrong here...
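The disagreement in the experiment above boils down to which visual the
initial GL_DRAW_BUFFER state is derived from.  Roughly (an illustrative
sketch only -- the struct and function names are made up, not actual
Mesa or nVidia code):

```c
#include <stdbool.h>

#define GL_FRONT 0x0404
#define GL_BACK  0x0405

struct visual { bool double_buffered; };

/* Mesa's policy: derive GL_DRAW_BUFFER from the context's visual. */
unsigned draw_buffer_from_context_visual(const struct visual *ctx_visual)
{
   return ctx_visual->double_buffered ? GL_BACK : GL_FRONT;
}

/* nVidia's policy: derive it from the drawable's visual, at the time
 * the context is first made current. */
unsigned draw_buffer_from_drawable_visual(const struct visual *draw_visual)
{
   return draw_visual->double_buffered ? GL_BACK : GL_FRONT;
}
```

With a double-buffered context visual and a single-buffered drawable
visual, the first function yields GL_BACK (what Mesa returned) and the
second GL_FRONT (what nVidia returned).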
> >
> > Thanks for the explanation.
> >
> > You make a valid point: intuitively, double-buffering is a property
> > of drawables and not contexts. But the specs seem to maintain the
> > view that it is also a property of contexts:
> >
> > - glDrawBuffer man page talks about "single-buffered contexts" and
> > "double-buffered contexts" and not drawables [1]
> >
> > - glXMakeCurrent says that "BadMatch is generated if drawable was
> > not created with the same X screen and visual as ctx" [2];
> > therefore mixing a single- with a double-buffered context as you
> > did is non-standard behavior -- glXMakeCurrent should have
> > returned BadMatch.  I know that on Windows this is enforced by the
> > MS OpenGL runtime, and when apps want to mix double- and
> > single-buffered rendering they need to use double-buffered pixel
> > formats and use GL_FRONT for single-buffered rendering.
> >
> > However, eglMakeCurrent is indeed a bit more lenient, as the spec
> > says "If draw or read are not compatible with ctx, then an
> > EGL_BAD_MATCH error is generated.", where the definition of
> > "compatible" is not really spelled out.
> >
> > But at the end of the day I feel that:
> > a) mixing single- and double-buffered drawables/visuals is
> > non-standard, probably seldom used, and not worth spending much
> > time on; and if we do, it is better to find a solution in Mesa
> > core for all drivers
> > b) GL_(FRONT|BACK)_LEFT is definitely wrong -- it should be either
> > GL_FRONT or GL_BACK -- especially when the drawable/context are
> > consistent in this regard, which is the common case
> >
> > Jose
> >
> > [1] http://www.opengl.org/sdk/docs/man/xhtml/glDrawBuffer.xml
> > [2]
> > http://www.talisman.org/opengl-1.1/Reference/glXMakeCurrent.html
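For reference on point (b): GL_FRONT_LEFT and GL_FRONT are distinct
enums, so an app that queries GL_DRAW_BUFFER and compares the result
against GL_FRONT would break.  A minimal sketch of the wrong vs.
intended mapping (illustrative only, not the actual st/mesa code):

```c
#include <stdbool.h>

/* Distinct enums from gl.h: GL_*_LEFT name individual color buffers,
 * while GL_FRONT/GL_BACK are what GL_DRAW_BUFFER queries are expected
 * to report for a non-stereo drawable. */
#define GL_FRONT_LEFT 0x0400
#define GL_BACK_LEFT  0x0402
#define GL_FRONT      0x0404
#define GL_BACK       0x0405

/* What the removed code effectively set... */
unsigned wrong_initial_draw_buffer(bool double_buffered)
{
   return double_buffered ? GL_BACK_LEFT : GL_FRONT_LEFT;
}

/* ...vs. what Mesa core sets when the visual's doublebuffer flag is
 * filled in correctly. */
unsigned right_initial_draw_buffer(bool double_buffered)
{
   return double_buffered ? GL_BACK : GL_FRONT;
}
```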
> 
> I think core Mesa should be as flexible as possible when binding
> contexts to drawables, in terms of visuals/configs.  Leave it up to
> GLX/WGL/EGL/etc to enforce rules like Jose quoted.  The
> check_compatible() function in context.c should probably be
> (re)moved.
> 
> I think one solution here is to make initialization of the context's
> GL_DRAW_BUFFER and GL_READ_BUFFER state the responsibility of the
> context creator.  Then we could initialize the state according to the
> API (use the context's double-buffer state for GL/GLX, use the
> surface's double-buffer state for EGL).  As a fallback, upon the
> first make-current we could check if the values are still zero and
> set them according to the buffer's db/sb type.  How does that sound?
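The scheme above could look roughly like this (hypothetical names, a
sketch of the idea only, not actual Mesa code):

```c
#include <stdbool.h>

#define BUFFER_UNINITIALIZED 0
#define GL_FRONT             0x0404
#define GL_BACK              0x0405

struct config { bool double_buffered; };

struct gl_context {
   unsigned draw_buffer;   /* GL_DRAW_BUFFER; 0 = not yet initialized */
   unsigned read_buffer;   /* GL_READ_BUFFER; 0 = not yet initialized */
};

/* GL/GLX path: initialize from the context's config at creation time.
 * EGL path: pass a null config and defer to the first make-current. */
void context_create(struct gl_context *ctx, const struct config *cfg)
{
   unsigned buf = BUFFER_UNINITIALIZED;
   if (cfg)
      buf = cfg->double_buffered ? GL_BACK : GL_FRONT;
   ctx->draw_buffer = buf;
   ctx->read_buffer = buf;
}

/* Fallback: on make-current, fill in any still-zero state from the
 * bound surface's db/sb type. */
void make_current(struct gl_context *ctx, const struct config *draw,
                  const struct config *read)
{
   if (ctx->draw_buffer == BUFFER_UNINITIALIZED)
      ctx->draw_buffer = draw->double_buffered ? GL_BACK : GL_FRONT;
   if (ctx->read_buffer == BUFFER_UNINITIALIZED)
      ctx->read_buffer = read->double_buffered ? GL_BACK : GL_FRONT;
}
```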

> I checked the code and it turns out that ctx->Visual is _only_ used
> for initializing GL_DRAW_BUFFER and GL_READ_BUFFER.  If we change the
> initialization as I described, I think that we could get rid of
> ctx->Visual completely.

Sounds like the most flexible solution to me.  But it would not be a
risk-free change, given that it touches all OSes and APIs, and it's
still not clear to me whether mixing single-/double-buffered
drawables/contexts is truly a case that real applications (or any
tests, for that matter) actually exercise, and therefore whether it's
worth bothering with at all.  I'd say go for it if it makes sense on
its own, e.g., if removing ctx->Visual would be a good clean-up in its
own right from your POV.  If not, I think we should simply refuse
mixed single/double-buffering on all GLX/WGL/EGL/etc. APIs.
 
Jose