[Piglit] [PATCH v2 10/13] piglit-dispatch: Switch to using piglit-dispatch instead of GLEW.
Brian Paul
brianp at vmware.com
Mon Mar 19 07:53:40 PDT 2012
On 03/18/2012 03:08 PM, Paul Berry wrote:
> On 16 March 2012 16:51, Jose Fonseca <jfonseca at vmware.com> wrote:
>
> > ----- Original Message -----
> > > On 16 March 2012 12:04, Jose Fonseca <jfonseca at vmware.com> wrote:
> > >
> > > > ----- Original Message -----
> > > > > On 03/12/2012 02:41 PM, Paul Berry wrote:
> > > > > > Previous patches set up the piglit-dispatch infrastructure
> > > > > > but did not use it.  This patch enables piglit-dispatch and
> > > > > > uses it instead of GLEW.
> > > > > >
> > > > > > No piglit regressions on Intel SandyBridge (Linux) or Mac OSX.
> > > > >
> > > > > Nice.  Only one more platform to conquer.
> > > > >
> > > > > >  #if defined(USE_OPENGL)
> > > > > > -#  include "glew.h"
> > > > > > -   /* Include the real headers too, in case GLEW misses something. */
> > > > > > +#  include "piglit-dispatch.h"
> > > > > > +   /* Include the real headers too, in case piglit-dispatch
> > > > > > +      misses something. */
> > > > > >  #  ifdef __APPLE__
> > > > > >  #    include <OpenGL/gl.h>
> > > > > >  #    include <OpenGL/glu.h>
> > > > >
> > > > > Shouldn't Apple's <OpenGL/gl.h> be removed too?
> > > > > I think we discussed this before, but I don't remember the
> > > > > conclusion.
> > > >
> > > > It's probably pointless if all GL definitions are done before.
> > > >
> > > > But note that Apple's glext.h is non standard:
> > > > https://github.com/apitrace/apitrace/blob/master/thirdparty/khronos/GL/glext.patch
> > > >
> > > > It's probably better to replicate this on piglit-dispatch too.
> > > >
> > > > Jose
> > >
> > > Interesting: on my mac, GLhandleARB is defined as a void *.  Your
> > > patch defines it as an unsigned long (which is equivalent to
> > > void * from an ABI perspective).
> >
> > This was deliberate, as handling both types would make the apitrace
> > code generation unnecessarily complex.
> >
> > > GL defines it as an unsigned int, which is certainly *not*
> > > equivalent on 64-bit systems.
> > >
> > > This throws a bit of a monkey wrench into things, since it breaks
> > > some of the aliases defined in the official gl.spec file.  For
> > > example, GetAttachedObjectsARB is marked as an alias for
> > > GetAttachedShaders, but this can't possibly work on mac if one of
> > > them takes a GLhandleARB * as its 4th argument and the other takes
> > > a GLuint * as its 4th argument (since the two types have different
> > > sizes).
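To make the mismatch concrete, here is a minimal sketch of the
declarations in play (typedefs paraphrased from Apple's gl.h and from
the standard glext.h; the prototypes are the standard ones):

  /* On Mac OS X, <OpenGL/gl.h> has (8 bytes on a 64-bit build): */
  typedef void *GLhandleARB;

  /* Everywhere else, glext.h has (4 bytes on all builds): */
  typedef unsigned int GLhandleARB;

  /* gl.spec marks these two entry points as aliases, but on a 64-bit
   * Mac they write differently sized elements through their 4th
   * argument, so they cannot share one function pointer there: */
  void glGetAttachedShaders(GLuint program, GLsizei maxCount,
                            GLsizei *count, GLuint *shaders);
  void glGetAttachedObjectsARB(GLhandleARB containerObj, GLsizei maxCount,
                               GLsizei *count, GLhandleARB *obj);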
> >
> > Never noticed that gl.spec had aliases.  I don't think that
> > information is very reliable.
>
> There were definitely some errors I had to correct (see patch 5/13:
> http://lists.freedesktop.org/archives/piglit/2012-March/002024.html).
> But I suspect that I have found most of them, based on the fact that
> this patch series introduces no regressions on Sandy Bridge.  If we
> do find some more errors, it will be trivial to fix them by adding or
> removing alias declarations.
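For illustration, an alias in gl.spec amounts to a fallback at resolve
time: the dispatcher may bind either entry point to the same slot.  A
hedged sketch of such a resolver (gl_version_at_least(),
extension_supported(), get_proc() and unsupported() are made-up
helpers for this example, not piglit-dispatch's real API):

  /* Sketch only: if the core function is unavailable, fall back to
   * the extension function that gl.spec marks as its alias. */
  static void *resolve_GetAttachedShaders(void)
  {
          if (gl_version_at_least(20))
                  return get_proc("glGetAttachedShaders");
          if (extension_supported("GL_ARB_shader_objects"))
                  return get_proc("glGetAttachedObjectsARB");
          unsupported("GetAttachedShaders");
          return NULL;
  }

A bad alias in the spec data then shows up as the wrong pointer coming
out of the fallback branch, which is why a regression-free test run is
decent evidence that the remaining aliases are correct.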
>
> > For example, gl.spec also marks the ARB_framebuffer_object
> > functions as aliases of the EXT_framebuffer_object functions, but
> > the semantics are different, and I heard reports of segfaults on
> > NVIDIA drivers due to inadvertently mixing up
> > ARB_framebuffer_object and EXT_framebuffer_object functions.
>
> Hmm, interesting.  I spent a couple of hours with the specs this
> morning, and it seems to me that there are two key semantic
> differences:
>
> (1) if you pass BindFramebuffer() an object name that was not
> allocated by GenFramebuffers(), it is supposed to generate an
> INVALID_OPERATION error, whereas if you pass BindFramebufferEXT() an
> object name that was not allocated by GenFramebuffersEXT(), it is
> supposed to go ahead and allocate that object name automatically (so
> there should be no error).  A similar situation applies to
> BindRenderbuffer() and BindRenderbufferEXT().  Judging by the text of
> issue (7) in the ARB_framebuffer_object spec, and the fact that
> BindFramebuffer() and BindRenderbuffer() have distinct GLX opcodes
> from BindFramebufferEXT() and BindRenderbufferEXT(), it seems pretty
> clear that the intended behaviour of an implementation that supports
> both extensions is to behave according to EXT_framebuffer_object when
> the EXT functions are called, and according to ARB_framebuffer_object
> when the non-EXT functions are called.
>
> AFAICT, gl.spec *does* seem to properly account for this semantic
> difference: it marks all of the functions defined in both extensions
> as aliases *except* for BindFramebuffer and BindRenderbuffer, and it
> has comments referring to the parts of the specs that explain the
> difference.
>
> Interestingly enough, Mesa *does not* handle the semantic difference
> correctly. If the driver supports both extensions, Mesa implements
> the ARB_framebuffer_object behaviour regardless of which version of
> the function was called, and that could very well break some client
> programs. I'll investigate what my nVidia system does to confirm that
> I'm understanding things correctly, and assuming I am, I'll file a bug
> and add a Piglit test that demonstrates the problem.
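Such a test could be as short as binding names that no Gen*() call
ever returned through both entry points and comparing the errors; a
rough sketch, assuming a current context that advertises both
extensions:

  GLenum err;

  /* EXT semantics: binding an unused name creates the object. */
  glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 42);
  err = glGetError();
  assert(err == GL_NO_ERROR);

  /* ARB semantics: names must come from GenFramebuffers(), so a
   * never-generated name must fail.  (Use a different name, since 42
   * was just created by the EXT bind above.) */
  glBindFramebuffer(GL_FRAMEBUFFER, 43);
  err = glGetError();
  assert(err == GL_INVALID_OPERATION);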
>
> (2) EXT_framebuffer_object allows framebuffer objects to be shared
> between contexts, whereas ARB_framebuffer_object prohibits it.  I
> don't have enough familiarity with how Mesa implements sharing
> between contexts to be able to tell offhand which spec Mesa follows,
> but it can't possibly follow both specs correctly, because it treats
> all of the functions as aliased between the two extensions.
Mesa allows sharing FBOs/Renderbuffers between contexts, per EXT_fbo.
We could implement both shared and non-shared buffers for the EXT and
ARB extensions by putting the ID->Buffer hash tables both in the
shared state and the per-context state. But then all the FBO-related
entrypoints would have to be updated as well. In particular, the
Gen-, Delete-, Bind- and Is-Frame/Renderbuffer functions would all
need to be split into EXT and ARB versions.
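In that scheme the entry points would just pick which table to
consult; a minimal sketch of the idea (all struct and function names
here are invented for the example and do not match Mesa's actual
code):

  /* EXT objects live in the share group (shared between contexts);
   * ARB objects live in the context (not shared). */
  struct gl_framebuffer *
  lookup_framebuffer(struct gl_context *ctx, GLuint name,
                     bool called_via_ext)
  {
          struct hash_table *table = called_via_ext
                  ? ctx->Shared->FrameBuffers  /* EXT_fbo: shared */
                  : ctx->FrameBuffers;         /* ARB_fbo: per-context */
          return hash_table_lookup(table, name);
  }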
I sometimes wonder if it would be best to simply advertise only one of
EXT_fbo or ARB_fbo at a time, and not both.
> I'm curious whether nVidia drivers follow both specs correctly. If
> they do, then one way they might do it is to use separate internal
> data structures to implement the EXT entry points and the ARB entry
> points. If they do that, that could explain the segfaults you've
> heard about when mixing between EXT and ARB entry points.
It would be great if you could investigate this too when you look at
the BindFramebuffer() behaviour.  That is, create two contexts that
share state and see if an FBO created in one context is visible to the
other context through the ARB functions.
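A sketch of that check under GLX (error handling omitted; assumes dpy,
visinfo and win are already set up):

  /* Create two contexts that share state: ctx2 shares with ctx1. */
  GLXContext ctx1 = glXCreateContext(dpy, visinfo, NULL, True);
  GLXContext ctx2 = glXCreateContext(dpy, visinfo, ctx1, True);

  glXMakeCurrent(dpy, win, ctx1);
  GLuint fbo;
  glGenFramebuffers(1, &fbo);
  glBindFramebuffer(GL_FRAMEBUFFER, fbo);  /* bind so it really exists */

  glXMakeCurrent(dpy, win, ctx2);
  /* ARB_framebuffer_object says FBOs are not shared, so this should
   * be GL_FALSE in the second context; GL_TRUE would mean the EXT
   * sharing rules are being applied to the ARB entry points. */
  GLboolean visible = glIsFramebuffer(fbo);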
> Are you aware of any other semantic differences between the two specs
> that would cause problems with aliasing, other than these two? It
> seems to me that these two are the important ones.
I think that's it.  There are other differences (like ARB_fbo not
requiring that all renderbuffers be the same size) but they're not
pertinent to the API aliasing issue.
-Brian