[Piglit] [PATCH v2 10/13] piglit-dispatch: Switch to using piglit-dispatch instead of GLEW.
Jose Fonseca
jfonseca at vmware.com
Tue Mar 20 07:11:41 PDT 2012
----- Original Message -----
> On 16 March 2012 16:51, Jose Fonseca <jfonseca at vmware.com> wrote:
> > ----- Original Message -----
> > > On 16 March 2012 12:04, Jose Fonseca <jfonseca at vmware.com> wrote:
> > > > ----- Original Message -----
> > > > > On 03/12/2012 02:41 PM, Paul Berry wrote:
> > > > > > Previous patches set up the piglit-dispatch infrastructure but
> > > > > > did not use it. This patch enables piglit-dispatch and uses it
> > > > > > instead of GLEW.
> > > > > >
> > > > > > No piglit regressions on Intel SandyBridge (Linux) or Mac OSX.
> > > > >
> > > > > Nice. Only one more platform to conquer.
> > > > >
> > > > > > #if defined(USE_OPENGL)
> > > > > > -# include "glew.h"
> > > > > > - /* Include the real headers too, in case GLEW misses something. */
> > > > > > +# include "piglit-dispatch.h"
> > > > > > + /* Include the real headers too, in case piglit-dispatch misses something. */
> > > > > > # ifdef __APPLE__
> > > > > > # include <OpenGL/gl.h>
> > > > > > # include <OpenGL/glu.h>
> > > > >
> > > > > Shouldn't Apple's <OpenGL/gl.h> be removed too?
> > > > >
> > > > > I think we discussed this before, but I don't remember the
> > > > > conclusion.
> > > >
> > > > It's probably pointless if all GL definitions are done before.
> > > >
> > > > But note that Apple's glext.h is non-standard:
> > > >
> > > > https://github.com/apitrace/apitrace/blob/master/thirdparty/khronos/GL/glext.patch
> > > >
> > > > It's probably better to replicate this in piglit-dispatch too.
> > > >
> > > > Jose
> > >
> > > Interesting: on my Mac, GLhandleARB is defined as a void *. Your
> > > patch defines it as an unsigned long (which is equivalent to void *
> > > from an ABI perspective).
> >
> > This was deliberate, as handling both "void *" and integer handle
> > types would make the apitrace code generation unnecessarily complex.
> >
> > > GL defines it as an unsigned int, which is certainly *not*
> > > equivalent on 64-bit systems.
> > >
> > > This throws a bit of a monkey wrench into things, since it breaks
> > > some of the aliases defined in the official gl.spec file. For
> > > example, GetAttachedObjectsARB is marked as an alias for
> > > GetAttachedShaders, but this can't possibly work on Mac if one of
> > > them takes a GLhandleARB * as its 4th argument and the other takes
> > > a GLuint * as its 4th argument (since the two types have different
> > > sizes).
> >
> > Never noticed that gl.spec had aliases. I don't think that
> > information is very reliable.
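To make the clash concrete, here are the two definitions side by side
(typedefs as they appear in the respective headers; prototypes per GL
2.0 and ARB_shader_objects):

/* Khronos glext.h (what gl.spec assumes): */
typedef unsigned int GLhandleARB;    /* 32 bits everywhere */

/* Apple's <OpenGL/gl.h>: */
typedef void *GLhandleARB;           /* 64 bits on 64-bit OS X */

/* Hence these two, which gl.spec marks as aliases, disagree in their
 * 4th argument (GLuint * vs. GLhandleARB *) on 64-bit OS X: */
void glGetAttachedShaders(GLuint program, GLsizei maxCount,
                          GLsizei *count, GLuint *shaders);
void glGetAttachedObjectsARB(GLhandleARB containerObj, GLsizei maxCount,
                             GLsizei *count, GLhandleARB *obj);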
>
> There were definitely some errors I had to correct (see patch 5/13:
> http://lists.freedesktop.org/archives/piglit/2012-March/002024.html
> ). But I suspect that I have found most of them, based on the fact
> that this patch series introduces no regressions on Sandy Bridge. If
> we do find some more errors, it will be trivial to fix them by
> adding or removing alias declarations.
> > For example, gl.spec also marks ARB_framebuffer_object functions
> > as aliases of EXT_framebuffer_object functions, but the semantics
> > are different, and I heard reports of segfaults on NVIDIA drivers
> > due to inadvertently mixing up ARB_framebuffer_object and
> > EXT_framebuffer_object functions.
>
> Hmm, interesting. I spent a couple of hours with the specs this
> morning, and it seems to me that there are two key semantic
> differences:
> (1) if you pass BindFramebuffer() an object name that was not
> allocated by GenFramebuffers(), it is supposed to generate an
> INVALID_OPERATION error, whereas if you pass BindFramebufferEXT() an
> object name that was not allocated by GenFramebuffersEXT(), it is
> supposed to go ahead and allocate that object name automatically (so
> there should be no error). A similar situation applies to
> BindRenderbuffer() and BindRenderbufferEXT(). Judging by the text of
> issue (7) in the ARB_framebuffer_object spec, and the fact that
> BindFramebuffer() and BindRenderbuffer() have distinct GLX opcodes
> from BindFramebufferEXT() and BindRenderbufferEXT(), it seems pretty
> clear that the intended behaviour of an implementation that supports
> both extensions is to behave according to EXT_framebuffer_object
> when the EXT functions are called, and behave according to
> ARB_framebuffer_object when the non-EXT functions are called.
> AFAICT, gl.spec *does* seem to properly account for this semantic
> difference: it marks all of the functions defined in both extensions
> as aliases *except* for BindFramebuffer and BindRenderbuffer, and it
> has comments referring to the parts of the specs that explain the
> difference.
> Interestingly enough, Mesa *does not* handle the semantic difference
> correctly. If the driver supports both extensions, Mesa implements
> the ARB_framebuffer_object behaviour regardless of which version of
> the function was called, and that could very well break some client
> programs. I'll investigate what my nVidia system does to confirm
> that I'm understanding things correctly, and assuming I am, I'll
> file a bug and add a Piglit test that demonstrates the problem.
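For concreteness, a minimal sketch of such a test (illustrative only,
not the actual test; it assumes a current context exposing both
extensions and the usual GL headers):

static GLboolean bind_semantics_differ(void)
{
	GLuint name = 1234;	/* deliberately never passed to Gen*() */
	GLenum err;

	/* EXT_framebuffer_object: binding an ungenerated name must
	 * create the object, with no error. */
	glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, name);
	err = glGetError();
	if (err != GL_NO_ERROR)
		return GL_FALSE;
	glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);
	glDeleteFramebuffersEXT(1, &name);

	/* ARB_framebuffer_object / GL 3.0: the same bind must raise
	 * INVALID_OPERATION, because the name (deleted above) was
	 * never returned by GenFramebuffers. */
	glBindFramebuffer(GL_FRAMEBUFFER, name);
	err = glGetError();
	return err == GL_INVALID_OPERATION ? GL_TRUE : GL_FALSE;
}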
> (2) EXT_framebuffer_object allows framebuffer objects to be shared
> between contexts, whereas ARB_framebuffer_object prohibits it. I
> don't have enough familiarity with how Mesa implements sharing
> between contexts to be able to tell off hand which spec Mesa
> follows, but it can't possibly follow both specs correctly, because
> it treats all of the functions as aliased between the two
> extensions.
> I'm curious whether nVidia drivers follow both specs correctly. If
> they do, then one way they might do it is to use separate internal
> data structures to implement the EXT entry points and the ARB entry
> points. If they do that, that could explain the segfaults you've
> heard about when mixing between EXT and ARB entry points.
> Are you aware of any other semantic differences between the two specs
> that would cause problems with aliasing, other than these two? It
> seems to me that these two are the important ones.
No, I'm not aware of other differences.
I got the feeling NVIDIA is using separate internal data structures. An advantage of ARB fbos is that, because they can't be shared by multiple contexts, they don't need multi-thread protection, hence are more lightweight.
> > I think handling aliases under the hood causes more harm than good.
> > It's better that apps are well aware of what extensions they are
> > using.
>
> Can you help me enumerate the pros and cons? Here's what I can think
> of at the moment:
> PRO: handling aliases in piglit-dispatch saves us from having to
> write code that handles aliases manually (as we do in
> piglit-shader-gl.c and piglit-transform-feedback.c, as well as,
> IIRC, several one-off places in individual Piglit tests and in
> Glean). Autogenerating this code as part of piglit-dispatch reduces
> the risk of errors.
> PRO: handling aliases in piglit-dispatch guarantees that *all* piglit
> tests deal consistently with aliases, not just the ones where we
> remember to think about aliasing. For instance, when I started
> working on transform feedback late last year, I discovered that all
> of the transform feedback tests were written in terms of EXT
> functions, which made them unrunnable on implementations that
> supported GL 3.0 but not EXT_transform_feedback. I had to write
> piglit-transform-feedback.c in order for the transform feedback
> tests to run at all on my nVidia system (which doesn't advertise
> EXT_transform_feedback in its extension string due to a driver bug).
> PRO: handling aliases in piglit-dispatch allows us to use
> conventional function names instead of piglit-specific wrappers
> (e.g. we can write glShaderSource() rather than
> piglit_ShaderSource()). This makes tests easier to comprehend,
> especially for those familiar with GL programming but not with
> Piglit's internals (see the dispatch sketch after this list).
> CON: if an implementation provides two separate functions that Piglit
> thinks are aliases, then Piglit tests will only exercise one of the
> functions and not the other.
> CON: if an implementation has problems with mixed use of functions
> from multiple extensions or GL versions (as you've heard of
> happening on nVidia with framebuffer objects), then Piglit tests
> won't be able to avoid those problems by calling functions from just
> one extension or the other.
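For concreteness, the kind of auto-generated stub described in the
PROs above might look roughly like this. It's a sketch only: get_proc(),
have_extension(), gl_version() and unsupported() are placeholder
helpers, not piglit-dispatch's actual API.

typedef void (*pfn_GetAttachedShaders)(GLuint, GLsizei, GLsizei *, GLuint *);

static void stub_GetAttachedShaders(GLuint program, GLsizei max_count,
				    GLsizei *count, GLuint *shaders);

/* The dispatch pointer starts out aimed at the resolver stub; the
 * first call through it swaps in the real entry point. */
static pfn_GetAttachedShaders GetAttachedShaders_ptr = stub_GetAttachedShaders;

static void stub_GetAttachedShaders(GLuint program, GLsizei max_count,
				    GLsizei *count, GLuint *shaders)
{
	if (gl_version() >= 20)
		GetAttachedShaders_ptr = (pfn_GetAttachedShaders)
			get_proc("glGetAttachedShaders");
	else if (have_extension("GL_ARB_shader_objects"))
		/* Only ABI-safe where GLhandleARB is 32 bits; see the
		 * OS X discussion above. */
		GetAttachedShaders_ptr = (pfn_GetAttachedShaders)
			get_proc("glGetAttachedObjectsARB");
	else
		unsupported("GetAttachedShaders");

	GetAttachedShaders_ptr(program, max_count, count, shaders);
}

/* Tests then just call the conventional name: */
#define glGetAttachedShaders (*GetAttachedShaders_ptr)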
IMO the biggest CON is that, because specs and implementations vary in subtle ways, aliases cannot be trusted to be truly interchangeable. So a mechanism that fully abstracts aliases (a) introduces a hidden variability factor, and (b) makes it easy to forget to check for the differences.
That said, I agree with you: we don't need to throw the baby out with the bathwater. We can still have code generation for handling aliases while maintaining close control over which extension is used, as some of your ideas achieve.
I'm happy to let you decide what you think is best here.
> I think we could find creative ways to mitigate the downsides, and
> still get the benefits, by extending piglit-dispatch slightly. For
> example, we could modify piglit-dispatch so that when resolving an
> alias, it favours function names defined in extensions that have
> been previously passed to piglit_require_extension() (or function
> names defined in GL versions that have been previously passed to
> piglit_require_gl_version()). This would address the problems we've
> talked about with framebuffer_object because any test designed to
> specifically probe the behaviour of EXT_framebuffer_object would
> start with a call to
> piglit_require_extension("GL_EXT_framebuffer_object"), and then the
> rest of the test would consistently use the EXT function names. Note
> that we would have to establish a rule for how to resolve
> ambiguities if multiple aliases for a function match a previous call
> to piglit_require_* (perhaps we would go with the most recent
> matching call to a piglit_require_* function).
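A sketch of the bookkeeping that first idea would need (every name
here is hypothetical, not piglit's actual code):

#include <string.h>

#define MAX_REQUIRED 16
static const char *required_ext[MAX_REQUIRED];
static int num_required;

/* piglit_require_extension() would additionally record what the
 * test asked for: */
static void record_requirement(const char *ext)
{
	if (num_required < MAX_REQUIRED)
		required_ext[num_required++] = ext;
}

struct alias {
	const char *gl_name;	/* e.g. "glBindFramebufferEXT" */
	const char *provider;	/* e.g. "GL_EXT_framebuffer_object" */
};

/* When resolving an aliased entry point, prefer the alias whose
 * providing extension was required most recently; otherwise fall
 * back to the first available alias. */
static const struct alias *pick_alias(const struct alias *aliases, int n)
{
	int i, j;

	for (i = num_required - 1; i >= 0; i--)
		for (j = 0; j < n; j++)
			if (strcmp(aliases[j].provider, required_ext[i]) == 0)
				return &aliases[j];
	return &aliases[0];
}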
> A variation on that idea would be to modify piglit-dispatch so that
> it generates an error when trying to resolve a function that doesn't
> match a previous call to a piglit_require_* function. This would
> have the effect of forcing test writers to be explicit about what
> extension or GL version they are trying to test, which you might
> consider an advantage or a disadvantage depending on your point of
> view :). For those considering it a disadvantage, note that if you
> ever forget to call the appropriate piglit_require_* function, you
> would get very quick feedback in the form of a friendly error
> message the first time you ran the test, *regardless of whether the
> system you are running the test on supports the given extension/GL
> version*. This would be a lot better than the current situation,
> where you only notice a problem if you happen to run the test on a
> system that doesn't provide the extension or GL version you forgot
> to require.
> Another alternative would be to have some global parameters that
> determine, at run time, which sets of function names are favoured
> when resolving aliases. If these were command-line parameters, then
> it would be easy to arrange for all.tests to run a set of tests
> multiple times, once with each possible set of alias resolutions.
> That would give us *much* better coverage of aliases than we
> currently have in Piglit.
Yep. Ideally we'd have different tests (or parameterizations of one test) for each extension.
> Incidentally, some of us at Intel have been tossing around the idea
> of doing away with all.tests altogether, and instead annotating the
> individual .c files with specially-formatted comments indicating how
> each test should be run (much as we already do for GLSL parser
> tests). If we did that, then it seems like it would be fairly easy
> for every test to specify which combinations of GL versions and
> extensions it supports, and the piglit framework could automatically
> invoke it multiple times if necessary to test each set of aliased
> functions. If we went with this approach, it would have the
> additional advantage that the piglit framework could take
> responsibility for skipping tests that don't apply to the current
> implementation, rather than being forced to run each of those tests
> individually and collect their "skip" results.
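For instance, borrowing the comment-header convention the GLSL parser
tests already use, a C test might carry something like this
(hypothetical syntax):

/* [config]
 * gl_version: 3.0
 * alternative_extensions: GL_ARB_framebuffer_object GL_EXT_framebuffer_object
 * [end config]
 */

The framework could then run the test once per satisfiable
alternative, and report "skip" itself when none applies.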
Sounds like a very nice idea.
Jose