[Piglit] Suggestions for creating statistical information from piglit results
Timothy Arceri
t_arceri at yahoo.com.au
Wed May 14 14:27:52 PDT 2014
On Wed, 2014-05-14 at 10:23 -0400, Ilia Mirkin wrote:
> On Wed, May 14, 2014 at 3:11 AM, Timothy Arceri <t_arceri at yahoo.com.au> wrote:
> > After Valve's Rich Geldreich's blasting of OpenGL drivers, and reading
> > this post [2], which tries to compare the driver quality of different
> > drivers using the g-truc samples [3], I'm now interested in creating a
> > similar graph and HTML page with a breakdown of pass/fail into GL
> > version categories, similar to the PDF provided on the website. This
> > would provide some good stats on the real quality of the drivers,
> > given piglit's wide range of tests.
> >
> > My question to those more knowledgeable about the inner workings of
> > piglit is: what is the easiest way to create this? Obviously I need to
> > add some functionality to the summary creation tool, but I also need a
> > way to categorize the tests into their respective GL version, GLSL
> > version and/or extension, something that's not currently output to the
> > results file. How would you suggest adding/extracting that information?
> > I guess adding three new fields to the results might be useful:
> > "gl-version":
> > "glsl-version":
> > "extension":
> >
> > But I'm not sure how I should extract that information, or where in
> > the code this should be implemented. Is this even possible? I assume I
> > would at least need a table on the summary generation side to map each
> > test to the correct category, since if, say, the test's required
> > version were used to work out the gl-version, that wouldn't
> > automatically mean the extension was part of that GL version.
> >
> > Anyway let me know your thoughts.
>
> You can look at the test names reported by piglit. Almost all tests
> follow a reasonable convention. For example, test names might be:
>
> spec/!OpenGL 1.1/depthstencil-default_fb-blit samples=2
> spec/ARB_depth_texture/fbo-generatemipmap-formats/GL_DEPTH_COMPONENT
> spec/glsl-1.50/execution/fragcoord-layout-qualifiers-conflicting-case-1
>
> With just a handful of rules, you could classify the majority of
> tests. A bunch will be tricky, like the glean tests; I just wouldn't
> worry about them. Of course some extensions are also part of some GL
> versions, so you'd need a mapping of those. You can take a look at the
> one I created, for an unrelated reason, at
> http://people.freedesktop.org/~imirkin/glxinfo/glxinfo.js
I did think about just using the test names; it should work for most of
the tests in the spec directory. I was just worried about missing the
ones that are lumped into other subdirectories under tests. I guess for
a first attempt it should be OK to just put those tests in an "other"
category along with the glean tests.
Also, thanks for the link; that should come in handy.
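To make sure I'm on the same page, here's a rough sketch of the kind of
name-based classifier I have in mind (Python, since that's what piglit
is written in). The regex rules and the extension-to-version entries are
just illustrative guesses on my part, not anything piglit currently does:

import re

# Partial extension -> core GL version table; a fuller one could be
# built from the glxinfo.js data linked above. The two entries below
# are just examples (both were promoted to core in the listed version).
EXT_TO_GL_VERSION = {
    'arb_depth_texture': '1.4',
    'arb_framebuffer_object': '3.0',
}

def classify(test_name):
    """Classify a piglit test name by GL version, GLSL version or extension.

    Results files may store names lowercased, so match case-insensitively.
    """
    m = re.match(r'spec/!opengl (\d+\.\d+)/', test_name, re.IGNORECASE)
    if m:
        return {'gl-version': m.group(1)}
    m = re.match(r'spec/glsl-(\d+\.\d+)/', test_name, re.IGNORECASE)
    if m:
        return {'glsl-version': m.group(1)}
    m = re.match(r'spec/(\w+_\w+)/', test_name, re.IGNORECASE)
    if m:
        ext = m.group(1).lower()
        return {'extension': ext,
                'gl-version': EXT_TO_GL_VERSION.get(ext)}
    # glean and anything else that doesn't follow the convention
    return {'category': 'other'}

For example:

classify('spec/ARB_depth_texture/fbo-generatemipmap-formats/GL_DEPTH_COMPONENT')
# -> {'extension': 'arb_depth_texture', 'gl-version': '1.4'}

The summary code would then just bucket each test by whichever key is
present.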
>
> Unrelatedly, note that e.g. the NVIDIA blob fails a *ton* of tests for
> reasons completely unrelated to the test. E.g. they don't like the
> #version in combination with something, their parser for that version
> doesn't work properly, etc.
This is still good information for pointing out the differences between
drivers, and I will be sure to note this fact when publishing my
findings. Thanks for the tip. It would be interesting if that situation
could be detected so that something other than a plain "fail" could be
displayed. Anyway, that is kind of the point of creating this sort of
information: it might encourage some way of resolving these issues.
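For the stats themselves, I imagine the aggregation step would be
something like the sketch below (classify() being the sketch from
above). I'm assuming here that the JSON results file has a top-level
"tests" dict mapping each test name to a dict containing a "result"
string; I'll adjust once I dig into the actual layout:

import json
from collections import Counter, defaultdict

def summarize(results_path):
    # Tally per-category result counts from a piglit JSON results file.
    with open(results_path) as f:
        results = json.load(f)

    counts = defaultdict(Counter)
    for name, info in results.get('tests', {}).items():
        cat = classify(name)
        # Bucket extension tests under the GL version that absorbed
        # them when known, otherwise under the extension itself.
        key = (cat.get('gl-version') or cat.get('glsl-version') or
               cat.get('extension') or 'other')
        counts[key][info.get('result', 'unknown')] += 1
    return counts

From those per-category counters it should be straightforward to emit
the HTML table and feed a plotting tool for the graph.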
>
> -ilia