[Intel-gfx] [ANNOUNCE] xf86-video-intel 2.8.0

Eric Anholt eric at anholt.net
Mon Aug 3 18:11:52 CEST 2009


On Fri, 2009-07-31 at 17:25 -0700, Alan W. Irwin wrote:
> On 2009-07-31 19:12-0400 Ben Gamari wrote:
> 
> > On Fri, Jul 31, 2009 at 12:53:55PM -0700, Alan W. Irwin wrote:
> >> Carl said:
> >>> What I see there is lots of gtkperf microbenchmarks, which, as I put
> >>> forth in the blog report, don't capture realistic application behavior.
> >>> So there may or may not be any real performance problem based on those
> >>> numbers. It's really hard to know.
> >>
> >> Carl, it's a shame to see this disconnect between the Phoronix test suite
> >> results and what you would like to see tested for 2D graphics.  Blogging
> >> about the issue is too easily ignored. Thus, my opinion is that it is long
> >> past time for you or someone else from Intel (since Intel's graphics
> >> reputation is on the line) to contact Michael Larabel directly and help
> >> him put together some more realistic 2D tests.
> >
> > This is one of my issues with Phoronix. If someone wishes to put up an
> > open source news site that includes benchmarking, that is fine. However, for
> > the good of the community, they should at least take the time to
> > understand the basic principles behind reproducible benchmarking.
> > Several of the benchmarks I have seen on Phoronix have had fundamental
> > methodological flaws. I have seen several people point out these flaws
> > in the article comments, but I have yet to see a single substantial change
> > in testing methodology. I fail to understand why people still listen to
> > Phoronix's reporting.
> >
> > In sum, it is not Intel's responsibility to teach Mr. Larabel about the
> > limitations of microbenchmarks (even if it might do their brand some
> > good).
> 
> I would agree with you if the PTS were proprietary, but it is not.  My sense
> is the PTS provides a framework for any tests users or companies want to put
> in there.  I further agree with you that some bad tests have likely gotten
> in, but the Intel guys are free to improve that situation.  They are also
> free to follow your advice and refuse to add decent tests to the PTS, which
> would perpetuate the current situation that I don't think anybody likes.
> 
> >
> >> I believe headlines like today's from Phoronix, "Intel Linux Graphics
> >> On Ubuntu Still Flaky", are completely legitimate (and not snarky or
> >> overly negative) based on Michael Larabel's results with PTS. However,
> >> if there is an obvious way to change PTS (which is open source after
> >> all) to give more realistic 2D testing, then that would do Intel's
> >> graphics reputation a lot of much-needed good (assuming your
> >> hypothesis is correct).
> >>
> >> Also, as another way to turn around the recent bad publicity, why doesn't
> >> Intel run PTS tests of its own and publish the 2D and 3D results
> >> (including the exact graphics software stack you used, which might well
> >> differ from Ubuntu's) for each of your quarterly releases on your fastest
> >> graphical chipset?
> >
> > Testing the fastest chipset would produce useless data --- comparing
> > releases on different hardware is stamp-collecting at best and just
> > plain misleading at worst. Personally, I would much rather see the Intel
> > folks working on actually improving the codebase. Sure, some benchmarking
> > is required to do this, but spending developer time on benchmarking just
> > for publicity seems like a poor allocation of resources.
> 
> I also prefer Intel developers to spend most of their time on developing.
> However, I agree with you that benchmarking is part of that development
> work.  Where we differ is that you seem to think it would take a lot of
> developer time to set up an automated system for both running the benchmarks
> and publishing the results.  Whoever is right on that, the important point
> is that it is essentially a one-time cost followed by very small ongoing
> maintenance costs (if the automated benchmarking/publishing system were set
> up right).

We already have somebody internally doing regular regression testing
using appropriate test suites and reporting regressions to us, and
we've used it to fix several regressions.  But "small ongoing
maintenance costs" is wrong -- automated systems are nice, but it turns
out that it takes a lot of care to keep these systems running and to
analyze the results, and we developers are quite glad that we don't
have to spend our days maintaining them (as idr found when he was
running the "automated" oglconform testing for a while).

-- 
Eric Anholt
eric at anholt.net                         eric.anholt at intel.com

