[Mesa-dev] [PATCH] i965: Don't check for draw-time errors that cannot occur in core profile

Ian Romanick idr at freedesktop.org
Mon Aug 31 16:06:27 PDT 2015


ping. :)

On 08/10/2015 11:48 AM, Matt Turner wrote:
> On Mon, Aug 10, 2015 at 10:12 AM, Ian Romanick <idr at freedesktop.org> wrote:
>> From: Ian Romanick <ian.d.romanick at intel.com>
>>
>> On many CPU-limited applications, this is *the* hot path.  The idea is
>> to generate per-API versions of brw_draw_prims that elide some checks.
>> This patch removes render-mode and "is everything in VBOs" checks from
>> core-profile contexts.
>>
>> On my IVB laptop (which may have experienced thermal throttling):
>>
>> Gl32Batch7:     3.70955% +/- 1.11344%
> 
> I'm getting 3.18414% +/- 0.587956% (n=113) on my IVB, which probably
> matches your numbers depending on your value of n.
> 
>> OglBatch7:      1.04398% +/- 0.772788%
> 
> I'm getting 1.15377% +/- 1.05898% (n=34) on my IVB, which probably
> matches your numbers depending on your value of n.
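
The per-API draw-function idea described above could be sketched roughly as
follows.  All names and structure here are hypothetical, for illustration
only; the actual brw_draw_prims code in Mesa differs.

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical context; illustrative, not the real Mesa gl_context. */
struct ctx {
   bool render_mode_ok;   /* GL_RENDER vs. feedback/select mode */
   bool all_in_vbos;      /* no client-side vertex arrays in use */
   int draws;             /* count of draws actually emitted */
};

static void draw_prims_common(struct ctx *c)
{
   c->draws++;            /* stand-in for emitting the draw */
}

/* Compatibility profile: all draw-time checks apply. */
static void draw_prims_compat(struct ctx *c)
{
   if (!c->render_mode_ok)
      return;             /* no rendering in feedback/select mode */
   if (!c->all_in_vbos) {
      /* client-array fallback path would go here */
   }
   draw_prims_common(c);
}

/* Core profile: feedback/select modes and client-side arrays cannot
 * occur, so those checks are elided from the hot path entirely. */
static void draw_prims_core(struct ctx *c)
{
   draw_prims_common(c);
}
```

The context would install draw_prims_core or draw_prims_compat as its draw
entry point once at context creation, so no per-draw profile check is needed.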

This is another thing that makes me feel a little uncomfortable with the
way we've done performance measurements in the past.  If I run my test
before and after this patch for 121 iterations, which I have done, I can
cut the data at any point and oscillate between "no difference" or X%
+/- some-large-fraction-of-X%.  Since the before and after code for the
compatibility profile path should be identical, "no difference" is the
only believable result.

Using a higher confidence threshold (e.g., -c 98) results in "no
difference" throughout, as expected.  I feel like 90% isn't a tight
enough confidence level for a lot of what we do, but I'm unsure how
to determine what confidence level we should use.  We could
experimentally determine it by running a test some number of times and
finding the interval that detects no change in some random partitioning
of the test results.  Ugh.

> I'll review soon.



