[Mesa-dev] [PATCH 0/4] improve buffer cache and reuse
Eero Tamminen
eero.t.tamminen at intel.com
Thu May 3 11:14:50 UTC 2018
Hi,
On 02.05.2018 21:19, James Xiong wrote:
> On Wed, 2 May 2018 14:18:21 +0300
> Eero Tamminen <eero.t.tamminen at intel.com> wrote:
[...]
>> You're missing information on:
>> * On which platform you did the testing (affects variance)
>> * how many test rounds you ran, and
>> * what is your variance
>
> I ran these tests on a gen9 platform/ubuntu 17.10 LTS.
If the platform is TDP-limited in 3D tests (as all NUC and e.g. Broxton
devices seem to be in long-running tests), it has clearly higher variance
than desktop platforms that are not TDP- (or temperature-) limited.
> Most of the tests
> are consistent, especially the memory usage. The only exception is
> GfxBench 4.0 gl_manhattan, I had to run it 3 times and pick the highest
> one. I will apply this method to all tests and re-send with updated
> results.
(The comments below are about FPS results, not memory usage.)
Performance of many GPU-bound tests doesn't follow a normal Gaussian
distribution, but instead shows two (tight) peaks. On our BXT machines
these peaks are currently e.g. *3%* apart from each other in the
GfxBench Manhattan tests.
While you can get results from both performance peaks, which of the two
peaks your results fall onto is more likely to change between boots
(I think due to alignment changes in kernel memory allocations) than
between successive runs.
-> Your results are less likely to be misleading if you don't reboot
    when switching between the Mesa version with your patch and the
    one without.
Especially if you're running tests on only one machine (i.e. you don't
have extra data from other machines against which you could correlate
the results), I think you need more than 3 runs, both with and without
your patch.
While max() can provide a better comparison than avg() for this kind of
bimodal result distribution, you should still calculate and report the
variance of your data with your patches.
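As a sketch of what I mean (with made-up FPS numbers purely for
illustration, not real Manhattan measurements), summarizing bimodal
runs could look like this:

```python
import statistics

# Hypothetical FPS samples from two boots that landed on the two
# performance peaks a few percent apart (illustrative numbers only).
runs_boot_a = [60.1, 60.3, 60.2, 60.0, 60.2]  # lower peak
runs_boot_b = [62.0, 61.9, 62.1, 62.0, 61.8]  # higher peak

all_runs = runs_boot_a + runs_boot_b

# avg() mixes the two peaks and lands between them, so which peak
# a given boot happens to hit can hide (or fake) a small real change.
mean_fps = statistics.mean(all_runs)

# max() picks the top of the higher peak, which compares two Mesa
# builds more consistently on this kind of bimodal data...
max_fps = max(all_runs)

# ...but the variance should still be reported, so readers can judge
# how reliable the comparison is.
var_fps = statistics.variance(all_runs)  # sample variance

print(f"mean={mean_fps:.2f} max={max_fps:.2f} variance={var_fps:.3f}")
```

With numbers like these, the large variance immediately flags that the
mean sits between two peaks rather than on one of them.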
- Eero