[Intel-gfx] [PATCH i-g-t 00/16] Introduction of plotting support

Daniel Vetter daniel at ffwll.ch
Mon Jul 6 11:25:56 PDT 2015


On Mon, Jul 06, 2015 at 04:06:05PM +0100, Damien Lespiau wrote:
> On Mon, Jul 06, 2015 at 04:54:48PM +0200, Daniel Vetter wrote:
> > On Mon, Jul 06, 2015 at 01:35:28PM +0100, Damien Lespiau wrote:
> > > Long story short:
> > >   http://entropy.lespiau.name/intel-gpu-tools/test_simple_plot.png
> > >   http://entropy.lespiau.name/intel-gpu-tools/test_two_plots.png
> > > 
> > > I had fun with the previous weekend's errand around igt_stats, so I did
> > > it again with plots, gearing up towards, some day, generating reports
> > > directly from the micro-benchmarks. A couple of examples of what can be
> > > generated are linked above.
> > > 
> > > This time, it's about plotting data. I started with some synthetic data but
> > > it's only a matter of time until I look at actual data (I'd like to start with
> > > some data upload micro-benchmarks). Anyway, here's what the API looks like
> > > (more can be found in lib/tests/igt_plot.c):
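> > > 
> > > Roughly, the general shape is something like the sketch below (the
> > > helper names here are illustrative and may not match the series
> > > exactly):
> > > 
> > >   /*
> > >    * Illustrative sketch only: igt_plot_init(), igt_curve_init(),
> > >    * igt_curve_set_data(), igt_plot_add_curve() and igt_plot_save_png()
> > >    * are assumed names, not necessarily the actual API in the patches.
> > >    */
> > >   #include "igt_plot.h"
> > > 
> > >   static void plot_example(void)
> > >   {
> > >           igt_plot_t plot;
> > >           igt_curve_t curve;
> > >           double x[100], y[100];
> > >           int i;
> > > 
> > >           /* synthetic data for now: y = x^2 */
> > >           for (i = 0; i < 100; i++) {
> > >                   x[i] = i;
> > >                   y[i] = (double)i * i;
> > >           }
> > > 
> > >           igt_plot_init(&plot, "test_simple_plot");
> > >           igt_curve_init(&curve, "y = x^2");
> > >           igt_curve_set_data(&curve, x, y, 100);
> > >           igt_plot_add_curve(&plot, &curve);
> > >           igt_plot_save_png(&plot, "test_simple_plot.png");
> > >           igt_plot_fini(&plot);
> > >   }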
> > 
> > Hm, not sure we need a plotting api in igt - thus far the split between
> > igt and piglit was that igt is tests and piglit is running them,
> > collecting results and presenting them.
> 
> I understand this is borderline: "why not reuse something from the myriad
> of available tools?". The idea is to lessen the friction between running
> benchmarks and generating useful data and reports. We can still get the
> data out for further processing if needed, even from piglit if it ever
> grows something like this.
> 
> I really don't want to rely on python and piglit for anything other than
> the general test runner. Tools with a large dependency set are a pain on
> ChromeOS and Android, so they would be useless there.
> 
> > There are also microbenchmarks in other gfx-related testsuites, hence I
> > think we should aim for something that's more generally usable.
> 
> Maybe, but then I don't know where those other micro-benchmarks live;
> they're usually scattered all over the place. What do you suggest?
> 
> I think I'll go ahead and try to prove the point by going the full route
> and generating a real micro-benchmark report. It should be easy enough to
> revert all of that if it's judged unnecessary.

atm QA rolls its own thing, developers on the mesa side have ministat, and
it looks like you want to create something in igt. I know it's easier, but
I'd like to share as much tooling between QA and devs as possible. And
that kinda means we should have the same tooling across different gfx
components, because otherwise QA gets pissed. Unfortunately we're not even
there yet with just running testcases, so microbenchmarks are really far
off. Just a word of warning, really, that we could end up digging
ourselves into a hole here that we'll regret 2 years down the road (when
QA is solid and we have a problem with piles of different benchmarking
suites).

But really this is just the grumpy maintainer in me fearing another
long-term headache; I don't want to stop your enthusiasm (too much, at
least).

Cheers, Daniel
-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch

