[Intel-gfx] [PATCH i-g-t 00/16] Introduction of plotting support
Daniel Vetter
daniel at ffwll.ch
Mon Jul 6 14:32:47 PDT 2015
On Mon, Jul 06, 2015 at 08:58:31PM +0100, Damien Lespiau wrote:
> On Mon, Jul 06, 2015 at 08:25:56PM +0200, Daniel Vetter wrote:
> > atm QA rolls their own thing, developers on mesa side have ministat, and
> > it looks like you want to create something in igt. I know it's easier, but
> > I'd like to share as much tooling between QA and devs as possible. And
> > that kinda means we should have the same tooling across different gfx
> > components because otherwise QA gets pissed. Unfortunately we're not even
> > there yet with just running testcases, so microbenchmarks are really far
> > off. Just a word of warning really that we could end up running into a
> > valley here that we'll regret 2 years down the road (when QA is solid and
> > we have a problem with piles of different benchmarking suites).
> >
> > But really this is just grumpy maintainer fearing another long-term
> > headache, I don't want to stop your enthusiasm (too much at least).
>
> I'd be wary of tooling outside of igt or piglit. I asked on #intel-gfx
> and there's nothing like a base set of tools for micro-benchmarking.
> Actually there are fairly few micro-benchmarks at all (it's debatable
> whether micro-benchmarks make people focus on details that aren't
> relevant in the big picture; the hope here is that they can help mesa
> make performance trade-offs).
>
> - Just to be clear, I don't care about anything closed source :)
>
> - ministat.c is definitely useful but also limited in what it can do.
> It's also external to the benchmarks and needs wrapper scripts to be
> fully operational (and several of those exist).
>
> - Do you know if QA is doing anything on benchmarking low-level stuff
> today? If so I'd love to talk to them. No external things please (like
> the test blacklist being external to i-g-t).
They have a few metrics they chase in manual tests, like suspend/resume
time or driver load. If we had some tiny benchmarking support for small
tests we could move that into igt too.
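To make the ministat point above concrete: the kind of summary a ministat-style
wrapper would produce from repeated timing runs can be sketched like this
(a hypothetical standalone helper for illustration, not an actual i-g-t or
ministat interface):

```python
import statistics

def summarize(samples):
    """Compute a ministat-style summary (N, min, max, median, mean, stddev)
    for a list of timing samples from repeated benchmark runs."""
    return {
        "N": len(samples),
        "min": min(samples),
        "max": max(samples),
        "median": statistics.median(samples),
        "mean": statistics.mean(samples),
        "stddev": statistics.stdev(samples) if len(samples) > 1 else 0.0,
    }

# Example: hypothetical wall-clock times (ms) from two runs of the same
# microbenchmark, before and after a driver change.
before = [10.1, 10.3, 9.9, 10.2, 10.0]
after = [9.1, 9.3, 9.0, 9.2, 9.4]
print(summarize(before)["mean"], summarize(after)["mean"])
```

A real wrapper would additionally decide, like ministat does, whether the
difference between the two sample sets is statistically significant rather
than just comparing means.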
> - If your objection is being able to have benchmark results unified
> between piglit and i-g-t, we can always generate a similar format for
> the key metric we want to monitor. More detailed analysis will require
> more than just that though. I also don't think anything like this
> exists in piglit today.
Yeah nothing there yet.
> - It's not just about stats and plots. It's also about:
> * collecting metrics (CPU- and GPU-side), which means integration with
> perf.
> * collecting other interesting data like the memory bandwidth available
> to the CPU and GPU, and comparing it against theoretical maxima.
> * generating useful reports
Yeah, if you use this for intel-gpu-tools then I think it's all justified,
and let's move on. But it sounded like you wanted to use this for tests,
and there I see a bit of duplication going on ...
>
> So I'm not finished :)
>
> You forgot the Finland perf team in your tour of people looking around
> that area. I think I need to talk to them.
I did think of them, they have their own tooling.
> Why don't we have some of those benchmarks they have in i-g-t? (Are they
> using OpenGL? Are they not open source?) I have the feeling we should at
> least have a single point of contribution; let's make sure it's i-g-t
> when it's about low-level tests. Do we want to start accepting tests
> written in OpenGL in i-g-t?
Atm I think it's just the various *marks and other stuff they have and
run. But the idea would be that if we had wrappers it'd be easier for
developers to run them all.
Anyway, I really just wanted to start a bit of a discussion here. It's imo
a gap we have in the open-source gfx testing world, and thus far "random
pile of scripts somewhere" seems to be the best we've managed.
-Daniel
--
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch