[Intel-gfx] [PATCH i-g-t 00/16] Introduction of plotting support
Damien Lespiau
damien.lespiau at intel.com
Mon Jul 6 12:58:31 PDT 2015
On Mon, Jul 06, 2015 at 08:25:56PM +0200, Daniel Vetter wrote:
> atm QA rolls their own thing, developers on mesa side have ministat, and
> it looks like you want to create something in igt. I know it's easier, but
> I'd like to share as much tooling between QA and devs as possible. And
> that kinda means we should have the same tooling across different gfx
> components because otherwise QA gets pissed. Unfortunately we're not even
> there yet with just running testcases, so microbenchmarks are really far
> off. Just a word of warning really that we could end up running into a
> valley here that we'll regret 2 years down the road (when QA is solid and
> we have a problem with piles of different benchmarking suites).
>
> But really this is just grumpy maintainer fearing another long-term
> headache, I don't want to stop your enthusiasm (too much at least).
I'd be wary of tooling outside of igt or piglit. I asked on #intel-gfx
and there's nothing like a base set of tools for micro-benchmarking.
Actually there are fairly few micro-benchmarks at all (one can argue
that micro-benchmarks make people focus on details that aren't relevant
in the big picture; the hope here is that they can help mesa make
performance trade-offs).
- Just to be clear, I don't care about anything closed source :)
- ministat.c is definitely useful but also limited in what it can do.
It's also external to the benchmarks and needs wrapper scripts to be
fully operational (and several of those already exist); there's a quick
sketch of that workflow after this list.
- Do you know if QA is doing anything on benchmarking low-level stuff
today? If so, I'd love to talk to them. No external things please (like
the test blacklist that lives outside of i-g-t).
- If your objection is about having benchmark results unified
between piglit and i-g-t, we can always generate a similar format for
the key metric we want to monitor. More detailed analysis will require
more than just that though. I also don't think anything like this
exists in piglit today.
- It's not just about stats and plots. It's also about:
* collecting metrics (CPU and GPU side), which means integration with
perf (see the second sketch after this list).
* collecting other interesting data, like the memory bandwidth available
to the CPU and GPU, and comparing it against theoretical maxima.
* generating useful reports.
So I'm not finished :)
You forgot the Finland perf team in your tour of people looking into
that area. I think I need to talk to them.
Why don't we have some of the benchmarks they have in i-g-t? (Because
they use OpenGL? Because they aren't open source?) I have the feeling
we should at least have a single point of contribution; can we make
sure it's i-g-t when it's about low-level tests? Do we want to start
accepting tests written against OpenGL in i-g-t?
So many '?'s!
--
Damien