[Intel-gfx] [Draft] Testing Requirements for drm/i915 Patches
Jesse Barnes
jbarnes at virtuousgeek.org
Wed Oct 30 00:38:53 CET 2013
Since a number of people internally are also involved in i915
development but not on the mailing list, I think we'll need to have an
internal meeting or two to cover this stuff and get buy-in.
Overall, developing tests along with code is a good goal. A few
comments below.
On Tue, 29 Oct 2013 20:00:49 +0100
Daniel Vetter <daniel at ffwll.ch> wrote:
> - Tests must fully cover userspace interfaces. By this I mean exercising all the
[snip]
> - Tests need to provide a reasonable baseline coverage of the internal driver
> state. The idea here isn't to aim for full coverage, that's an impossible and
[snip]
What you've described here is basically full validation, something that
most groups at Intel have large teams dedicated to full time. I'm not
sure how far we can go down this path with just the development
resources we have today (though maybe we'll get some help from the
validation teams in the product groups).
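Just to put some scope on that: even a minimal check of a single
userspace interface looks something like the sketch below (hypothetical
code, using raw ioctls rather than the i-g-t helpers; the device path
and the choice of parameter are purely illustrative), and "fully
covering" the interfaces means many of these for every ioctl, flag and
error path.

#include <fcntl.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <drm/i915_drm.h>

int main(void)
{
	struct drm_i915_getparam gp;
	int fd, value = 0;

	/* Illustrative device path; a real test would locate the Intel node. */
	fd = open("/dev/dri/card0", O_RDWR);
	if (fd < 0)
		return 77;	/* skip rather than fail when no device is present */

	/* Valid input: the chipset id should be reported. */
	gp.param = I915_PARAM_CHIPSET_ID;
	gp.value = &value;
	if (ioctl(fd, DRM_IOCTL_I915_GETPARAM, &gp) != 0 || value == 0) {
		fprintf(stderr, "GETPARAM(CHIPSET_ID) failed\n");
		return 1;
	}

	/* Clearly invalid input: an unknown parameter should be rejected. */
	gp.param = 0x7fffffff;
	if (ioctl(fd, DRM_IOCTL_I915_GETPARAM, &gp) == 0) {
		fprintf(stderr, "bogus GETPARAM unexpectedly succeeded\n");
		return 1;
	}

	close(fd);
	return 0;
}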
> Finally the short lists of excuses that don't count as proper test coverage for
> a feature.
>
> - Manual testing. We are ridiculously limited on our QA manpower. Every time we
> drop something onto the "manual testing" plate something else _will_ drop off.
> Which means in the end that we don't really have any test coverage. So
> if patches don't come with automated tests, in-kernel cross-checking or
> some other form of validation attached they need to have really good
> reasons for doing so.
Some things are only testable manually at this point, since we don't
have sophisticated webcam infrastructure set up for everything (and in
fact, the webcam tests we do have are fairly manual, in that they have
to be set up specially each time).
> - Testing by product teams. The entire point of Intel OTC's "upstream first"
> strategy is to have a common codebase for everyone. If we break product trees
> every time we feed an update into them because we can't properly regression
> test a given feature then the value of upstreaming features is greatly
> diminished in my opinion and could potentially doom collaborations with
> product teams. We just can't have that.
>
> This means that when product teams submit patches upstream they also need
> to submit the relevant testcases to i-g-t.
So what I'm hearing here is that even if someone submits a tested
patch, with tests available (and passing) somewhere other than i-g-t,
you'll reject it until they port or write a new test for i-g-t. Is that
what you meant? I think a more reasonable criterion would be that tests
from non-i-g-t test suites are available and run by our QA, or run
against upstream kernels by groups other than our QA. That should keep
a lid on regressions just as well.
One thing you didn't mention here is that our test suite is starting to
see as much churn as (and more breakage than) upstream. If you look at
recent results from QA, you'd think SNB was totally broken based on
i-g-t results. But despite that, desktops come up fine and things
generally work. So we need to take care that our tests are simple and
that our test library code doesn't see massive churn causing false
positive breakage all the time. In other words, tests are just as
likely to be broken (reporting false breakage or false passing) as the
code they're testing. The best way to avoid that is to keep the tests
very small, simple, and targeted, along the lines of the sketch below.
Converting and refactoring code in i-g-t to allow that will be a big
chunk of work.
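As a rough, hypothetical sketch of what "small, simple, targeted" could
look like (the helper names igt_simple_main, drm_open_any and igt_assert
are my assumptions about the current i-g-t library and may not match it
exactly): one test, one behaviour, no shared library magic beyond
opening the device.

/* Hypothetical sketch, not an actual i-g-t testcase. */
#include <sys/ioctl.h>
#include <unistd.h>
#include <drm/i915_drm.h>

#include "drmtest.h"

igt_simple_main
{
	struct drm_i915_gem_create create = { .size = 4096 };
	struct drm_gem_close close_bo = { 0 };
	int fd;

	fd = drm_open_any();

	/* One focused check: creating a single-page GEM object succeeds. */
	igt_assert(ioctl(fd, DRM_IOCTL_I915_GEM_CREATE, &create) == 0);
	igt_assert(create.handle != 0);

	/* Clean up so the test leaves no state behind. */
	close_bo.handle = create.handle;
	igt_assert(ioctl(fd, DRM_IOCTL_GEM_CLOSE, &close_bo) == 0);

	close(fd);
}

The point being that when a test this small starts failing, the failure
almost certainly tells you something about the kernel rather than about
the test or the library underneath it.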
--
Jesse Barnes, Intel Open Source Technology Center