[Intel-gfx] [Draft] Testing Requirements for drm/i915 Patches

Daniel Vetter daniel at ffwll.ch
Wed Oct 30 19:22:49 CET 2013


On Wed, Oct 30, 2013 at 6:30 PM, Jesse Barnes <jbarnes at virtuousgeek.org> wrote:
> On Tue, 29 Oct 2013 20:00:49 +0100
> Daniel Vetter <daniel at ffwll.ch> wrote:
>
>> Since the point here is to make the actual test requirements known up-front we
>> need to settle on clear expectations. Since this is the part that actually
>> matters in practice I'll really welcome close scrutiny and comments here.
>
> Another thought on these lines.
>
> The expectations come not just from the test side of things, but
> also from the design of new features, or from how code is refactored
> for a new platform or to fix a bug.
>
> So it may make sense, before starting big work, to propose both the
> tests required for the feature/refactor/bug fix and the design of
> the change.  That would let us get any potential
> test & feature bikeshedding out of the way before tons of time is
> invested in the code, hopefully saving a lot of rage, especially on the
> big features.

Yeah, that's actually very much the core idea I have. Using ppgtt as
an example, I think the test coverage we could have discussed
upfront would have been:
- make sure all the slowpaths in execbuf are really covered with
faulting testcases (maybe even concurrent thrashing ones); IIRC we
still have a bunch of gaping holes there
- fill in testcases for specific corner cases we've had to fix in
the execbuf/gem code in the past (e.g. the gen6 ppgtt w/a)
- a generic "let's thrash the crap out of gem" test as a baseline,
which will be fleshed out while developing the patches to hit the
low-mem corner-cases when vma setup fails midway through an execbuf
somewhere.
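
To make the thrashing idea a bit more concrete, here's a rough
userspace sketch using libdrm's intel_bufmgr API - not an actual
i-g-t testcase, and the buffer counts/sizes and the "scratch"/"batch"
names are made-up placeholders that would need tuning so the working
set actually exceeds the aperture on the machine under test:

/*
 * Rough GEM thrashing sketch using libdrm's intel_bufmgr API, not a
 * real i-g-t test. BO_COUNT/BO_SIZE are made-up numbers and need to be
 * tuned so the working set exceeds the aperture.
 */
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <xf86drm.h>
#include <i915_drm.h>
#include <intel_bufmgr.h>

#define MI_BATCH_BUFFER_END (0xA << 23)
#define BO_COUNT 256
#define BO_SIZE (4 * 1024 * 1024)

int main(void)
{
	int fd = drmOpen("i915", NULL);
	drm_intel_bufmgr *bufmgr = drm_intel_bufmgr_gem_init(fd, 4096);
	drm_intel_bo *scratch[BO_COUNT];
	uint32_t batch[2] = { MI_BATCH_BUFFER_END, 0 };
	int i, loop;

	for (i = 0; i < BO_COUNT; i++)
		scratch[i] = drm_intel_bo_alloc(bufmgr, "scratch", BO_SIZE, 4096);

	for (loop = 0; loop < 1024; loop++) {
		drm_intel_bo *bb = drm_intel_bo_alloc(bufmgr, "batch", 4096, 4096);

		/* minimal valid batch: MI_BATCH_BUFFER_END + padding noop */
		drm_intel_bo_subdata(bb, 0, sizeof(batch), batch);

		/*
		 * Pull a random bunch of scratch bos into the execbuf via
		 * relocations (placed after the BATCH_BUFFER_END, so the GPU
		 * never reads them) to force binding/eviction traffic.
		 */
		for (i = 0; i < 16; i++)
			drm_intel_bo_emit_reloc(bb, 1024 + i * 4,
						scratch[rand() % BO_COUNT], 0,
						I915_GEM_DOMAIN_RENDER,
						I915_GEM_DOMAIN_RENDER);

		if (drm_intel_bo_exec(bb, sizeof(batch), NULL, 0, 0))
			perror("execbuf");
		drm_intel_bo_unreference(bb);
	}
	return 0;
}

Once a loop like this exists it's cheap to extend, e.g. with
pwrite/mmap traffic or by running it in a few processes concurrently
to hit the really nasty low-mem paths.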

Later on (and this actually happened, since I'd totally forgotten
about it) we'd discover that we need a testcase for the secure
dispatch stuff. But since we already had a few nice testcases to throw
crazy execbufs at the kernel, that was just a little extra work on top.

> Note that this runs contrary to a lot of other subsystems, which are
> sometimes allergic to up front design thinking, and prefer to churn
> through dozens of implementations to settle on something.  Both
> approaches have value, and some combination is probably best.  Some
> well thought out discussion up front, followed by testing, review, and
> adjustment of an actual implementation.

Yeah, I agree that we need more planning than the usual approach involves.

> Getting back to your points:
>   - tests must cover userspace interfaces
>   - tests need to provide coverage of internal driver state
>   - tests need to be written following review for any bugs caught in
>     review (I don't fully agree here; we can look at this on a case by
>     case basis; e.g. in some cases an additional BUG_ON or state check
>     may be all that's needed)

I guess I need to clarify this: Tests don't need to be special i-g-ts;
in-kernel self-checks are also good (and often the best approach). For
WARN_ONs we just need to try to make them as orthogonal as possible to
the code itself (refcount checks are always great at that) to minimize
the chances of both the code and the check being broken in the same
way.
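
As a sketch of what I mean by an orthogonal check - the structure and
field names here are hypothetical, not actual i915 code:

#include <linux/atomic.h>
#include <linux/kernel.h>
#include <linux/list.h>

/* hypothetical object/vma with a plain reference count */
struct example_obj {
	atomic_t refcount;
	struct list_head vma_list;
};

struct example_vma {
	struct example_obj *obj;
	struct list_head obj_link;	/* linked into obj->vma_list */
	int pin_count;
};

static void example_vma_sanity_check(struct example_vma *vma)
{
	/*
	 * These checks only look at refcounts and list membership, not
	 * at the binding logic itself, so a bug in the code under test
	 * is unlikely to break the check in exactly the same way.
	 */
	WARN_ON(vma->pin_count && atomic_read(&vma->obj->refcount) == 0);
	WARN_ON(!list_empty(&vma->obj_link) &&
		list_empty(&vma->obj->vma_list));
}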

>   - test infrastructure and scope should increase over time
>
> I think we can discuss the above points as part of any new proposed
> feature or rework.  For reworks in particular, we can probably start to
> address some of the "technical debt" you mentioned, where the bar for a
> rework is relatively high, requiring additional tests and
> infrastructure scope.

Yeah, I think having an upfront discussion about testing and what
should be done for big features/reworks would be really useful, both
to make sure that we have an optimal set of tests (not too many or too
few) to get the patches merged as smoothly as possible and to evenly
distribute the work of plugging test gaps for existing features.

Especially when people add new test infrastructure (like EDID
injection or pipe CRC support), we should be really lenient about the
coverage for new features. Otherwise doing the feature, the test
infrastructure _and_ all the tests for the feature is too much. So for
completely new test approaches I think it's more than good enough to
just deliver that plus a small proof-of-concept testcase - advancing
the testable scope itself is extremely valuable. Later on, extension
work can then start to fill in the gaps.
-Daniel
-- 
Daniel Vetter
Software Engineer, Intel Corporation
+41 (0) 79 365 57 48 - http://blog.ffwll.ch


