[Intel-gfx] [ANNOUNCEMENT] Documenting tests with igt_describe()

Arkadiusz Hiler arkadiusz.hiler at intel.com
Tue Nov 19 13:02:06 UTC 2019


On Tue, Nov 19, 2019 at 02:37:17PM +0200, Lionel Landwerlin wrote:
> On 08/11/2019 11:04, Arkadiusz Hiler wrote:
> > On Thu, Nov 07, 2019 at 09:09:34PM +0000, Chris Wilson wrote:
> > > Quoting Arkadiusz Hiler (2019-11-07 17:38:20)
> > > > We don't want you to translate C into English, we want you to provide a bit of
> > > > that extra information that you would have put in the comments anyway.
> > > The comments should exist and are _inline_ with the code.
> > And I encourage doing so. Detailed comments what this particular
> > non-obvious chunk of code is doing are always welcome.
> > 
> > > In all the examples of igt_describe() I have seen, it is nowhere near
> > > the code so is useless; the information conveyed does not assist
> > > anyone in diagnosing or debugging the problem, so I have yet to
> > > understand how it helps.
> > They are supposed to document not the implementation but the grand
> > idea behind a given subtest, so they live at the subtest level (or a
> > group of subtests), which is our basic testing unit: this is what fails
> > in CI, this is what you execute locally to reproduce the issue.
> > 
> > Can you truly debug an issue or understand what the failure means
> > without knowing what the test is supposed to prove?
> > 
> > A lot of people here have struggled with this and often it takes a lot
> > of time and requires advanced archeology with `git blame` hoping that
> > there is at least one detailed enough commit message in there.
> > 
> > If not then talking to people and relying on our tribal knowledge is
> > required.
> > 
> > As I have mentioned - getting the bigger picture from implementation
> > details is hard. I get that you are not affected by this, but please do
> > not deny the others.
> 
> 
> I kind of agree with Chris that I don't find that additional macro useful
> from the point of view of reading a particular test.
> 
> A comment above the test function seems more appropriate, at least you don't
> have to look at 2 different places to find out about a particular test.

There is no easy way of automatically enforcing such comments unless we
want to fork a tool like doxygen or write and maintain something of our own.

You don't have to go to two places; you can organize the descriptions
however you like. See what kms_chamelium does:

https://gitlab.freedesktop.org/drm/igt-gpu-tools/blob/master/tests/kms_chamelium.c#L456

You can even use a format string and igt_describe_f() in a loop
or just slap it over whole igt_subtest_group().
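
For illustration, a sketch of both patterns (the subtest names, the
`modes` table, and the test functions below are made up for the example;
only igt_describe()/igt_describe_f()/igt_subtest_group are the real API):

```c
/* One description covering a whole group of subtests. */
igt_describe("Checks that plugging a connector generates a hotplug event.");
igt_subtest_group {
	igt_subtest("basic-hotplug")
		test_hotplug(data);

	/* A per-iteration description built from a format string. */
	for (int i = 0; i < ARRAY_SIZE(modes); i++) {
		igt_describe_f("Checks hotplug handling with the %s mode.",
			       modes[i].name);
		igt_subtest_f("hotplug-%s", modes[i].name)
			test_hotplug_mode(data, &modes[i]);
	}
}
```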

Once again: this is the only way we have found to make this enforceable
while still giving us enough flexibility to shuffle the text around.

> Unless there is some untold goal here, like producing some kind of report in
> an automated way.
> 
> 
> -Lionel

The goal is to have those descriptions in the first place and to make
them more accessible to people. You have to keep in mind that we are a
decently sized organization and people come and go. Not everyone becomes
a seasoned kernel developer on day one, and not everyone looking at the
tests and their results is a developer.

There is an extra benefit related to the "automated report" you
mentioned, though it was not the main reason: the descriptions of the
tests can be attached to the bugs filed by the CI.

There are probably more ways in which we could expose that data.

-- 
Cheers,
Arek

> > > What is more useful, a link to the kernel coverage of the test and
> > > link to the test source code, or waffle?
> > > -Chris
> > Those things are not exclusive. Coverage is an extremely useful
> > metric, source code is where the action happens, but some
> > higher-level explanations and waffles can coexist peacefully and make
> > many lives much more pleasant.
> > 
> 
