[Piglit] Crashing tests are not shown in the HTML summary

Ilia Mirkin imirkin at alum.mit.edu
Wed Apr 9 09:54:17 PDT 2014


On Tue, Apr 8, 2014 at 1:16 AM, Dylan Baker <baker.dylan.c at gmail.com> wrote:
> On Monday, April 07, 2014 10:48:10 PM Ilia Mirkin wrote:
>> On Mon, Apr 7, 2014 at 7:49 PM, Dylan Baker <baker.dylan.c at gmail.com> wrote:
>> > Actually, branches do have a pass fail status, it's the worst status of
>> > that branch's children.
>> >
>> > I've been thinking of using a Tree structure to represent the tests in
>> > Summary. The problem with the current architecture (as it pertains to
>> > this problem) is that the statuses of each group are auto-generated
>> > from the 'worst' status in their children, and the only way to set a
>> > group status is after the entire tree has been generated.
>> >
>> > If you write a patch to add a fake subtest I'll be sure to look at it.
>>
>> It's actually worse than I thought. It's not a problem generating the
>> summary, it's a problem generating the results file in the first
>> place.
>>
>> This logic is in Test.execute:
>>
>>             if 'subtest' in result and len(result['subtest']) > 1:
>>                 for test in result['subtest']:
>>                     result['result'] = result['subtest'][test]
>>                     json_writer.write_dict_item(os.path.join(path, test), result)
>
> If there is only one subtest, that subtest is written out and presented in
> summary with the subtest status as the test status. At least, that was the
> reasoning.

I don't see the logic that does that... although perhaps the output
parser takes care of it somehow? Should be easy to construct a sample
output that emits a single subtest (e.g. 'fail') and then have the
test exit. (Although I guess the expectation is that this would never
happen -- there would always be an overall status printed as well
which would correspond to the subtest's status? That may be wishful
thinking...)
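
A stand-in test along those lines might look roughly like the sketch
below. The "PIGLIT: {...}" line format is assumed from how piglit tests
normally report their status; double-check the exact keys against the
framework's output parser before relying on it.

    #!/usr/bin/env python
    # Hypothetical stand-in test: report exactly one subtest and then
    # exit, without ever printing an overall result line.
    print('PIGLIT: {"subtest": {"only-subtest": "fail"}}')
    # Deliberately no overall 'PIGLIT: {"result": ...}' line here -- the
    # point is to see what the summary does when only a lone subtest
    # status is available.
    raise SystemExit(0)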

>
>>
>> (why is it > 1 and not > 0? whatever.) In any case, this ensures that
>> we don't even have the right stuff in the result, so no amount of code
>> changes to summary can fix it. I thought of throwing in a comparison
>> of the statuses, but realized they're strings. Would it be an API
>> abuse to import framework.status and parse them here?
>
> I don't think so. I did that originally, but decided that since we didn't need
> to, there wasn't a lot of reason to do a worthless string > object conversion.
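
For context, the comparison being discussed would look something like
the sketch below; status_lookup is a guess at the framework.status API
for turning a status string into a comparable object, so verify the
actual name in framework/status.py before using it.

    # Rough sketch of a string-to-status comparison (assumed API).
    from framework import status

    def worse(a, b):
        """Return whichever of two status strings ranks as 'worse'."""
        # status_lookup() is assumed to return objects that order by
        # severity; check framework/status.py for the real interface.
        return a if status.status_lookup(a) > status.status_lookup(b) else b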

So I realized shortly afterwards that this code is actually fine -- it
needs to emit the n subtests, and it just reuses result['result'] as an
override to keep the code simple. That's all fine. The problem is that
the overall result is never emitted at all.

So one potential hack is to emit a fake "Overall" test entry before that
logic runs when result['result'] == crash. This would
magically make everything work. The "real" fix is to encode this
information in some intelligent way. I can do the former ~tonight, but
I'm not signing up for the latter. Dylan, perhaps that's more your cup
of tea? Assuming that the "real" fix doesn't materialize soon, I do
think that throwing the hack in is a good compromise for now.
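
Concretely, the hack would amount to something like the following in
Test.execute, reusing the snippet quoted above (the "Overall" entry name
and the exact placement are illustrative, not a tested patch):

    if 'subtest' in result and len(result['subtest']) > 1:
        if result['result'] == 'crash':
            # Write a fake "Overall" entry first, so the crash isn't lost
            # when the loop below overwrites result['result'] with each
            # subtest's status.
            json_writer.write_dict_item(os.path.join(path, 'Overall'), result)
        for test in result['subtest']:
            result['result'] = result['subtest'][test]
            json_writer.write_dict_item(os.path.join(path, test), result)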

  -ilia

