[Piglit] Nearly finished: shader_runner running THOUSANDS of tests per process

Marek Olšák maraeo at gmail.com
Fri May 27 10:21:44 UTC 2016

On Fri, May 27, 2016 at 3:18 AM, Mark Janes <mark.a.janes at intel.com> wrote:
> Marek Olšák <maraeo at gmail.com> writes:
>> On Mon, Apr 18, 2016 at 6:45 PM, Dylan Baker <baker.dylan.c at gmail.com> wrote:
>>> Quoting Marek Olšák (2016-04-16 15:16:34)
>>>> Hi,
>>>> This makes shader_runner very fast. The expected result is a 40%
>>>> decrease in quick.py running time, or a 12x faster piglit run if you
>>>> run shader tests alone.
>>>> Branch:
>>>> https://cgit.freedesktop.org/~mareko/piglit/log/?h=shader-runner
>>>> Changes:
>>>> 1) Any number of test files can be specified as command-line
>>>> parameters. Those command lines can be insanely long.
>>>> 2) shader_runner can re-create the window & GL context if test
>>>> requirements demand different settings when going from one test to
>>>> another.
>>>> 3) all.py generates one shader_runner instance per group of tests
>>>> (usually one or two directories - tests and generated_tests).
>>>> Individual tests are reported as subtests.
>>>> The shader_runner part is done. The python part needs more work.
>>>> What's missing:
>>>> Handling of crashes. If shader_runner crashes:
>>>> - The crash is not shown in piglit results (other tests with subtests
>>>> already have the same behavior)
>>>> - The remaining tests will not be run.
>>>> The ShaderTest python class has the list of all files and should be
>>>> able to catch a crash, check how many test results have been written,
>>>> and restart shader_runner with the remaining tests.
>>>> shader_runner prints TEST %i: and then the subtest result. %i is the
>>>> i-th file in the list. Python can parse that and re-run shader_runner
>>>> with the first %i tests removed. (0..%i-1 -> parse subtest results; %i
>>>> -> crash; %i+1.. -> run again)
>>>> I'm by no means a python expert, so here's an alternative solution (for me):
>>>> - Catch crash signals in shader_runner.
>>>> - In the single handler, re-run shader_runner with the remaining tests.
>>>> Opinions welcome,
> Per-test process isolation is a key feature of Piglit that the Intel CI
> relies upon.  If non-crash errors bleed into separate tests, results
> will be unusable.
> In fact, we wrap all other test suites in piglit primarily to provide
> them with per-test process isolation.
> For limiting test run-time, we shard tests into groups and run them on
> parallel systems.  Currently this is only supported for our dEQP tests,
> but it can make test time arbitrarily low if you have adequate hardware.
> For test suites that don't support sharding, I think it would be useful
> to generate suites from start/end times that can run the maximal set of
> tests in the targeted duration.
> I would be worried by complex handling of crashes.  It would be
> preferable if separate suites were available to run with/without shader
> runner process isolation.
> Users desiring faster execution can spend the saved time figuring out
> which test crashed.

I would say that the majority of upstream users care more about piglit
running time and less about process isolation.

Process isolation can be an optional piglit flag.

>>>> Marek
>>>> _______________________________________________
>>>> Piglit mailing list
>>>> Piglit at lists.freedesktop.org
>>>> https://lists.freedesktop.org/mailman/listinfo/piglit
>>> Thanks for working on this Marek,
>>> This has been discussed here several times amongst the Intel group, and
>>> the recurring problem to solve is crashing. I don't have a strong
>>> opinion on Python vs. catching the failure in a signal handler, except
>>> that handling it in Python might be more robust; then again, I'm not
>>> really familiar with what a C signal handler can recover from, so it
>>> may not be.
>> I can catch signals like exceptions and report 'crash'. Then I can
>> open a new process from the handler to run the remaining tests, wait
>> and exit.
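That in-process alternative could look roughly like this, sketched in Python for brevity (the real handler would be C, where it is limited to async-signal-safe calls such as write() and execv(); all names here are hypothetical):

```python
import os
import signal

def install_crash_handler(binary, remaining_tests):
    """Trap fatal signals and exec a fresh runner over the remaining tests.

    `remaining_tests` is a callable returning the test files not yet
    started, so the handler always sees an up-to-date list.
    """
    def handler(signum, frame):
        # Report the crash on stderr, then replace this process with a
        # new runner covering the tests that were never started.
        os.write(2, b"crash\n")
        os.execv(binary, [binary] + remaining_tests())
    for sig in (signal.SIGSEGV, signal.SIGABRT, signal.SIGFPE):
        signal.signal(sig, handler)
```

The caveat is the usual one: after memory corruption the process state may be too damaged for even execv() to succeed, which is why a wrapper-side restart is the more robust option.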
> Will an intermittent crash be run again until it passes?


