[Intel-gfx] A whole round of i-g-t testing takes too long to run

Daniel Vetter daniel.vetter at intel.com
Tue Apr 15 19:17:59 CEST 2014


On 15/04/2014 17:46, Yang, Guang A wrote:
>
> Hi all,
>
> I have discussed the per-test running time with Daniel before, and we
> set the limit at 10 minutes: if a test cannot finish within 10 minutes
> we treat it as a Timeout and report a bug on FDO (for example
> Bug 77474 <https://bugs.freedesktop.org/show_bug.cgi?id=77474> -
> [PNV/IVB/HSW] igt/gem_tiled_swapping is slow, and Bug 77475
> <https://bugs.freedesktop.org/show_bug.cgi?id=77475> -
> [PNV/IVB/HSW] igt/kms_pipe_crc_basic/read-crc-pipe-A is slow).
>
> The current situation is that i-g-t has more than 650 subtests, and
> running a whole round of testing takes a very long time on the QA
> side (even leaving the Timeout cases aside). QA also needs extra time
> to analyse the result changes on each platform.
>
> You can see how long one testing round takes on this page:
> http://tinderbox.sh.intel.com/PRTS_UI/prtsresult.php?task_id=2778
>
> In the table for subtask 10831 on that page, which covers the i-g-t
> test cases on BDW, testing started at 19:16 and finished at 03:25 the
> next day, i.e. about *8 hours* to run 638 test cases.
>
> Each case finished in less than 10 minutes as we expect, but the total
> time is too long, and BDW is the most powerful machine on our side;
> ILK or PNV may take more than *10 hours*. We not only run i-g-t but
> also need to test piglit/performance/media, which already takes time.
>
> Do we have any solutions to reduce the running time of the whole i-g-t
> suite? It is a pressing problem for QA now that the i-g-t case count
> has grown from 50 to 600+.
>
Ok, there are a few cases where we can indeed make tests faster, but that
will be work for us. And it won't really speed things up much, since we're
adding piles more testcases at a pretty quick rate, and many of these new
testcases are CRC based, so they inherently take some time to run.

So I think longer-term we simply need to throw more machines at the
problem and run testcases in parallel on identical machines.
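
A minimal sketch of that sharding idea, in Python, assuming a plain-text
list with one subtest name per line and a machine index/count passed on
the command line (the script and file names are made up, not an existing
tool):

# Hypothetical sketch: deterministically shard a full igt subtest list
# across N identical machines so each one runs a disjoint subset.
import hashlib
import sys

def shard(tests, machine_index, machine_count):
    """Return the subset of tests assigned to one machine."""
    assigned = []
    for name in tests:
        # Stable hash, so the split stays the same between runs.
        digest = hashlib.sha1(name.encode("utf-8")).hexdigest()
        if int(digest, 16) % machine_count == machine_index:
            assigned.append(name)
    return assigned

if __name__ == "__main__":
    # usage: python shard_tests.py all-tests.txt <machine_index> <machine_count>
    path, index, count = sys.argv[1], int(sys.argv[2]), int(sys.argv[3])
    with open(path) as f:
        tests = [line.strip() for line in f if line.strip()]
    for name in shard(tests, index, count):
        print(name)

Each of the N identical machines would then run only the subset printed
for its index, which should cut the wall-clock time roughly by N as long
as the boxes really are identical.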

Wrt analyzing issues I think the right approach for moving forward is:
a) switch to piglit to run tests, not just enumerate them. This will
allow QA and developers to share testcase analysis.
b) add automated analysis for time-consuming and error-prone cases like
dmesg warnings and backtraces. Thomas and I have just discussed a few
ideas in this area in our 1:1 today; a rough sketch of what such a dmesg
check could look like follows below this list.
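
As an illustration of b), here is a minimal Python sketch that scans
captured dmesg logs for warnings and backtraces, assuming one log file
per test and made-up file/script names (this is not an existing piglit
or QA feature):

# Hypothetical sketch of automated dmesg triage: flag logs that contain
# kernel warnings, oopses or backtraces so only those need a human look.
import re
import sys

# Markers that usually indicate a kernel warning, oops or backtrace.
SUSPECT = re.compile(r"WARNING: |\[ cut here \]|BUG: |Call Trace:")

def scan(path):
    """Return the suspicious-looking lines found in one dmesg log."""
    hits = []
    with open(path, errors="replace") as f:
        for line in f:
            if SUSPECT.search(line):
                hits.append(line.rstrip())
    return hits

if __name__ == "__main__":
    # usage: python dmesg_triage.py dmesg-<test>.log [more logs ...]
    for path in sys.argv[1:]:
        hits = scan(path)
        if hits:
            print("%s: %d suspicious line(s)" % (path, len(hits)))
            for line in hits:
                print("    " + line)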

Reducing the set of igt tests we run is imo pointless: the goal of igt
is to hit corner cases, and arbitrarily selecting which kinds of
corner cases we test just means that we have a nice illusion about our
test coverage.

Adding more people to the discussion.

Cheers, Daniel