[igt-dev] [PATCH i-g-t v3 1/2] tests/intel-ci: Add basic PSR2 tests to fast feedback test list

Tomi Sarvela tomi.p.sarvela at intel.com
Fri Jan 25 11:27:42 UTC 2019


On 1/25/19 1:03 PM, Martin Peres wrote:
> On 25/01/2019 11:45, Daniel Vetter wrote:
>> On Thu, Jan 24, 2019 at 02:11:30PM -0800, Rodrigo Vivi wrote:
>>> On Thu, Jan 24, 2019 at 01:55:41PM +0100, Daniel Vetter wrote:
>>>> On Wed, Jan 23, 2019 at 09:17:17AM -0800, Rodrigo Vivi wrote:
>>>>> On Wed, Jan 23, 2019 at 05:51:11PM +0100, Daniel Vetter wrote:
>>>>>> On Wed, Jan 23, 2019 at 5:45 PM Rodrigo Vivi <rodrigo.vivi at intel.com> wrote:
>>>>>>>
>>>>>>> On Wed, Jan 23, 2019 at 01:07:32PM +0100, Daniel Vetter wrote:
>>>>>>>> On Wed, Jan 23, 2019 at 01:37:19PM +0200, Petri Latvala wrote:
>>>>>>>>> On Tue, Jan 22, 2019 at 05:09:49PM -0800, José Roberto de Souza wrote:
>>>>>>>>>> Let's run the same PSR1 basic tests for PSR2, to catch PSR2
>>>>>>>>>> regressions faster.
>>>>>>>>>>
>>>>>>>>>> Cc: Rodrigo Vivi <rodrigo.vivi at intel.com>
>>>>>>>>>> Cc: Dhinakaran Pandiyan <dhinakaran.pandiyan at intel.com>
>>>>>>>>>> Signed-off-by: José Roberto de Souza <jose.souza at intel.com>
>>>>>>>>>> ---
>>>>>>>>>>   tests/intel-ci/fast-feedback.testlist | 4 ++++
>>>>>>>>>>   1 file changed, 4 insertions(+)
>>>>>>>>>>
>>>>>>>>>> diff --git a/tests/intel-ci/fast-feedback.testlist b/tests/intel-ci/fast-feedback.testlist
>>>>>>>>>> index da3c4c8e..e48cb8a5 100644
>>>>>>>>>> --- a/tests/intel-ci/fast-feedback.testlist
>>>>>>>>>> +++ b/tests/intel-ci/fast-feedback.testlist
>>>>>>>>>> @@ -227,6 +227,10 @@ igt at kms_psr@primary_page_flip
>>>>>>>>>>   igt at kms_psr@cursor_plane_move
>>>>>>>>>>   igt at kms_psr@sprite_plane_onoff
>>>>>>>>>>   igt at kms_psr@primary_mmap_gtt
>>>>>>>>>> +igt at kms_psr@psr2_primary_page_flip
>>>>>>>>>> +igt at kms_psr@psr2_cursor_plane_move
>>>>>>>>>> +igt at kms_psr@psr2_sprite_plane_onoff
>>>>>>>>>> +igt at kms_psr@psr2_primary_mmap_gtt
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> The BAT results mail said success because these are new tests, but do
>>>>>>>>> note that they failed. They must pass to get onto the BAT list.
>>>>>>>>
>>>>>>>> Also, adding all kinds of tests to BAT to validate features doesn't scale.
>>>>>>>> We need some way to run these tests on specific machines as part of the
>>>>>>>> follow-up shard runs ... Otherwise we're stuck with a huge pressure to add
>>>>>>>> all kinds of super-important-feature-right-now things to BAT.
>>>>>>>
>>>>>>> I understand and I agree with your point. But on this very specific case
>>>>>>> no shard have PSR1 or PSR2 panels.
>>>>>>
>>>>>> Yeah. Same way that no shard has:
>>>>>> - mst
>>>>>> - hdcp
>>>>>> - dsi
>>>>>> - 4k
>>>>>> - ...
>>>>>
>>>>> "coincidentally" all display related :-)
>>>>>
>>>>>>
>>>>>> The list is very long. Everyone wants their feature to be an
>>>>>> exception. Everyone's feature only increases test time by "not much".
>>>>>
>>>>> Yeah, I understand that everybody will claim their feature is important,
>>>>> but for me another factor that justifies the increase is the "fragile"
>>>>> part.
>>>>>
>>>>> For me, important + fragile deserves a slot even if we have to wait
>>>>> a few minutes more for the result :/
>>>>>
>>>>>>
>>>>>>> Also, this shouldn't increase the test time much: machines with PSR1
>>>>>>> panels are already running the PSR1 tests, and machines without PSR
>>>>>>> are not running anything. Only machines with PSR2 panels go from
>>>>>>> running no PSR tests to running these few PSR2 tests.
>>>>>>
>>>>>> Ok, I guess that ship sailed with the psr1 tests already then.
>>>>>
>>>>> besides, I think MST also deserves this "privilege" :)
>>>>
>>>> You misunderstood I think, I'm not saying we shouldn't test this. I'm
>>>> saying we shouldn't test this in BAT, but solve this problem for real,
>>>> through some dedicated machines that run specific tests as part of shards.
>>>> That's the real fix, and the fix that scales, and the fix that will allow
>>>> us to test a lot more than just a few BAT tests on a few very select
>>>> machines.
>>>
>>> Oh! I see now... That's indeed a much smarter way of scaling this.
>>>
>>> And maybe not necessarily "shard" machines and not necessarily running all IGT.
>>> And maybe some specific feature-machine.testlist that is part of the
>>> second round of CI-IGT...
>>
>> Yes, not a full "shard", just as part of the shard runs. We don't have
>> enough lab space to have a full shard for every interesting combination,
>> that's the underlying problem. Those special machines would only run psr
>> tests, or mst tests, or whatever else is special with them. Of course if
>> there's idle time we could maybe add more interesting tests to their
>> testlist.
>>
>> Also, I'm not sure where to maintain the testlists for these, maybe in igt
>> even. The issue is that we want to make sure any new psr test is added
>> automatically to the psr machines (as an example).
>>
>> Cheers, Daniel
>>>
>>> Martin? :$
>>>
>>>>
>>>> And imo as feature owners for this, _you_ folks should be fighting for
>>>> this, instead of being ok with squeezing a few tests into BAT. That's not
>>>> good enough (aside from that it's inefficient).
>>>>
>>>> I want more testing, not less. So should you :-)
> 
> I agree with the idea. I actually don't like calling the second round
> the "Shard Run"; I'd rather call it CI Full.
> 
> Indeed, the piglit machines only run piglit during the CI Full run, and
> they are not sharded. Having more machines in the CI Full run, each
> dedicated to executing a specific set of tests (even if those tests are
> part of IGT), is IMO the way to go for features requiring specific HW.
> 
> As to how to implement this, I think the testlist should be a whitelist
> hosted in the IGT repo. As for the CI system, I will let Tomi comment on
> this!
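
[Editor's note: the whitelist idea above could be sketched roughly as
follows. This is a hypothetical shell snippet: the file paths and the
flat igt@test@subtest list format are assumptions, not the actual CI
setup. It derives a PSR-machine testlist by filtering a full test list,
so newly added PSR tests would be picked up automatically on
regeneration.]

```shell
# Hypothetical sketch only: derive a per-feature testlist by filtering a
# flat list of igt@test@subtest names. The input file is generated
# inline for illustration; the real source list and paths would differ.
cat > /tmp/all-tests.txt <<'EOF'
igt@kms_psr@primary_page_flip
igt@kms_psr@psr2_primary_page_flip
igt@gem_exec_basic@basic
EOF

# Keep only PSR subtests; a new igt@kms_psr@* test would land in the
# machine's testlist automatically the next time this runs.
grep '^igt@kms_psr@' /tmp/all-tests.txt > /tmp/psr-machine.testlist
cat /tmp/psr-machine.testlist
```

If such a filter ran as part of the testlist generation step, the
per-machine whitelist would never go stale.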

I think this is doable, much like the piglit hosts are done. They each
have a gen-specific testlist (created with blacklists from the Mesa CI
repo) and around 40 minutes to complete one run. The fastest piglit host
uses about 20 minutes and idles 70% of the time.

Now, the interesting part comes when we create a hardware setup with
DP-MST (or PSR2, or 4 monitors) and want to run it through a specific
testlist with different software features. Let's say the 40 minutes were
split between non-GuC, GuC, and IOMMU-enabled runs ... still possible to
do, but it needs more hang recovery. It helps a lot time-wise if there
are as few hangs/incompletes as possible.
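
[Editor's note: as a rough sketch, the split described above could look
like iterating over kernel command-line variants between runs. The
i915.enable_guc and intel_iommu parameters are real kernel options, but
the values chosen and the reboot/run mechanism here are purely
illustrative assumptions.]

```shell
# Illustrative only: one pass per software configuration on the same
# hardware setup. In reality each pass would reboot the host with the
# given kernel parameters before running the feature testlist.
for cfg in "i915.enable_guc=0" "i915.enable_guc=2" "intel_iommu=on"; do
    echo "would reboot with: $cfg and run psr-machine.testlist"
done
```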

Tomi
-- 
Intel Finland Oy - BIC 0357606-4 - Westendinkatu 7, 02160 Espoo

