[Intel-gfx] ✓ Fi.CI.BAT: success for IGT PMU support (rev18)

Tvrtko Ursulin tvrtko.ursulin@linux.intel.com
Wed Nov 22 11:57:19 UTC 2017


Hi guys,

On 22/11/2017 11:41, Patchwork wrote:

[snip]

> Testlist changes:
> +igt@perf_pmu@all-busy-check-all
> +igt@perf_pmu@busy-bcs0
> +igt@perf_pmu@busy-check-all-bcs0
> +igt@perf_pmu@busy-check-all-rcs0
> +igt@perf_pmu@busy-check-all-vcs0
> +igt@perf_pmu@busy-check-all-vcs1
> +igt@perf_pmu@busy-check-all-vecs0
> +igt@perf_pmu@busy-no-semaphores-bcs0
> +igt@perf_pmu@busy-no-semaphores-rcs0
> +igt@perf_pmu@busy-no-semaphores-vcs0
> +igt@perf_pmu@busy-no-semaphores-vcs1
> +igt@perf_pmu@busy-no-semaphores-vecs0
> +igt@perf_pmu@busy-rcs0
> +igt@perf_pmu@busy-vcs0
> +igt@perf_pmu@busy-vcs1
> +igt@perf_pmu@busy-vecs0
> +igt@perf_pmu@cpu-hotplug
> +igt@perf_pmu@event-wait-rcs0
> +igt@perf_pmu@frequency
> +igt@perf_pmu@idle-bcs0
> +igt@perf_pmu@idle-no-semaphores-bcs0
> +igt@perf_pmu@idle-no-semaphores-rcs0
> +igt@perf_pmu@idle-no-semaphores-vcs0
> +igt@perf_pmu@idle-no-semaphores-vcs1
> +igt@perf_pmu@idle-no-semaphores-vecs0
> +igt@perf_pmu@idle-rcs0
> +igt@perf_pmu@idle-vcs0
> +igt@perf_pmu@idle-vcs1
> +igt@perf_pmu@idle-vecs0
> +igt@perf_pmu@init-busy-bcs0
> +igt@perf_pmu@init-busy-rcs0
> +igt@perf_pmu@init-busy-vcs0
> +igt@perf_pmu@init-busy-vcs1
> +igt@perf_pmu@init-busy-vecs0
> +igt@perf_pmu@init-sema-bcs0
> +igt@perf_pmu@init-sema-rcs0
> +igt@perf_pmu@init-sema-vcs0
> +igt@perf_pmu@init-sema-vcs1
> +igt@perf_pmu@init-sema-vecs0
> +igt@perf_pmu@init-wait-bcs0
> +igt@perf_pmu@init-wait-rcs0
> +igt@perf_pmu@init-wait-vcs0
> +igt@perf_pmu@init-wait-vcs1
> +igt@perf_pmu@init-wait-vecs0
> +igt@perf_pmu@interrupts
> +igt@perf_pmu@invalid-init
> +igt@perf_pmu@most-busy-check-all-bcs0
> +igt@perf_pmu@most-busy-check-all-rcs0
> +igt@perf_pmu@most-busy-check-all-vcs0
> +igt@perf_pmu@most-busy-check-all-vcs1
> +igt@perf_pmu@most-busy-check-all-vecs0
> +igt@perf_pmu@multi-client-bcs0
> +igt@perf_pmu@multi-client-rcs0
> +igt@perf_pmu@multi-client-vcs0
> +igt@perf_pmu@multi-client-vcs1
> +igt@perf_pmu@multi-client-vecs0
> +igt@perf_pmu@other-init-0
> +igt@perf_pmu@other-init-1
> +igt@perf_pmu@other-init-2
> +igt@perf_pmu@other-init-3
> +igt@perf_pmu@other-init-4
> +igt@perf_pmu@other-init-5
> +igt@perf_pmu@other-init-6
> +igt@perf_pmu@other-read-0
> +igt@perf_pmu@other-read-1
> +igt@perf_pmu@other-read-2
> +igt@perf_pmu@other-read-3
> +igt@perf_pmu@other-read-4
> +igt@perf_pmu@other-read-5
> +igt@perf_pmu@other-read-6
> +igt@perf_pmu@rc6
> +igt@perf_pmu@rc6p
> +igt@perf_pmu@render-node-busy-bcs0
> +igt@perf_pmu@render-node-busy-rcs0
> +igt@perf_pmu@render-node-busy-vcs0
> +igt@perf_pmu@render-node-busy-vcs1
> +igt@perf_pmu@render-node-busy-vecs0
> +igt@perf_pmu@semaphore-wait-bcs0
> +igt@perf_pmu@semaphore-wait-rcs0
> +igt@perf_pmu@semaphore-wait-vcs0
> +igt@perf_pmu@semaphore-wait-vcs1
> +igt@perf_pmu@semaphore-wait-vecs0

Would it be possible to do a trial run of these new tests on the shards?

If that run is successful, we can add them to the testlist. Total runtime
should be up to 30 seconds.

But I wouldn't be surprised if there are issues, since I was only able to
test on SKL during development.
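
For anyone who wants to poke at the counters these subtests exercise
without going through IGT, here is a minimal sketch (my own illustration,
not code from the series) that samples rcs0 busyness via
perf_event_open(2). It assumes kernel uapi headers new enough to provide
I915_PMU_ENGINE_BUSY() in <drm/i915_drm.h>, and it typically needs root
since i915 PMU events are system-wide:

/*
 * Minimal sketch, not part of this series: sample rcs0 busyness
 * through the i915 PMU via perf_event_open(2).
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <sys/syscall.h>
#include <linux/perf_event.h>
#include <drm/i915_drm.h>

static int i915_pmu_type(void)
{
	/* The i915 PMU is dynamic, so its event source id lives in sysfs. */
	FILE *f = fopen("/sys/bus/event_source/devices/i915/type", "r");
	int type = -1;

	if (f) {
		if (fscanf(f, "%d", &type) != 1)
			type = -1;
		fclose(f);
	}
	return type;
}

int main(void)
{
	struct perf_event_attr attr;
	long long busy_ns;
	int type, fd;

	type = i915_pmu_type();
	if (type < 0) {
		fprintf(stderr, "i915 PMU not found in sysfs\n");
		return EXIT_FAILURE;
	}

	memset(&attr, 0, sizeof(attr));
	attr.size = sizeof(attr);
	attr.type = type;
	/* Engine class 0, instance 0 is the render engine, aka rcs0. */
	attr.config = I915_PMU_ENGINE_BUSY(0, 0);

	/* System-wide event: pid == -1, one CPU from the PMU's cpumask. */
	fd = syscall(__NR_perf_event_open, &attr, -1, 0, -1, 0);
	if (fd < 0) {
		perror("perf_event_open");
		return EXIT_FAILURE;
	}

	sleep(1); /* accumulate busyness over one second */
	if (read(fd, &busy_ns, sizeof(busy_ns)) == (ssize_t)sizeof(busy_ns))
		printf("rcs0 busy: %lld ns\n", busy_ns);
	close(fd);

	return EXIT_SUCCESS;
}

With something keeping the render engine busy in the background it should
print close to 1000000000 ns, and close to zero when idle, which is
essentially what the busy-rcs0 and idle-rcs0 subtests above assert, only
with proper tolerances.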

Regards,

Tvrtko

