[igt-dev] [Intel-gfx] [PATCH i-g-t] i915/gem_exec_parse: Switch to a fixed timeout for basic-allocations

Tvrtko Ursulin tvrtko.ursulin at linux.intel.com
Mon Feb 11 17:18:02 UTC 2019


On 11/02/2019 14:35, Chris Wilson wrote:
> basic-allocations was written to demonstrate a flaw in our continual
> reallocation of cmdparser shadow bo, largely fixed by keeping a small
> cache of bo of different lengths (to speed up the search for the correct
> sized bo). We only care enough to exercise the slowdown by submitting
> lots of execbufs, and can see the effect of bo caching on the rate, so
> replace the fixed number of iterations with a timeout and count how many
> batches we could submit instead.
> 
> Similarly, we now do not need to wait for all of our queue to complete
> as we can tell the kernel to drop the queue instead.
> 
> References: https://bugs.freedesktop.org/show_bug.cgi?id=107936
> Signed-off-by: Chris Wilson <chris at chris-wilson.co.uk>
> Cc: Tvrtko Ursulin <tvrtko.ursulin at intel.com>
> ---
>   tests/i915/gem_exec_parse.c | 18 +++++++++++-------
>   1 file changed, 11 insertions(+), 7 deletions(-)
> 
> diff --git a/tests/i915/gem_exec_parse.c b/tests/i915/gem_exec_parse.c
> index b653b1bdc..62e8d0a51 100644
> --- a/tests/i915/gem_exec_parse.c
> +++ b/tests/i915/gem_exec_parse.c
> @@ -303,15 +303,15 @@ test_lri(int fd, uint32_t handle, struct test_lri *test)
>   
>   static void test_allocations(int fd)
>   {
> -	uint32_t bbe = MI_BATCH_BUFFER_END;
> +	const uint32_t bbe = MI_BATCH_BUFFER_END;
>   	struct drm_i915_gem_execbuffer2 execbuf;
>   	struct drm_i915_gem_exec_object2 obj[17];
> -	int i, j;
> +	unsigned long count;
>   
>   	intel_require_memory(2, 1ull<<(12 + ARRAY_SIZE(obj)), CHECK_RAM);
>   
>   	memset(obj, 0, sizeof(obj));
> -	for (i = 0; i < ARRAY_SIZE(obj); i++) {
> +	for (int i = 0; i < ARRAY_SIZE(obj); i++) {
>   		uint64_t size = 1ull << (12 + i);
>   
>   		obj[i].handle = gem_create(fd, size);
> @@ -322,17 +322,21 @@ static void test_allocations(int fd)
>   
>   	memset(&execbuf, 0, sizeof(execbuf));
>   	execbuf.buffer_count = 1;
> -	for (j = 0; j < 16384; j++) {
> -		igt_progress("allocations ", j, 16384);
> -		i = rand() % ARRAY_SIZE(obj);
> +
> +	count = 0;
> +	igt_until_timeout(20) {
> +		int i = rand() % ARRAY_SIZE(obj);
>   		execbuf.buffers_ptr = to_user_pointer(&obj[i]);
>   		execbuf.batch_start_offset = (rand() % (1ull<<i)) << 12;
>   		execbuf.batch_start_offset += 64 * (rand() % 64);
>   		execbuf.batch_len = (1ull<<(12+i)) - execbuf.batch_start_offset;
>   		gem_execbuf(fd, &execbuf);
> +		count++;
>   	}
> +	igt_info("Submitted %lu execbufs\n", count);
> +	igt_drop_caches_set(fd, DROP_RESET_ACTIVE); /* Cancel the queued work */

The downside here is that the test starts to exercise a lot more driver
paths. Or is that an upside? It's confusing these days.

I'd prefer it if we just let it run and didn't involve wedge/unwedge.
Well, actually... we could modify the submit loop to sync periodically
rather than build up a 20 second queue. Would a sync after every execbuf
be detrimental to the test goals? Alternatively, submit maybe
ARRAY_SIZE(obj) worth and then sync, as in the sketch below?

Regards,

Tvrtko

>   
> -	for (i = 0; i < ARRAY_SIZE(obj); i++) {
> +	for (int i = 0; i < ARRAY_SIZE(obj); i++) {
>   		gem_sync(fd, obj[i].handle);
>   		gem_close(fd, obj[i].handle);
>   	}
> 


