[Intel-gfx] [PATCH igt] igt/gem_exec_schedule: Exercise preemption timeout
Antonio Argenziano
antonio.argenziano at intel.com
Fri Apr 13 17:20:02 UTC 2018
On 13/04/18 08:59, Chris Wilson wrote:
> Quoting Antonio Argenziano (2018-04-13 16:54:27)
>>
>>
>> On 13/04/18 07:14, Chris Wilson wrote:
>>> Set up an unpreemptible spinner such that the only way we can inject a
>>> high priority request onto the GPU is by resetting the spinner. The test
>>> fails if we trigger hangcheck rather than the fast timeout mechanism.
>>>
>>> Signed-off-by: Chris Wilson <chris at chris-wilson.co.uk>
>>> ---
>>> lib/i915/gem_context.c | 72 +++++++++++++++++++++++++++++++--------
>>> lib/i915/gem_context.h | 3 ++
>>> lib/igt_dummyload.c | 12 +++++--
>>> lib/igt_dummyload.h | 3 ++
>>> tests/gem_exec_schedule.c | 34 ++++++++++++++++++
>>> 5 files changed, 106 insertions(+), 18 deletions(-)
>>>
>>
>> ...
>>
>>> @@ -449,8 +457,6 @@ void igt_spin_batch_end(igt_spin_t *spin)
>>> if (!spin)
>>> return;
>>>
>>> - igt_assert(*spin->batch == MI_ARB_CHK ||
>>> - *spin->batch == MI_BATCH_BUFFER_END);
>>
>> I am not sure why we needed this, but it seems safe to remove.
>>
>>> *spin->batch = MI_BATCH_BUFFER_END;
>>> __sync_synchronize();
>>> }
>>
>>> diff --git a/tests/gem_exec_schedule.c b/tests/gem_exec_schedule.c
>>> index 6ff15b6ef..93254945b 100644
>>> --- a/tests/gem_exec_schedule.c
>>> +++ b/tests/gem_exec_schedule.c
>>> @@ -656,6 +656,37 @@ static void preemptive_hang(int fd, unsigned ring)
>>> gem_context_destroy(fd, ctx[HI]);
>>> }
>>>
>>> +static void preempt_timeout(int fd, unsigned ring)
>>> +{
>>> + igt_spin_t *spin[3];
>>> + uint32_t ctx;
>>> +
>>> + igt_require(__gem_context_set_preempt_timeout(fd, 0, 0));
>>> +
>>> + ctx = gem_context_create(fd);
>>> + gem_context_set_priority(fd, ctx, MIN_PRIO);
>>> + spin[0] = __igt_spin_batch_new_hang(fd, ctx, ring);
>>> + spin[1] = __igt_spin_batch_new_hang(fd, ctx, ring);
Should we send MAX_ELSP_QLEN batches to match other preemption tests?
>>> + gem_context_destroy(fd, ctx);
>>> +
>>> + ctx = gem_context_create(fd);
>>> + gem_context_set_priority(fd, ctx, MAX_PRIO);
>>> + gem_context_set_preempt_timeout(fd, ctx, 1000 * 1000);
>>> + spin[2] = __igt_spin_batch_new(fd, ctx, ring, 0);
>>> + gem_context_destroy(fd, ctx);
>>> +
>>> + igt_spin_batch_end(spin[2]);
>>> + gem_sync(fd, spin[2]->handle);
>>
>> Does this guarantee that spin[1] did not overtake spin[2]?
>
> It does as well. Neither spin[0] nor spin[1] can complete without being
> reset at this point. If they are reset (by hangcheck) we detect that and
Cool.
> die. What we expect to happen is spin[0] is (more or less, there is still
> dmesg) silently killed by the preempt timeout. If that timeout doesn't
The silent part is interesting: how do we make sure that during normal
preemption (e.g. preempting on an ARB_CHECK) we don't silently discard
the preempted batch? Do we care?
Test looks good,
Reviewed-by: Antonio Argenziano <antonio.argenziano at intel.com>
Thanks,
Antonio
> happen, hangcheck fires instead. What we don't check here is how quickly.
> Now we could reasonably assert that the spin[2] -> gem_sync takes less
> than 2ms.
> -Chris
>