[Intel-gfx] [PATCH 2/8] drm/i915/execlists: Suppress preempting self
Chris Wilson
chris@chris-wilson.co.uk
Tue Jan 29 10:22:01 UTC 2019
Quoting Tvrtko Ursulin (2019-01-29 10:13:47)
>
> On 29/01/2019 08:58, Chris Wilson wrote:
> > diff --git a/drivers/gpu/drm/i915/selftests/intel_lrc.c b/drivers/gpu/drm/i915/selftests/intel_lrc.c
> > index 2b2ecd76c2ac..58f534a39118 100644
> > --- a/drivers/gpu/drm/i915/selftests/intel_lrc.c
> > +++ b/drivers/gpu/drm/i915/selftests/intel_lrc.c
> > @@ -268,6 +268,143 @@ static int live_late_preempt(void *arg)
> >  	goto err_ctx_lo;
> >  }
> >
> > +struct preempt_client {
> > +	struct igt_spinner spin;
> > +	struct i915_gem_context *ctx;
> > +};
> > +
> > +static int preempt_client_init(struct drm_i915_private *i915,
> > +			       struct preempt_client *c)
> > +{
> > +	c->ctx = kernel_context(i915);
> > +	if (!c->ctx)
> > +		return -ENOMEM;
> > +
> > +	if (igt_spinner_init(&c->spin, i915))
> > +		goto err_ctx;
> > +
> > +	return 0;
> > +
> > +err_ctx:
> > +	kernel_context_close(c->ctx);
> > +	return -ENOMEM;
> > +}
> > +
> > +static void preempt_client_fini(struct preempt_client *c)
> > +{
> > +	igt_spinner_fini(&c->spin);
> > +	kernel_context_close(c->ctx);
> > +}
> > +
> > +static int live_suppress_preempt(void *arg)
> > +{
> > +	struct drm_i915_private *i915 = arg;
> > +	struct intel_engine_cs *engine;
> > +	struct i915_sched_attr attr = {
> > +		.priority = I915_USER_PRIORITY(I915_PRIORITY_MAX)
> > +	};
> > +	struct preempt_client a, b;
> > +	enum intel_engine_id id;
> > +	intel_wakeref_t wakeref;
> > +	int err = -ENOMEM;
> > +
> > +	/*
> > +	 * Verify that if a preemption request does not cause a change in
> > +	 * the current execution order, the preempt-to-idle injection is
> > +	 * skipped and that we do not accidentally apply it after the CS
> > +	 * completion event.
> > +	 */
> > +
> > +	if (!HAS_LOGICAL_RING_PREEMPTION(i915))
> > +		return 0;
> > +
> > +	if (USES_GUC_SUBMISSION(i915))
> > +		return 0; /* presume black blox */
> Pity, but future-proof I suppose. :/
Yeah, this definitely feels like an implementation detail (one that we
instrument the code to detect easily) that we will not be privy to in
future.
> Even though you bothered to add the counter into the guc path.
;)
> Good it's blox and not something worse. :)
I'm keeping that typo now!
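Since the part of the hunk that consumes the counter is snipped below,
for reference: at the end of each engine pass the test asserts that no
preempt-to-idle cycle was recorded. Roughly along these lines (a
paraphrase, not the literal patch text; the message and errno here are
illustrative):

	igt_spinner_end(&a.spin);
	if (engine->execlists.preempt_hang.count) {
		/* Any non-zero count means we injected preempt-to-idle
		 * even though the execution order never changed. */
		pr_err("Preemption recorded x%d; should have been suppressed!\n",
		       engine->execlists.preempt_hang.count);
		err = -EINVAL;
		goto err_client_b;
	}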
> > +	mutex_lock(&i915->drm.struct_mutex);
> > +	wakeref = intel_runtime_pm_get(i915);
> > +
> > +	if (preempt_client_init(i915, &a))
> > +		goto err_unlock;
> > +	if (preempt_client_init(i915, &b))
> > +		goto err_client_a;
> > +
> > +	for_each_engine(engine, i915, id) {
> > +		struct i915_request *rq_a, *rq_b;
> > +		int depth;
> > +
> > +		engine->execlists.preempt_hang.count = 0;
> > +
> > +		rq_a = igt_spinner_create_request(&a.spin,
> > +						  a.ctx, engine,
> > +						  MI_NOOP);
> > +		if (IS_ERR(rq_a)) {
> > +			err = PTR_ERR(rq_a);
> > +			goto err_client_b;
> > +		}
> > +
> > +		i915_request_add(rq_a);
> > +		if (!igt_wait_for_spinner(&a.spin, rq_a)) {
> > +			pr_err("First client failed to start\n");
> > +			goto err_wedged;
> > +		}
> > +
> > +		for (depth = 0; depth < 8; depth++) {
> > +			rq_b = igt_spinner_create_request(&b.spin,
> > +							  b.ctx, engine,
> > +							  MI_NOOP);
> > +			if (IS_ERR(rq_b)) {
> > +				err = PTR_ERR(rq_b);
> > +				goto err_client_b;
> > +			}
> > +			i915_request_add(rq_b);
> > +
> > +			GEM_BUG_ON(i915_request_completed(rq_a));
> > +			engine->schedule(rq_a, &attr);
> > +			igt_spinner_end(&a.spin);
> > +
> > +			if (!igt_wait_for_spinner(&b.spin, rq_b)) {
> > +				pr_err("Second client failed to start\n");
> > +				goto err_wedged;
> > +			}
> > +
> > +			swap(a, b);
> > +			rq_a = rq_b;
>
> Could store rq in the clients as well I think.
Possibly, but rq_a and rq_b are only used in this test so far, and if we
were to store them in the client struct, we would need to be more careful
with refs :)
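
Something like this, if we ever do go that way; just a sketch, not part
of this patch, since the client would then own a reference that has to
be dropped on fini:

	struct preempt_client {
		struct igt_spinner spin;
		struct i915_gem_context *ctx;
		struct i915_request *rq; /* owned reference; hypothetical */
	};

	/* Take our own reference before submitting: after
	 * i915_request_add() we otherwise hold no ref to keep. */
	c->rq = i915_request_get(rq);
	i915_request_add(rq);

	/* ...and preempt_client_fini() would then need: */
	if (c->rq)
		i915_request_put(c->rq);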
-Chris