[Intel-gfx] [PATCH 3/8] drm/i915: Automatic i915_switch_context for legacy
Chris Wilson
chris at chris-wilson.co.uk
Thu Nov 16 14:44:31 UTC 2017
Quoting Mika Kuoppala (2017-11-16 14:00:13)
> Chris Wilson <chris at chris-wilson.co.uk> writes:
>
> > During request construction, after pinning the context we know whether
> > or not we have to emit a context switch. So move this common operation
> > from every caller into i915_gem_request_alloc() itself.
> >
> > v2: Always submit the request if we emitted some commands during request
> > construction, as typically it also involves changes in global state.
> >
> > Signed-off-by: Chris Wilson <chris at chris-wilson.co.uk>
> > ---
> > drivers/gpu/drm/i915/i915_gem.c | 2 +-
> > drivers/gpu/drm/i915/i915_gem_context.c | 7 +------
> > drivers/gpu/drm/i915/i915_gem_execbuffer.c | 8 --------
> > drivers/gpu/drm/i915/i915_gem_request.c | 18 +++++++++++++-----
> > drivers/gpu/drm/i915/i915_perf.c | 3 +--
> > drivers/gpu/drm/i915/intel_ringbuffer.c | 4 ++++
> > drivers/gpu/drm/i915/selftests/huge_pages.c | 4 ----
> > drivers/gpu/drm/i915/selftests/i915_gem_context.c | 4 ----
> > drivers/gpu/drm/i915/selftests/i915_gem_request.c | 10 ----------
> > drivers/gpu/drm/i915/selftests/intel_hangcheck.c | 4 ----
> > 10 files changed, 20 insertions(+), 44 deletions(-)
> >
> > diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
> > index a7979b74ce21..d885cf0d2943 100644
> > --- a/drivers/gpu/drm/i915/i915_gem.c
> > +++ b/drivers/gpu/drm/i915/i915_gem.c
> > @@ -5043,7 +5043,7 @@ static int __intel_engines_record_defaults(struct drm_i915_private *i915)
> > goto out_ctx;
> > }
> >
> > - err = i915_switch_context(rq);
> > + err = 0;
> > if (engine->init_context)
> > err = engine->init_context(rq);
> >
> > diff --git a/drivers/gpu/drm/i915/i915_gem_context.c b/drivers/gpu/drm/i915/i915_gem_context.c
> > index 2db040695035..c1efbaf02bf2 100644
> > --- a/drivers/gpu/drm/i915/i915_gem_context.c
> > +++ b/drivers/gpu/drm/i915/i915_gem_context.c
> > @@ -842,8 +842,7 @@ int i915_switch_context(struct drm_i915_gem_request *req)
> > struct intel_engine_cs *engine = req->engine;
> >
> > lockdep_assert_held(&req->i915->drm.struct_mutex);
> > - if (i915_modparams.enable_execlists)
> > - return 0;
> > + GEM_BUG_ON(i915_modparams.enable_execlists);
> >
> > if (!req->ctx->engine[engine->id].state) {
> > struct i915_gem_context *to = req->ctx;
> > @@ -899,7 +898,6 @@ int i915_gem_switch_to_kernel_context(struct drm_i915_private *dev_priv)
> >
> > for_each_engine(engine, dev_priv, id) {
> > struct drm_i915_gem_request *req;
> > - int ret;
> >
> > if (engine_has_idle_kernel_context(engine))
> > continue;
> > @@ -922,10 +920,7 @@ int i915_gem_switch_to_kernel_context(struct drm_i915_private *dev_priv)
> > GFP_KERNEL);
> > }
> >
> > - ret = i915_switch_context(req);
> > i915_add_request(req);
> > - if (ret)
> > - return ret;
> > }
> >
> > return 0;
> > diff --git a/drivers/gpu/drm/i915/i915_gem_execbuffer.c b/drivers/gpu/drm/i915/i915_gem_execbuffer.c
> > index 435ed95df144..85c7e8afe26e 100644
> > --- a/drivers/gpu/drm/i915/i915_gem_execbuffer.c
> > +++ b/drivers/gpu/drm/i915/i915_gem_execbuffer.c
> > @@ -1115,10 +1115,6 @@ static int __reloc_gpu_alloc(struct i915_execbuffer *eb,
> > if (err)
> > goto err_request;
> >
> > - err = i915_switch_context(rq);
> > - if (err)
> > - goto err_request;
> > -
> > err = eb->engine->emit_bb_start(rq,
> > batch->node.start, PAGE_SIZE,
> > cache->gen > 5 ? 0 : I915_DISPATCH_SECURE);
> > @@ -1965,10 +1961,6 @@ static int eb_submit(struct i915_execbuffer *eb)
> > if (err)
> > return err;
> >
> > - err = i915_switch_context(eb->request);
> > - if (err)
> > - return err;
> > -
> > if (eb->args->flags & I915_EXEC_GEN7_SOL_RESET) {
> > err = i915_reset_gen7_sol_offsets(eb->request);
> > if (err)
> > diff --git a/drivers/gpu/drm/i915/i915_gem_request.c b/drivers/gpu/drm/i915/i915_gem_request.c
> > index e0d6221022a8..445495f9893c 100644
> > --- a/drivers/gpu/drm/i915/i915_gem_request.c
> > +++ b/drivers/gpu/drm/i915/i915_gem_request.c
> > @@ -624,6 +624,10 @@ i915_gem_request_alloc(struct intel_engine_cs *engine,
> > if (ret)
> > goto err_unpin;
> >
> > + ret = intel_ring_wait_for_space(ring, MIN_SPACE_FOR_ADD_REQUEST);
> > + if (ret)
> > + goto err_unreserve;
> > +
> > /* Move the oldest request to the slab-cache (if not in use!) */
> > req = list_first_entry_or_null(&engine->timeline->requests,
> > typeof(*req), link);
> > @@ -703,10 +707,6 @@ i915_gem_request_alloc(struct intel_engine_cs *engine,
> > req->reserved_space = MIN_SPACE_FOR_ADD_REQUEST;
> > GEM_BUG_ON(req->reserved_space < engine->emit_breadcrumb_sz);
> >
> > - ret = engine->request_alloc(req);
> > - if (ret)
> > - goto err_ctx;
> > -
> > /* Record the position of the start of the request so that
> > * should we detect the updated seqno part-way through the
> > * GPU processing the request, we never over-estimate the
> > @@ -714,16 +714,24 @@ i915_gem_request_alloc(struct intel_engine_cs *engine,
> > */
> > req->head = req->ring->emit;
> >
> > + ret = engine->request_alloc(req);
> > + if (ret)
> > + goto err_ctx;
> > +
>
> Ok, as we don't emit anything with request_alloc, you can switch the
> order.
> > /* Check that we didn't interrupt ourselves with a new request */
> > GEM_BUG_ON(req->timeline->seqno != req->fence.seqno);
> > return req;
> >
> > err_ctx:
> > + if (req->ring->emit != req->head) {
> > + __i915_add_request(req, false);
> > + return ERR_PTR(ret);
>
> ..so why this check? What could emit, and why the early return?
We check just in case request_alloc did emit some commands, such as
MI_SET_CONTEXT, but then failed to emit the LRI to update the ppgtt. In
such a case, we would have already updated engine->legacy_active_context,
and so we must commit the request to keep the SW bookkeeping in step with
the HW. So we assume that if we did emit any commands, we need to submit
the request.
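To put the same point as code: a minimal sketch (not the driver code; the
struct and helper below are made up for illustration) of the invariant the
new err_ctx check relies on. If anything has been written into the ring
since req->head was recorded, the request has to be submitted even when a
later step fails.

	#include <stdbool.h>

	/* Illustrative stand-in for the ring bookkeeping in a request. */
	struct sketch_request {
		unsigned int head;	/* ring->emit recorded when the request started */
		unsigned int emit;	/* current ring write position */
	};

	/*
	 * If request construction emitted anything (e.g. MI_SET_CONTEXT from
	 * i915_switch_context), those commands will reach the HW, so the SW
	 * tracking (legacy_active_context) must be committed by submitting
	 * the request rather than unwinding it.
	 */
	static bool must_submit_on_error(const struct sketch_request *rq)
	{
		return rq->emit != rq->head;
	}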
> And also, more generally, we do drop some error return paths
> due to removing switch_context(). Is there a risk of ignoring,
> and not propagating, an error code that we previously caught
> at i915_switch_context()?
No. The error returns still propagate through request_alloc.
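For reference, the legacy path now does the switch from the engine's
request_alloc hook, so its return value is still seen by
i915_gem_request_alloc(). A rough sketch of the shape of the
intel_ringbuffer.c hunk (the surrounding ring-space reservation is elided
and assumed, not quoted from the patch):

	static int ring_request_alloc(struct drm_i915_gem_request *request)
	{
		int ret;

		GEM_BUG_ON(!request->ctx->engine[request->engine->id].pin_count);

		/* ... existing ring-space reservation, unchanged ... */

		/*
		 * Emit the context switch as part of request construction; any
		 * error here is returned to i915_gem_request_alloc() and so
		 * still propagates to the original callers.
		 */
		ret = i915_switch_context(request);
		if (ret)
			return ret;

		return 0;
	}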
-Chris