[Intel-gfx] [PATCH v2 08/11] drm/i915/execlists: Keep request->priority for its lifetime
Michał Winiarski
michal.winiarski at intel.com
Thu Sep 28 09:14:47 UTC 2017
On Wed, Sep 27, 2017 at 04:44:37PM +0000, Chris Wilson wrote:
> With preemption, we will want to "unsubmit" a request, taking it back
> from the hw and returning it to the priority sorted execution list. In
> order to know where to insert it into that list, we need to remember
its adjusted priority (which may change even while it is being executed).
This has a (positive IMO) side effect that should be mentioned here.
Starting from:
drm/i915/execlists: Unwind incomplete requests on resets
We were relying on the fact that we overwrote the priority on submission to HW
to resubmit the same requests on GPU reset.
Since we now keep track of the priority, we're going to 'resubmit' with the
correct priority, making reset another "preemption" point. A quicker and more
reliable preemption! Although a bit sad for the requests that get "preempted"
this way :)
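For reference, my mental model of the unwind-on-reset path, as a self-contained
sketch (the names, the singly linked queue and the inflight array are all
illustrative, not the actual i915 structures):

#include <stdbool.h>
#include <stddef.h>

struct request {
        int priority;           /* kept for the request's lifetime */
        bool completed;
        struct request *next;
};

/* Insert into a queue sorted by descending priority. An unwound
 * request goes ahead of queued requests of equal priority, since it
 * had already been submitted once. */
static void queue_request(struct request **queue, struct request *rq)
{
        while (*queue && (*queue)->priority > rq->priority)
                queue = &(*queue)->next;
        rq->next = *queue;
        *queue = rq;
}

/* On reset, take the incomplete requests back from the hardware,
 * newest first, and reinsert them using the priority they still
 * carry - effectively another preemption point. */
static void unwind_incomplete_requests(struct request **queue,
                                       struct request *inflight[],
                                       size_t count)
{
        size_t i;

        for (i = count; i-- > 0; ) {
                if (inflight[i] && !inflight[i]->completed)
                        queue_request(queue, inflight[i]);
        }
}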
Please add the same behavior to the GuC submission path.
With that, and an expanded commit message:
Reviewed-by: Michał Winiarski <michal.winiarski at intel.com>
-Michał
>
> Signed-off-by: Chris Wilson <chris at chris-wilson.co.uk>
> Cc: Michal Winiarski <michal.winiarski at intel.com>
> ---
> drivers/gpu/drm/i915/intel_lrc.c | 14 ++++++++++----
> 1 file changed, 10 insertions(+), 4 deletions(-)
>
> diff --git a/drivers/gpu/drm/i915/intel_lrc.c b/drivers/gpu/drm/i915/intel_lrc.c
> index e32109265eb9..7ac92a77aea8 100644
> --- a/drivers/gpu/drm/i915/intel_lrc.c
> +++ b/drivers/gpu/drm/i915/intel_lrc.c
> @@ -585,8 +585,6 @@ static void execlists_dequeue(struct intel_engine_cs *engine)
> }
>
> INIT_LIST_HEAD(&rq->priotree.link);
> - rq->priotree.priority = INT_MAX;
> -
> __i915_gem_request_submit(rq);
> trace_i915_gem_request_in(rq, port_index(port, execlists));
> last = rq;
> @@ -794,6 +792,7 @@ static void intel_lrc_irq_handler(unsigned long data)
> execlists_context_status_change(rq, INTEL_CONTEXT_SCHEDULE_OUT);
>
> trace_i915_gem_request_out(rq);
> + rq->priotree.priority = INT_MAX;
> i915_gem_request_put(rq);
>
> execlists_port_complete(execlists, port);
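This hunk is the heart of the change, so perhaps it is worth spelling out the
new lifetime in the commit message. A toy model of what happens at completion
(illustrative only, not the actual code):

#include <limits.h>
#include <stdbool.h>

struct request {
        int priority;
        bool completed;
};

/* The priority now survives submission and is only clobbered once the
 * request has actually completed on the GPU. INT_MAX then acts as a
 * sentinel: a completed request can never be outranked, so no later
 * priority bump will try to move it. */
static void complete_request(struct request *rq)
{
        rq->completed = true;
        rq->priority = INT_MAX;
}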
> @@ -846,11 +845,15 @@ static void execlists_submit_request(struct drm_i915_gem_request *request)
> spin_unlock_irqrestore(&engine->timeline->lock, flags);
> }
>
> +static struct drm_i915_gem_request *pt_to_request(struct i915_priotree *pt)
> +{
> + return container_of(pt, struct drm_i915_gem_request, priotree);
> +}
> +
> static struct intel_engine_cs *
> pt_lock_engine(struct i915_priotree *pt, struct intel_engine_cs *locked)
> {
> - struct intel_engine_cs *engine =
> - container_of(pt, struct drm_i915_gem_request, priotree)->engine;
> + struct intel_engine_cs *engine = pt_to_request(pt)->engine;
>
> GEM_BUG_ON(!locked);
>
> @@ -904,6 +907,9 @@ static void execlists_schedule(struct drm_i915_gem_request *request, int prio)
> * engines.
> */
> list_for_each_entry(p, &pt->signalers_list, signal_link) {
> + if (i915_gem_request_completed(pt_to_request(p->signaler)))
> + continue;
> +
> GEM_BUG_ON(p->signaler->priority < pt->priority);
> if (prio > READ_ONCE(p->signaler->priority))
> list_move_tail(&p->dfs_link, &dfs);
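I read this as: a completed signaler has already run, so bumping its priority
is pointless and we can prune it from the dependency walk early. A toy
predicate for the same idea (illustrative, not the actual code):

#include <stdbool.h>

struct request {
        int priority;
        bool completed;
};

/* Only a request that has not yet completed can still benefit from a
 * priority bump; everything else is pruned from the DFS. */
static bool needs_bump(const struct request *signaler, int prio)
{
        return !signaler->completed && prio > signaler->priority;
}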
> --
> 2.14.1
>