[Intel-gfx] [PATCH] drm/i915/execlists: Tweak virtual unsubmission

Tvrtko Ursulin tvrtko.ursulin at linux.intel.com
Mon Oct 14 09:50:25 UTC 2019


On 14/10/2019 10:41, Chris Wilson wrote:
> Quoting Tvrtko Ursulin (2019-10-14 10:34:31)
>>
>> On 13/10/2019 21:30, Chris Wilson wrote:
>>> Since commit e2144503bf3b ("drm/i915: Prevent bonded requests from
>>> overtaking each other on preemption") we have restricted requests to run
>>> on their chosen engine across preemption events. We can take this
>>> restriction into account to know that we will want to resubmit those
>>> requests onto the same physical engine, and so can shortcircuit the
>>> virtual engine selection process and keep the request on the same
>>> engine during unwind.
>>>
>>> References: e2144503bf3b ("drm/i915: Prevent bonded requests from overtaking each other on preemption")
>>> Signed-off-by: Chris Wilson <chris at chris-wilson.co.uk>
>>> Cc: Tvrtko Ursulin <tvrtko.ursulin at intel.com>
>>> ---
>>>    drivers/gpu/drm/i915/gt/intel_lrc.c | 6 +++---
>>>    drivers/gpu/drm/i915/i915_request.c | 2 +-
>>>    2 files changed, 4 insertions(+), 4 deletions(-)
>>>
>>> diff --git a/drivers/gpu/drm/i915/gt/intel_lrc.c b/drivers/gpu/drm/i915/gt/intel_lrc.c
>>> index e6bf633b48d5..03732e3f5ec7 100644
>>> --- a/drivers/gpu/drm/i915/gt/intel_lrc.c
>>> +++ b/drivers/gpu/drm/i915/gt/intel_lrc.c
>>> @@ -895,7 +895,6 @@ __unwind_incomplete_requests(struct intel_engine_cs *engine)
>>>        list_for_each_entry_safe_reverse(rq, rn,
>>>                                         &engine->active.requests,
>>>                                         sched.link) {
>>> -             struct intel_engine_cs *owner;
>>>    
>>>                if (i915_request_completed(rq))
>>>                        continue; /* XXX */
>>> @@ -910,8 +909,7 @@ __unwind_incomplete_requests(struct intel_engine_cs *engine)
>>>                 * engine so that it can be moved across onto another physical
>>>                 * engine as load dictates.
>>>                 */
>>> -             owner = rq->hw_context->engine;
>>> -             if (likely(owner == engine)) {
>>> +             if (likely(rq->execution_mask == engine->mask)) {
>>>                        GEM_BUG_ON(rq_prio(rq) == I915_PRIORITY_INVALID);
>>>                        if (rq_prio(rq) != prio) {
>>>                                prio = rq_prio(rq);
>>> @@ -922,6 +920,8 @@ __unwind_incomplete_requests(struct intel_engine_cs *engine)
>>>                        list_move(&rq->sched.link, pl);
>>>                        active = rq;
>>>                } else {
>>> +                     struct intel_engine_cs *owner = rq->hw_context->engine;
>>
>> I guess there is some benefit in doing fewer operations as long as we
>> are fixing the engine anyway (at the moment at least).
>>
>> However, on this branch the concern was request completion racing with
>> preemption handling; with this change the breadcrumb will no longer get
>> canceled and may get signaled on the virtual engine, which then leads to
>> the explosion this branch fixed. At least that's what I remember, in
>> combination with the comment below.
> 
> No, we don't change back to the virtual engine, so that is not an issue.
> The problem was only due to the rq->engine = owner assignment, where the
> breadcrumbs were still on the previous engine's lists and assumed to be
> under that engine->breadcrumbs.lock (but would in future be assumed to be
> under rq->engine->breadcrumbs.lock).
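
Right, so to make sure I am reading it correctly, the old hazard was
roughly the below. (Heavily simplified sketch with made-up structures and
names, not the real i915 code; it only illustrates why the signal node has
to be manipulated under the lock of the engine whose list it sits on.)

struct breadcrumbs {
	int lock;			/* stands in for a spinlock */
};

struct engine {
	struct breadcrumbs breadcrumbs;
};

struct request {
	struct engine *engine;		/* engine the request is queued on */
	struct engine *signaler;	/* engine whose signal list holds it */
};

/* Cancellation trusts rq->engine to pick the lock protecting the list. */
static void cancel_signaling(struct request *rq)
{
	/* spin_lock(&rq->engine->breadcrumbs.lock) in the real code */
	/* ... unlink rq from that engine's signalers list ... */
	/* spin_unlock(&rq->engine->breadcrumbs.lock) */
	(void)rq;
}

/*
 * With the old "rq->engine = owner" on unwind, rq->engine could point
 * back at the virtual engine while the signal node was still threaded
 * on the physical engine's list, so the cancel above would take the
 * wrong lock.  Keeping rq->engine fixed to the physical engine keeps
 * the two pointers below in agreement.
 */
static int lock_matches_list(const struct request *rq)
{
	return rq->engine == rq->signaler;
}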

Breadcrumb signaling can only ever be set up on the physical engine then?
Hm, it must be fine, since without preemption that would be exactly the
same scenario anyway. Okay, I see there is an r-b from Ram already, so no
need for another one from me.
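
And for the archive, as I read the diff the unwind decision now boils down
to the below (simplified sketch with stand-in types; the priority list
bookkeeping the real __unwind_incomplete_requests() does is elided):

struct engine {
	unsigned long mask;		/* this physical engine's bit */
};

struct request {
	unsigned long execution_mask;	/* engines this request may run on */
	struct engine *owner;		/* stands in for rq->hw_context->engine */
};

/* Pick the engine onto which an incomplete request is unwound. */
static struct engine *unwind_target(struct request *rq,
				    struct engine *engine)
{
	/*
	 * Since e2144503bf3b a request caught up in a preemption event is
	 * restricted to the engine it was running on, so this test
	 * identifies requests which cannot migrate anyway.
	 */
	if (rq->execution_mask == engine->mask)
		return engine;	/* keep it here, skip the virtual engine */

	/*
	 * Otherwise hand it back to its owning (virtual) engine so it can
	 * be moved across onto another physical engine as load dictates.
	 */
	return rq->owner;
}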

Regards,

Tvrtko