[Intel-gfx] [PATCH 3/3] drm/i915: Include the HW breadcrumb whenever we trace the global_seqno

Tvrtko Ursulin tvrtko.ursulin at linux.intel.com
Thu Mar 29 09:43:11 UTC 2018


On 29/03/2018 09:55, Chris Wilson wrote:
> Quoting Tvrtko Ursulin (2018-03-29 09:42:52)
>>
>> On 27/03/2018 22:01, Chris Wilson wrote:
>>> When we include a request's global_seqno in a GEM_TRACE it often helps
>>> to know how that relates to the current breadcrumb as seen by the
>>> hardware.
>>>
>>> Signed-off-by: Chris Wilson <chris at chris-wilson.co.uk>
>>> ---
>>>    drivers/gpu/drm/i915/i915_request.c | 28 +++++++++++++++++-----------
>>>    drivers/gpu/drm/i915/intel_lrc.c    |  6 ++++--
>>>    2 files changed, 21 insertions(+), 13 deletions(-)
>>>
>>> diff --git a/drivers/gpu/drm/i915/i915_request.c b/drivers/gpu/drm/i915/i915_request.c
>>> index 2314a26cd7f8..585242831974 100644
>>> --- a/drivers/gpu/drm/i915/i915_request.c
>>> +++ b/drivers/gpu/drm/i915/i915_request.c
>>> @@ -214,8 +214,11 @@ static int reset_all_global_seqno(struct drm_i915_private *i915, u32 seqno)
>>>                struct i915_gem_timeline *timeline;
>>>                struct intel_timeline *tl = engine->timeline;
>>>    
>>> -             GEM_TRACE("%s seqno %d -> %d\n",
>>> -                       engine->name, tl->seqno, seqno);
>>> +             GEM_TRACE("%s seqno %d (current %d) -> %d\n",
>>> +                       engine->name,
>>> +                       tl->seqno,
>>> +                       intel_engine_get_seqno(engine),
>>> +                       seqno);
>>>    
>>>                if (!i915_seqno_passed(seqno, tl->seqno)) {
>>>                        /* Flush any waiters before we reuse the seqno */
>>> @@ -386,10 +389,11 @@ static void i915_request_retire(struct i915_request *request)
>>>        struct intel_engine_cs *engine = request->engine;
>>>        struct i915_gem_active *active, *next;
>>>    
>>> -     GEM_TRACE("%s(%d) fence %llx:%d, global_seqno %d\n",
>>> -               engine->name, intel_engine_get_seqno(engine),
>>> +     GEM_TRACE("%s fence %llx:%d, global_seqno %d, current %d\n",
>>> +               engine->name,
>>>                  request->fence.context, request->fence.seqno,
>>> -               request->global_seqno);
>>> +               request->global_seqno,
>>> +               intel_engine_get_seqno(engine));
>>>    
>>>        lockdep_assert_held(&request->i915->drm.struct_mutex);
>>>        GEM_BUG_ON(!i915_sw_fence_signaled(&request->submit));
>>> @@ -508,10 +512,11 @@ void __i915_request_submit(struct i915_request *request)
>>>        struct intel_engine_cs *engine = request->engine;
>>>        u32 seqno;
>>>    
>>> -     GEM_TRACE("%s fence %llx:%d -> global_seqno %d\n",
>>> -               request->engine->name,
>>> +     GEM_TRACE("%s fence %llx:%d -> global_seqno %d, current %d\n",
>>> +               engine->name,
>>>                  request->fence.context, request->fence.seqno,
>>> -               engine->timeline->seqno + 1);
>>> +               engine->timeline->seqno + 1,
>>> +               intel_engine_get_seqno(engine));
>>>    
>>>        GEM_BUG_ON(!irqs_disabled());
>>>        lockdep_assert_held(&engine->timeline->lock);
>>> @@ -557,10 +562,11 @@ void __i915_request_unsubmit(struct i915_request *request)
>>>    {
>>>        struct intel_engine_cs *engine = request->engine;
>>>    
>>> -     GEM_TRACE("%s fence %llx:%d <- global_seqno %d\n",
>>> -               request->engine->name,
>>> +     GEM_TRACE("%s fence %llx:%d <- global_seqno %d, current %d\n",
>>> +               engine->name,
>>>                  request->fence.context, request->fence.seqno,
>>> -               request->global_seqno);
>>> +               request->global_seqno,
>>> +               intel_engine_get_seqno(engine));
>>>    
>>>        GEM_BUG_ON(!irqs_disabled());
>>>        lockdep_assert_held(&engine->timeline->lock);
>>> diff --git a/drivers/gpu/drm/i915/intel_lrc.c b/drivers/gpu/drm/i915/intel_lrc.c
>>> index ed2c833a8b20..b5235f52a81b 100644
>>> --- a/drivers/gpu/drm/i915/intel_lrc.c
>>> +++ b/drivers/gpu/drm/i915/intel_lrc.c
>>> @@ -454,10 +454,11 @@ static void execlists_submit_ports(struct intel_engine_cs *engine)
>>>                        desc = execlists_update_context(rq);
>>>                        GEM_DEBUG_EXEC(port[n].context_id = upper_32_bits(desc));
>>>    
>>> -                     GEM_TRACE("%s in[%d]:  ctx=%d.%d, seqno=%x, prio=%d\n",
>>> +                     GEM_TRACE("%s in[%d]:  ctx=%d.%d, seqno=%d (current %d), prio=%d\n",
>>>                                  engine->name, n,
>>>                                  port[n].context_id, count,
>>>                                  rq->global_seqno,
>>> +                               intel_engine_get_seqno(engine),
>>>                                  rq_prio(rq));
>>>                } else {
>>>                        GEM_BUG_ON(!n);
>>> @@ -999,10 +1000,11 @@ static void execlists_submission_tasklet(unsigned long data)
>>>                                                        EXECLISTS_ACTIVE_USER));
>>>    
>>>                        rq = port_unpack(port, &count);
>>> -                     GEM_TRACE("%s out[0]: ctx=%d.%d, seqno=%x, prio=%d\n",
>>> +                     GEM_TRACE("%s out[0]: ctx=%d.%d, seqno=%d (current %d), prio=%d\n",
>>>                                  engine->name,
>>>                                  port->context_id, count,
>>>                                  rq ? rq->global_seqno : 0,
>>> +                               intel_engine_get_seqno(engine),
>>>                                  rq ? rq_prio(rq) : 0);
>>>    
>>>                        /* Check the context/desc id for this event matches */
>>>
>>
>> The only thing I am not sure about is whether the HWS seqno is
>> interesting in the two above. But if you think it is, it doesn't do
>> much harm.
> 
> I was thinking of consistency: anywhere we had a global_seqno in the
> trace, we would also have the current breadcrumb.
>   
>> Also, in these two I had fence context:seqno instead, to correlate
>> easily with (un)submit & co. That way, when I select fence:context in
>> an editor, it nicely highlights the whole flow relating to this
>> request. Anyway:
> 
> Right, which I think was a very useful improvement.
>   
>> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin at intel.com>
> 
> Do you want to take this patch into yours or apply this and then fixup
> yours?

You can go with this one and I'll send a follow-up at some point.
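For anyone reading these traces later: the "current" value being added is
the breadcrumb the engine has written to its hardware status page, i.e.
the seqno of the last request the CS has completed, so you can see at a
glance how far the hardware has advanced relative to a request's
global_seqno. intel_engine_get_seqno() amounts to a plain status-page
read; roughly, along the lines of this era's intel_ringbuffer.h (a
sketch, not necessarily verbatim):

static inline u32
intel_read_status_page(struct intel_engine_cs *engine, int reg)
{
	/* Ensure that the compiler doesn't optimise away the load. */
	return READ_ONCE(engine->status_page.page_addr[reg]);
}

static inline u32 intel_engine_get_seqno(struct intel_engine_cs *engine)
{
	/* The breadcrumb the CS wrote after the last completed request. */
	return intel_read_status_page(engine, I915_GEM_HWS_INDEX);
}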

Regards,

Tvrtko


