[Intel-gfx] [PATCH 19/41] drm/i915/gt: Show scheduler queues when dumping state
Tvrtko Ursulin
tvrtko.ursulin at linux.intel.com
Wed Jan 27 14:50:19 UTC 2021
On 27/01/2021 14:35, Chris Wilson wrote:
> Quoting Tvrtko Ursulin (2021-01-27 14:13:11)
>>
>> On 25/01/2021 14:01, Chris Wilson wrote:
>>> Move the scheduler pretty printer out of the execlists state to
>>> match its more common location.
>>>
>>> Signed-off-by: Chris Wilson <chris at chris-wilson.co.uk>
>>> ---
>>> drivers/gpu/drm/i915/gt/intel_engine_cs.c | 34 +++++++++++++----------
>>> 1 file changed, 19 insertions(+), 15 deletions(-)
>>>
>>> diff --git a/drivers/gpu/drm/i915/gt/intel_engine_cs.c b/drivers/gpu/drm/i915/gt/intel_engine_cs.c
>>> index cdd07aeada05..2f9a8960144b 100644
>>> --- a/drivers/gpu/drm/i915/gt/intel_engine_cs.c
>>> +++ b/drivers/gpu/drm/i915/gt/intel_engine_cs.c
>>> @@ -1443,20 +1443,15 @@ static void intel_engine_print_registers(struct intel_engine_cs *engine,
>>>
>>> if (intel_engine_in_guc_submission_mode(engine)) {
>>> /* nothing to print yet */
>>> - } else if (HAS_EXECLISTS(dev_priv)) {
>>> - struct i915_request * const *port, *rq;
>>> const u32 *hws =
>>> &engine->status_page.addr[I915_HWS_CSB_BUF0_INDEX];
>>> const u8 num_entries = execlists->csb_size;
>>> unsigned int idx;
>>> u8 read, write;
>>>
>>> - drm_printf(m, "\tExeclist tasklet queued? %s (%s), preempt? %s, timeslice? %s\n",
>>> - yesno(test_bit(TASKLET_STATE_SCHED,
>>> - &engine->active.tasklet.state)),
>>> - enableddisabled(!atomic_read(&engine->active.tasklet.count)),
>>> - repr_timer(&engine->execlists.preempt),
>>> - repr_timer(&engine->execlists.timer));
>>> + drm_printf(m, "\tExeclists preempt? %s, timeslice? %s\n",
>>> + repr_timer(&execlists->preempt),
>>> + repr_timer(&execlists->timer));
>>>
>>> read = execlists->csb_head;
>>> write = READ_ONCE(*execlists->csb_write);
>>> @@ -1477,6 +1472,22 @@ static void intel_engine_print_registers(struct intel_engine_cs *engine,
>>> drm_printf(m, "\tExeclist CSB[%d]: 0x%08x, context: %d\n",
>>> idx, hws[idx * 2], hws[idx * 2 + 1]);
>>> }
>>> + } else if (INTEL_GEN(dev_priv) > 6) {
>>> + drm_printf(m, "\tPP_DIR_BASE: 0x%08x\n",
>>> + ENGINE_READ(engine, RING_PP_DIR_BASE));
>>> + drm_printf(m, "\tPP_DIR_BASE_READ: 0x%08x\n",
>>> + ENGINE_READ(engine, RING_PP_DIR_BASE_READ));
>>> + drm_printf(m, "\tPP_DIR_DCLV: 0x%08x\n",
>>> + ENGINE_READ(engine, RING_PP_DIR_DCLV));
>>> + }
>>> +
>>> + if (engine->active.tasklet.func) {
>>> + struct i915_request * const *port, *rq;
>>> +
>>> + drm_printf(m, "\tTasklet queued? %s (%s)\n",
>>> + yesno(test_bit(TASKLET_STATE_SCHED,
>>> + &engine->active.tasklet.state)),
>>> + enableddisabled(!atomic_read(&engine->active.tasklet.count)));
>>
>> Or have i915_sched_print_state() exported? Again it will depend on how
>> clean a split will be possible.
>
> Not quite; unfortunately this is not dumping generic state but the
> backend bookkeeping for the execlists/ring scheduler. Common for that
> pair, not so common with the GuC.
>
> I guess I oversold it.
Okay, I see it after a less superficial look. I guess it's okay. It's too
hard to get a perfect separation, so I'll focus on the scheduling changes.
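
Just to illustrate what I had in mind with exporting it, a rough sketch
only, reusing the tasklet dump from your hunk above; the name, signature
and placement are my assumption, not taken from the series:

    void i915_sched_print_state(struct intel_engine_cs *engine,
                                struct drm_printer *m)
    {
            /* Common tasklet bookkeeping, as dumped in the hunk above. */
            drm_printf(m, "\tTasklet queued? %s (%s)\n",
                       yesno(test_bit(TASKLET_STATE_SCHED,
                                      &engine->active.tasklet.state)),
                       enableddisabled(!atomic_read(&engine->active.tasklet.count)));
    }

intel_engine_print_registers() would then just call that instead of
open-coding the tasklet dump, but as you say it only pays off if the
split ends up clean enough.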
Regards,
Tvrtko