[PATCH 2/7] drm/xe: Add helper to capture context runtime

Lucas De Marchi lucas.demarchi at intel.com
Tue Apr 16 13:42:39 UTC 2024


On Tue, Apr 16, 2024 at 10:56:13AM +0530, Vivekanandan, Balasubramani wrote:
>On 15.04.2024 20:04, Lucas De Marchi wrote:
>> From: Umesh Nerlige Ramappa <umesh.nerlige.ramappa at intel.com>
>>
>> Add a helper to update the runtime of an exec_queue and accumulate it
>> in two places:
>>
>> 1. when the exec_queue is destroyed
>> 2. when the sched job is completed
>>
>> Signed-off-by: Umesh Nerlige Ramappa <umesh.nerlige.ramappa at intel.com>
>> Signed-off-by: Lucas De Marchi <lucas.demarchi at intel.com>
>> ---
>>  drivers/gpu/drm/xe/xe_device_types.h |  9 +++++++
>>  drivers/gpu/drm/xe/xe_exec_queue.c   | 37 ++++++++++++++++++++++++++++
>>  drivers/gpu/drm/xe/xe_exec_queue.h   |  1 +
>>  drivers/gpu/drm/xe/xe_sched_job.c    |  2 ++
>>  4 files changed, 49 insertions(+)
>>
>> diff --git a/drivers/gpu/drm/xe/xe_device_types.h b/drivers/gpu/drm/xe/xe_device_types.h
>> index 60ced5f90c2b..f6632b4d8399 100644
>> --- a/drivers/gpu/drm/xe/xe_device_types.h
>> +++ b/drivers/gpu/drm/xe/xe_device_types.h
>> @@ -553,6 +553,15 @@ struct xe_file {
>>  		struct mutex lock;
>>  	} exec_queue;
>>
>> +	/**
>> +	 * @runtime: hw engine class runtime in ticks for this drm client
>> +	 *
>> +	 * Only stats from xe_exec_queue->lrc[0] are accumulated. In the
>> +	 * multi-lrc case, since all jobs run in parallel on the engines, the
>> +	 * stats from lrc[0] alone are sufficient.
>> +	 */
>> +	u64 runtime[XE_ENGINE_CLASS_MAX];
>> +
>>  	/** @client: drm client */
>>  	struct xe_drm_client *client;
>>  };
>> diff --git a/drivers/gpu/drm/xe/xe_exec_queue.c b/drivers/gpu/drm/xe/xe_exec_queue.c
>> index 71bd52dfebcf..c752d292fd33 100644
>> --- a/drivers/gpu/drm/xe/xe_exec_queue.c
>> +++ b/drivers/gpu/drm/xe/xe_exec_queue.c
>> @@ -214,6 +214,8 @@ void xe_exec_queue_fini(struct xe_exec_queue *q)
>>  {
>>  	int i;
>>
>> +	xe_exec_queue_update_runtime(q);
>> +
>>  	for (i = 0; i < q->width; ++i)
>>  		xe_lrc_finish(q->lrc + i);
>>  	if (!(q->flags & EXEC_QUEUE_FLAG_PERMANENT) && (q->flags & EXEC_QUEUE_FLAG_VM || !q->vm))
>> @@ -769,6 +771,41 @@ bool xe_exec_queue_is_idle(struct xe_exec_queue *q)
>>  		q->lrc[0].fence_ctx.next_seqno - 1;
>>  }
>>
>> +/**
>> + * xe_exec_queue_update_runtime() - Update runtime for this exec queue from hw
>> + * @q: The exec queue
>> + *
>> + * Update the timestamp saved by HW for this exec queue and accumulate the
>> + * runtime calculated from the delta since the last update. In the multi-lrc
>> + * case, only the first LRC is considered.
>> + */
>> +void xe_exec_queue_update_runtime(struct xe_exec_queue *q)
>> +{
>> +	struct xe_file *xef;
>> +	struct xe_lrc *lrc;
>> +	u32 old_ts, new_ts;
>> +
>> +	/*
>> +	 * Jobs that are run during driver load may use an exec_queue, but are
>> +	 * not associated with a user xe file, so avoid accumulating busyness
>> +	 * for kernel specific work.
>> +	 */
>> +	if (!q->vm || !q->vm->xef)
>> +		return;
>> +
>> +	xef = q->vm->xef;
>> +	lrc = &q->lrc[0];
>> +
>> +	new_ts = xe_lrc_update_timestamp(lrc, &old_ts);
>> +
>> +	/*
>> +	 * Special case the very first timestamp: we don't want the
>> +	 * initial delta to be a huge value
>> +	 */
>> +	if (old_ts)
>> +		xef->runtime[q->class] += new_ts - old_ts;
>What is the need for accumulating the delta instead of using the
>absolute timestamp read from CTX_TIMESTAMP?
>This would break if xe_lrc_update_timestamp() is called from some
>additional places in future. The delta would be incorrect.

can you clarify the breakage?

- CTX_TIMESTAMP is per context (or exec_queue if you want to use the sw
   name)
- Reported runtime is per client.
- any update to xef->runtime[] should only ever be done through
   xe_exec_queue_update_runtime()

Anytime xe_exec_queue_update_runtime() is called, it reads the current
timestamp via xe_lrc_update_timestamp(), saves the new value in the lrc,
and adds the delta to the xef. The value in xef is the **runtime** of all
the exec_queues created by that client, per engine class.
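
To make that concrete, here is a minimal userspace model of the pattern.
The names mirror the patch, but the bodies are simplified sketches of
mine, not the actual driver code (the hw counter is faked with a global):

#include <stdint.h>
#include <stdio.h>

struct lrc { uint32_t ctx_timestamp; };  /* last value we sampled */
struct xef { uint64_t runtime; };        /* accumulated per client */

static uint32_t hw_counter;              /* stands in for CTX_TIMESTAMP */

/* Models xe_lrc_update_timestamp(): return the fresh hw value and hand
 * back the previously saved one so the caller can compute a delta. */
static uint32_t update_timestamp(struct lrc *lrc, uint32_t *old_ts)
{
	*old_ts = lrc->ctx_timestamp;
	lrc->ctx_timestamp = hw_counter;
	return lrc->ctx_timestamp;
}

/* Models xe_exec_queue_update_runtime() */
static void update_runtime(struct xef *xef, struct lrc *lrc)
{
	uint32_t old_ts;
	uint32_t new_ts = update_timestamp(lrc, &old_ts);

	if (old_ts)  /* drop the potentially huge first delta */
		xef->runtime += new_ts - old_ts;
}

int main(void)
{
	struct lrc lrc = { 0 };
	struct xef xef = { 0 };

	hw_counter = 100; update_runtime(&xef, &lrc); /* first: dropped */
	hw_counter = 250; update_runtime(&xef, &lrc); /* +150 */
	hw_counter = 250; update_runtime(&xef, &lrc); /* redundant: +0 */
	hw_counter = 400; update_runtime(&xef, &lrc); /* +150 */

	printf("%llu\n", (unsigned long long)xef.runtime); /* 300 */
	return 0;
}

Each call consumes exactly the ticks elapsed since the previous call, so
adding more call sites only changes *when* the deltas land in
xef->runtime, not their sum.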

Note that we already call it from multiple places with this patch
series:

1. when the exec_queue is destroyed
2. when the sched job is completed
3. when userspace queries the runtime

... so I don't think I understood what would break.
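
As an aside (my observation, not something raised above): accumulating
u32 deltas into a u64 also survives the hw counter wrapping, as long as
we sample at least once per wrap period, because unsigned subtraction is
modulo 2^32:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	uint32_t old_ts = 0xfffffff0u;  /* sampled just before the wrap */
	uint32_t new_ts = 0x00000010u;  /* sampled just after the wrap */
	uint64_t runtime = 0;

	/* modulo-2^32 subtraction still yields the right delta */
	runtime += new_ts - old_ts;

	printf("0x%llx\n", (unsigned long long)runtime); /* 0x20 */
	return 0;
}

Reporting the absolute CTX_TIMESTAMP value instead would silently lose a
full wrap period.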

Lucas De Marchi

