[PATCH] drm/xe: Add per-engine pagefault and reset counts

Matthew Brost matthew.brost at intel.com
Wed Feb 12 23:45:22 UTC 2025


On Wed, Feb 12, 2025 at 02:48:28PM -0600, Lucas De Marchi wrote:
> On Tue, Feb 11, 2025 at 09:56:02PM -0800, Matthew Brost wrote:
> > On Tue, Feb 11, 2025 at 11:37:12PM -0600, Lucas De Marchi wrote:
> > > On Tue, Feb 11, 2025 at 03:44:40PM -0500, Rodrigo Vivi wrote:
> > > > On Mon, Feb 10, 2025 at 07:36:57PM +0000, Jonathan Cavitt wrote:
> > > > > Add counters to all engines that count the number of pagefaults and
> > > > > engine resets that have been triggered on them.  Report these values
> > > > > during an engine reset.
> > > >
> > > > My fear is that later someone starts using this as some form of metric.
> > > >
> > > > Could we keep this behind a debug config?
> > > >
> > > > >
> > > > > Signed-off-by: Jonathan Cavitt <jonathan.cavitt at intel.com>
> > > > > CC: Tomasz Mistat <tomasz.mistat at intel.com>
> > > > > CC: Ayaz A Siddiqui <ayaz.siddiqui at intel.com>
> > > > > CC: Niranjana Vishwanathapura <niranjana.vishwanathapura at intel.com>
> > > > > ---
> > > > >  drivers/gpu/drm/xe/xe_gt_pagefault.c    | 6 ++++++
> > > > >  drivers/gpu/drm/xe/xe_guc_submit.c      | 9 +++++++--
> > > > >  drivers/gpu/drm/xe/xe_hw_engine.c       | 3 +++
> > > > >  drivers/gpu/drm/xe/xe_hw_engine_types.h | 4 ++++
> > > > >  4 files changed, 20 insertions(+), 2 deletions(-)
> > > > >
> > > > > diff --git a/drivers/gpu/drm/xe/xe_gt_pagefault.c b/drivers/gpu/drm/xe/xe_gt_pagefault.c
> > > > > index 46701ca11ce0..04e973b20019 100644
> > > > > --- a/drivers/gpu/drm/xe/xe_gt_pagefault.c
> > > > > +++ b/drivers/gpu/drm/xe/xe_gt_pagefault.c
> > > > > @@ -130,6 +130,7 @@ static int handle_vma_pagefault(struct xe_gt *gt, struct pagefault *pf,
> > > > >  {
> > > > >  	struct xe_vm *vm = xe_vma_vm(vma);
> > > > >  	struct xe_tile *tile = gt_to_tile(gt);
> > > > > +	struct xe_hw_engine *hwe = NULL;
> > > > >  	struct drm_exec exec;
> > > > >  	struct dma_fence *fence;
> > > > >  	ktime_t end = 0;
> > > > > @@ -140,6 +141,11 @@ static int handle_vma_pagefault(struct xe_gt *gt, struct pagefault *pf,
> > > > >  	xe_gt_stats_incr(gt, XE_GT_STATS_ID_VMA_PAGEFAULT_BYTES, xe_vma_size(vma));
> > > > >
> > > > >  	trace_xe_vma_pagefault(vma);
> > > > > +
> > > > > +	hwe = xe_gt_hw_engine(gt, pf->engine_class, pf->engine_instance, false);
> > > > > +	if (hwe)
> > > > > +		atomic_inc(&hwe->pagefault_count);
> > > > > +
> > > > >  	atomic = access_is_atomic(pf->access_type);
> > > > >
> > > > >  	/* Check if VMA is valid */
> > > > > diff --git a/drivers/gpu/drm/xe/xe_guc_submit.c b/drivers/gpu/drm/xe/xe_guc_submit.c
> > > > > index 913c74d6e2ae..6f5d74340319 100644
> > > > > --- a/drivers/gpu/drm/xe/xe_guc_submit.c
> > > > > +++ b/drivers/gpu/drm/xe/xe_guc_submit.c
> > > > > @@ -1972,6 +1972,7 @@ int xe_guc_exec_queue_reset_handler(struct xe_guc *guc, u32 *msg, u32 len)
> > > > >  {
> > > > >  	struct xe_gt *gt = guc_to_gt(guc);
> > > > >  	struct xe_exec_queue *q;
> > > > > +	struct xe_hw_engine *hwe;
> > > > >  	u32 guc_id;
> > > > >
> > > > >  	if (unlikely(len < 1))
> > > > > @@ -1983,8 +1984,12 @@ int xe_guc_exec_queue_reset_handler(struct xe_guc *guc, u32 *msg, u32 len)
> > > > >  	if (unlikely(!q))
> > > > >  		return -EPROTO;
> > > > >
> > > > > -	xe_gt_info(gt, "Engine reset: engine_class=%s, logical_mask: 0x%x, guc_id=%d",
> > > > > -		   xe_hw_engine_class_to_str(q->class), q->logical_mask, guc_id);
> > > > > +	hwe = q->hwe;
> > > > > +	atomic_inc(&hwe->reset_count);
> > > > > +
> > > > > +	xe_gt_info(gt, "Engine reset: engine_class=%s, logical_mask: 0x%x, guc_id=%d, pagefault_count=%u, reset_count=%u",
> > > 
> > > I don't think the message here was accurate about this being an engine
> > > reset, and this change is probably making it worse. +Matthew Brost
> > > 
> > 
> > I'm a bit confused about why we need this information in dmesg, but I'm
> > also not opposed to more information.
> 
> What I meant by the message not being accurate is that this is the
> handler for:
> 
> 	process_g2h_msg()
> 		XE_GUC_ACTION_CONTEXT_RESET_NOTIFICATION -> xe_guc_exec_queue_reset_handler
> 
> Do we get that notification on each exec queue? Wouldn't that show

We get a notification for the exec queue which has hung.

> multiple "engine reset" for a single reset?

No.

>

I also noticed the code as written is not accurate for virtual or
parallel queues: in those cases q->hwe is not guaranteed to be the
engine the queue was actually running on when it hung. So that is a
problem in itself. I don't have an immediate idea of how to resolve
this.
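
For illustration only, the closest generic thing I can think of would be
charging the reset to every engine in the queue's logical mask rather
than trusting q->hwe, but that over-counts for parallel / virtual
queues, so I wouldn't call it a fix. Rough, untested sketch (assuming
for_each_hw_engine() and hwe->logical_instance behave here the way they
do elsewhere in the driver):

	/*
	 * Sketch only, not tested: attribute the reset to every physical
	 * engine the queue could have been running on instead of trusting
	 * q->hwe. Over-counts for parallel / virtual queues, but never
	 * blames an engine the queue cannot touch.
	 */
	static void count_reset_candidates(struct xe_gt *gt,
					   struct xe_exec_queue *q)
	{
		struct xe_hw_engine *hwe;
		enum xe_hw_engine_id id;

		for_each_hw_engine(hwe, gt, id) {
			if (hwe->class == q->class &&
			    q->logical_mask & BIT(hwe->logical_instance))
				atomic_inc(&hwe->reset_count);
		}
	}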

I wanted to remove q->hwe at one point to avoid this confusion, but I
haven't gotten around to it.

Matt

> Lucas De Marchi
> 
> > 
> > > In any case, instead of polluting this message, what about printing the
> > > counters under the "stats" file in debugfs?
> > 
> > Agree. I already suggested this [1] but accidentally replied to the
> > wrong list / post.
> > 
> > The current stats interface is per GT, but IMO it is lightweight enough
> > that we can more or less copy / paste it for other scopes (e.g. device,
> > tile, engine), hooking into debugfs. The idea of this 'stats' interface
> > was for IGTs or quick profiling of workloads in a generic way. This
> > seems to fit this use case.
> > 
> > [1] https://patchwork.freedesktop.org/patch/636270/?series=144622&rev=1#comment_1162350
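
To make that concrete, roughly what I have in mind for an engine scope
would be something like the below, modeled loosely on the GT stats
printer. The function name is made up and the debugfs plumbing is
omitted, so take it as a sketch only:

	/*
	 * Sketch only: a per-engine stats printer that an engine (or
	 * engine-list) debugfs "stats" node could call, in the same
	 * spirit as the GT-level stats file.
	 */
	void xe_hw_engine_stats_print(struct xe_hw_engine *hwe,
				      struct drm_printer *p)
	{
		drm_printf(p, "%s: pagefault_count %u\n", hwe->name,
			   atomic_read(&hwe->pagefault_count));
		drm_printf(p, "%s: reset_count %u\n", hwe->name,
			   atomic_read(&hwe->reset_count));
	}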
> > 
> > Matt
> > 
> > > 
> > > Lucas De Marchi
> > > 
> > > 
> > > > > +		   xe_hw_engine_class_to_str(q->class), q->logical_mask, guc_id,
> > > > > +		   atomic_read(&hwe->pagefault_count), atomic_read(&hwe->reset_count));
> > > > >
> > > > >  	trace_xe_exec_queue_reset(q);
> > > > >
> > > > > diff --git a/drivers/gpu/drm/xe/xe_hw_engine.c b/drivers/gpu/drm/xe/xe_hw_engine.c
> > > > > index fc447751fe78..0be6c38fe2a4 100644
> > > > > --- a/drivers/gpu/drm/xe/xe_hw_engine.c
> > > > > +++ b/drivers/gpu/drm/xe/xe_hw_engine.c
> > > > > @@ -516,6 +516,9 @@ static void hw_engine_init_early(struct xe_gt *gt, struct xe_hw_engine *hwe,
> > > > >  	hwe->fence_irq = &gt->fence_irq[info->class];
> > > > >  	hwe->engine_id = id;
> > > > >
> > > > > +	atomic_set(&hwe->pagefault_count, 0);
> > > > > +	atomic_set(&hwe->reset_count, 0);
> > > > > +
> > > > >  	hwe->eclass = &gt->eclass[hwe->class];
> > > > >  	if (!hwe->eclass->sched_props.job_timeout_ms) {
> > > > >  		hwe->eclass->sched_props.job_timeout_ms = 5 * 1000;
> > > > > diff --git a/drivers/gpu/drm/xe/xe_hw_engine_types.h b/drivers/gpu/drm/xe/xe_hw_engine_types.h
> > > > > index e4191a7a2c31..635dc3da6523 100644
> > > > > --- a/drivers/gpu/drm/xe/xe_hw_engine_types.h
> > > > > +++ b/drivers/gpu/drm/xe/xe_hw_engine_types.h
> > > > > @@ -150,6 +150,10 @@ struct xe_hw_engine {
> > > > >  	struct xe_oa_unit *oa_unit;
> > > > >  	/** @hw_engine_group: the group of hw engines this one belongs to */
> > > > >  	struct xe_hw_engine_group *hw_engine_group;
> > > > > +	/** @pagefault_count: number of pagefaults associated with this engine */
> > > > > +	atomic_t pagefault_count;
> > > > > +	/** @reset_count: number of engine resets associated with this engine */
> > > > > +	atomic_t reset_count;
> > > > >  };
> > > > >
> > > > >  enum xe_hw_engine_snapshot_source_id {
> > > > > --
> > > > > 2.43.0
> > > > >

