[PATCH] drm/xe: Take ref to job and job's fence in xe_sched_job_arm

Matthew Brost matthew.brost at intel.com
Mon Sep 23 15:52:02 UTC 2024


On Mon, Sep 23, 2024 at 11:39:38AM +0100, Matthew Auld wrote:
> On 21/09/2024 02:56, Matthew Brost wrote:
> > Fixes two possible races:
> > 
> > - Submission to hardware signals job's fence before dma_fence_get at end
> >    of run_job
> > - TDR fires and signals fence + free job before run_job completes
> > 
> > Taking refs in xe_sched_job_arm to the job and the job's fence solves
> > these by ensuring all refs are collected before entering the DRM
> > scheduler. The refs are dropped in run_job and the DRM scheduler
> > respectively. This is safe as, once xe_sched_job_arm is called,
> > execution of the job through the DRM scheduler is guaranteed.
> > 
> > Closes: https://gitlab.freedesktop.org/drm/xe/kernel/-/issues/2811
> > Signed-off-by: Matthew Brost <matthew.brost at intel.com>
> > Fixes: dd08ebf6c352 ("drm/xe: Introduce a new DRM driver for Intel GPUs")
> > Cc: Matthew Auld <matthew.auld at intel.com>
> > Cc: <stable at vger.kernel.org> # v6.8+
> > ---
> >   drivers/gpu/drm/xe/xe_execlist.c        |  4 +++-
> >   drivers/gpu/drm/xe/xe_guc_submit.c      | 11 +++++++----
> >   drivers/gpu/drm/xe/xe_sched_job.c       |  5 ++---
> >   drivers/gpu/drm/xe/xe_sched_job_types.h |  1 -
> >   4 files changed, 12 insertions(+), 9 deletions(-)
> > 
> > diff --git a/drivers/gpu/drm/xe/xe_execlist.c b/drivers/gpu/drm/xe/xe_execlist.c
> > index f3b71fe7a96d..b70706c9caf2 100644
> > --- a/drivers/gpu/drm/xe/xe_execlist.c
> > +++ b/drivers/gpu/drm/xe/xe_execlist.c
> > @@ -309,11 +309,13 @@ execlist_run_job(struct drm_sched_job *drm_job)
> >   	struct xe_sched_job *job = to_xe_sched_job(drm_job);
> >   	struct xe_exec_queue *q = job->q;
> >   	struct xe_execlist_exec_queue *exl = job->q->execlist;
> > +	struct dma_fence *fence = job->fence;
> >   	q->ring_ops->emit_job(job);
> >   	xe_execlist_make_active(exl);
> > +	xe_sched_job_put(job);
> > -	return dma_fence_get(job->fence);
> > +	return fence;
> >   }
> >   static void execlist_job_free(struct drm_sched_job *drm_job)
> > diff --git a/drivers/gpu/drm/xe/xe_guc_submit.c b/drivers/gpu/drm/xe/xe_guc_submit.c
> > index fbbe6a487bbb..689279fdef80 100644
> > --- a/drivers/gpu/drm/xe/xe_guc_submit.c
> > +++ b/drivers/gpu/drm/xe/xe_guc_submit.c
> > @@ -766,6 +766,7 @@ guc_exec_queue_run_job(struct drm_sched_job *drm_job)
> >   	struct xe_guc *guc = exec_queue_to_guc(q);
> >   	struct xe_device *xe = guc_to_xe(guc);
> >   	bool lr = xe_exec_queue_is_lr(q);
> > +	struct dma_fence *fence = NULL;
> >   	xe_assert(xe, !(exec_queue_destroyed(q) || exec_queue_pending_disable(q)) ||
> >   		  exec_queue_banned(q) || exec_queue_suspended(q));
> > @@ -782,12 +783,14 @@ guc_exec_queue_run_job(struct drm_sched_job *drm_job)
> >   	if (lr) {
> >   		xe_sched_job_set_error(job, -EOPNOTSUPP);
> > -		return NULL;
> > -	} else if (test_and_set_bit(JOB_FLAG_SUBMIT, &job->fence->flags)) {
> > -		return job->fence;
> > +		dma_fence_put(job->fence);	/* Drop ref from xe_sched_job_arm */
> 
> Not too sure about this, is it really safe to drop the JOB_FLAG_SUBMIT
> dance? Seems like queue_run_job can be called more than once for a given
> job, according to the comment for run_job in drm sched, in which case this
> will maybe hit UAF.
> 

Ugh, you're right. run_job() can be called twice... I need to rethink
this a bit.
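
For context, a rough sketch (illustrative only, the function name is made
up; the pattern is the one the removed JOB_FLAG_SUBMIT check provided) of
how the flag keeps a second run_job() invocation safe:

	static struct dma_fence *run_job_ref_once(struct xe_sched_job *job)
	{
		/*
		 * Only the first submission takes the extra fence ref; a
		 * re-submission (e.g. after a GT reset) just returns the
		 * fence it already handed out, so the ref counts balance
		 * however many times run_job() is invoked.
		 */
		if (test_and_set_bit(JOB_FLAG_SUBMIT, &job->fence->flags))
			return job->fence;

		return dma_fence_get(job->fence);
	}

Dropping that check means a second invocation would consume references the
first one already took, which looks like the UAF you point at.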

> >   	} else {
> > -		return dma_fence_get(job->fence);
> > +		fence = job->fence;
> >   	}
> > +
> > +	xe_sched_job_put(job);	/* Pairs with get from xe_sched_job_arm */
> 
> Why do we need a ref on the job itself? free_job() looks to drop its own
> ref, are we saying that free_job() can really be run before run_job()? I
> assume really bad stuff will happen if the refcount reaches zero inside
> run_job() here? Is that impossible?
> 

This snippet in guc_exec_queue_timedout_job can run before run_job
completes:

	/*
	 * TDR has fired before free job worker. Common if exec queue
	 * immediately closed after last fence signaled.
	 */
	if (test_bit(DMA_FENCE_FLAG_SIGNALED_BIT, &job->fence->flags)) {
		guc_exec_queue_free_job(drm_job);

		return DRM_GPU_SCHED_STAT_NOMINAL;
	}

That is the source of the gitlab issue. Also, if we ever decide to use an
unordered work queue in the scheduler, we'd have a race there too. Taking
a ref like this seems to be the safest possible way to handle it.
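
To make the ordering concrete, here is a simplified sketch of the run_job
side (the function and emit_and_submit() are placeholders standing in for
the real execlist/GuC paths, not actual driver code):

	static struct dma_fence *run_job_sketch(struct drm_sched_job *drm_job)
	{
		struct xe_sched_job *job = to_xe_sched_job(drm_job);
		struct dma_fence *fence = job->fence;

		/*
		 * Submitting to hardware can signal the fence immediately,
		 * after which the TDR snippet above may call free_job() and
		 * drop what would otherwise be the last job reference.
		 */
		emit_and_submit(job);

		/*
		 * The ref taken in xe_sched_job_arm() keeps the job alive
		 * until here, whatever the TDR has done in the meantime.
		 */
		xe_sched_job_put(job);	/* Pairs with get in xe_sched_job_arm */

		return fence;
	}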

> > +
> > +	return fence;
> >   }
> >   static void guc_exec_queue_free_job(struct drm_sched_job *drm_job)
> > diff --git a/drivers/gpu/drm/xe/xe_sched_job.c b/drivers/gpu/drm/xe/xe_sched_job.c
> > index eeccc1c318ae..d0f4b908411f 100644
> > --- a/drivers/gpu/drm/xe/xe_sched_job.c
> > +++ b/drivers/gpu/drm/xe/xe_sched_job.c
> > @@ -280,16 +280,15 @@ void xe_sched_job_arm(struct xe_sched_job *job)
> >   		fence = &chain->base;
> >   	}
> > -	job->fence = fence;
> > +	xe_sched_job_get(job);			/* Pairs with put in run_job */
> > +	job->fence = dma_fence_get(fence);	/* Pairs with put in scheduler */
> 
> So roughly, run_job() is always run at least once if we get as far as the
> arm, even in the case where there is some kind of error? We no longer grab
> a ref in run_job(), so this should balance out, assuming it's run exactly
> once.
> 

It is always called at least once. Calling it twice seems to be the
problem. I think that can only happen in GT reset flows, so we might need
to take another ref to the fence / job there. I need to think this
through.
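
If another ref does turn out to be needed for the reset re-submission
case, one possible shape (purely a guess at this point, nothing in this
patch does it) would be to re-take the refs before the scheduler re-runs
the job:

	/*
	 * Hypothetical helper: called from the GT reset path before jobs
	 * are pushed back through run_job(), re-taking the refs that the
	 * first run_job() invocation already dropped.
	 */
	static void xe_sched_job_rearm_refs(struct xe_sched_job *job)
	{
		xe_sched_job_get(job);
		dma_fence_get(job->fence);
	}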

Matt

> >   	drm_sched_job_arm(&job->drm);
> >   }
> >   void xe_sched_job_push(struct xe_sched_job *job)
> >   {
> > -	xe_sched_job_get(job);
> >   	trace_xe_sched_job_exec(job);
> >   	drm_sched_entity_push_job(&job->drm);
> > -	xe_sched_job_put(job);
> >   }
> >   /**
> > diff --git a/drivers/gpu/drm/xe/xe_sched_job_types.h b/drivers/gpu/drm/xe/xe_sched_job_types.h
> > index 0d3f76fb05ce..8ed95e1a378f 100644
> > --- a/drivers/gpu/drm/xe/xe_sched_job_types.h
> > +++ b/drivers/gpu/drm/xe/xe_sched_job_types.h
> > @@ -40,7 +40,6 @@ struct xe_sched_job {
> >   	 * @fence: dma fence to indicate completion. 1 way relationship - job
> >   	 * can safely reference fence, fence cannot safely reference job.
> >   	 */
> > -#define JOB_FLAG_SUBMIT		DMA_FENCE_FLAG_USER_BITS
> >   	struct dma_fence *fence;
> >   	/** @user_fence: write back value when BB is complete */
> >   	struct {

