[PATCH 2/2] drm/xe: Don't free job in TDR

Matthew Brost matthew.brost at intel.com
Thu Oct 3 14:37:07 UTC 2024


On Thu, Oct 03, 2024 at 03:15:02PM +0100, Matthew Auld wrote:
> On 03/10/2024 15:05, Matthew Brost wrote:
> > On Thu, Oct 03, 2024 at 08:06:24AM +0100, Matthew Auld wrote:
> > > On 03/10/2024 01:16, Matthew Brost wrote:
> > > > Freeing job in TDR is not safe as TDR can pass the run_job thread
> > > > resulting in UAF. It is only safe for free job to naturally be called by
> > > > the scheduler. Rather free job in TDR, add to pending list.
> > > 
> > > s/Rather free/Rather than free/
> > > ?
> > > 
> > 
> > Yes, will fix.
> > 
> > > > 
> > > > Closes: https://gitlab.freedesktop.org/drm/xe/kernel/-/issues/2811
> > > > Cc: Matthew Auld <matthew.auld at intel.com>
> > > > Fixes: e275d61c5f3f ("drm/xe/guc: Handle timing out of signaled jobs gracefully")
> > > > Signed-off-by: Matthew Brost <matthew.brost at intel.com>
> > > 
> > > I think we still have the other issue with fence signalling in run_job.
> > > 
> > 
> > I think this is actually ok given free_job owns a ref to job->fence and
> > free_job now must run after run_job - that is why I didn't include this
> > change in this patch. But I also agree a better design would be to move
> > the dma_fence_get from run_job to arm - I will do that in a follow up.
> 
> Here I mean the race in run_job() itself, before we hand over the fence to
> the scheduler. i.e. do the dma_fence_get() before the submission part like
> in: https://patchwork.freedesktop.org/patch/615249/?series=138921&rev=1.
> 

Yes, we are talking about the same thing. I think this is safe as is because
in run_job we know at least one reference is still held by free_job, which
cannot run until after run_job completes.
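
To make the ordering argument concrete, roughly (a sketch only - names like
submit_to_guc() are illustrative, not the actual xe_guc_submit.c code):

static struct dma_fence *
guc_exec_queue_run_job(struct drm_sched_job *drm_job)
{
	struct xe_sched_job *job = to_xe_sched_job(drm_job);

	/*
	 * Submission may cause job->fence to signal on another CPU, but
	 * the job still holds its own reference to job->fence and
	 * free_job (which drops that reference) only runs after run_job
	 * returns, so the fence cannot be freed under us here.
	 */
	submit_to_guc(job);

	/* Reference handed over to the scheduler. */
	return dma_fence_get(job->fence);
}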

Your patch is similar to what I suggest, but I think the cleanest
implementation of this is to move the dma_fence_get from run_job to
xe_sched_job_arm, which I'd like to do in a follow-up.
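
Roughly what I have in mind for the follow-up (again just a sketch, not a
patch; submit_to_guc() is illustrative as above):

void xe_sched_job_arm(struct xe_sched_job *job)
{
	/* ... existing arm logic ... */

	/*
	 * Reference returned later from run_job; taking it here means
	 * run_job never touches the refcount after submission, closing
	 * any window against the fence signaling / being freed.
	 */
	dma_fence_get(job->fence);

	drm_sched_job_arm(&job->drm);
}

static struct dma_fence *
guc_exec_queue_run_job(struct drm_sched_job *drm_job)
{
	struct xe_sched_job *job = to_xe_sched_job(drm_job);

	submit_to_guc(job);

	/* Reference already taken in xe_sched_job_arm(). */
	return job->fence;
}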

Matt

> > 
> > Matt
> > 
> > > Reviewed-by: Matthew Auld <matthew.auld at intel.com>
> > > 
> > > > ---
> > > >    drivers/gpu/drm/xe/xe_guc_submit.c | 7 +++++--
> > > >    1 file changed, 5 insertions(+), 2 deletions(-)
> > > > 
> > > > diff --git a/drivers/gpu/drm/xe/xe_guc_submit.c b/drivers/gpu/drm/xe/xe_guc_submit.c
> > > > index 80062e1d3f66..9ecd1661c1b5 100644
> > > > --- a/drivers/gpu/drm/xe/xe_guc_submit.c
> > > > +++ b/drivers/gpu/drm/xe/xe_guc_submit.c
> > > > @@ -1106,10 +1106,13 @@ guc_exec_queue_timedout_job(struct drm_sched_job *drm_job)
> > > >    	/*
> > > >    	 * TDR has fired before free job worker. Common if exec queue
> > > > -	 * immediately closed after last fence signaled.
> > > > +	 * immediately closed after last fence signaled. Add back to pending
> > > > +	 * list so job can be freed and kick scheduler ensuring free job is not
> > > > +	 * lost.
> > > >    	 */
> > > >    	if (test_bit(DMA_FENCE_FLAG_SIGNALED_BIT, &job->fence->flags)) {
> > > > -		guc_exec_queue_free_job(drm_job);
> > > > +		xe_sched_add_pending_job(sched, job);
> > > > +		xe_sched_submission_start(sched);
> > > >    		return DRM_GPU_SCHED_STAT_NOMINAL;
> > > >    	}

