[PATCH] drm/xe: Unlink client during vm close
Matthew Brost
matthew.brost at intel.com
Fri Jul 19 07:08:09 UTC 2024
On Fri, Jul 19, 2024 at 06:52:15AM +0000, Matthew Brost wrote:
> On Thu, Jul 18, 2024 at 11:08:42PM -0600, Upadhyay, Tejas wrote:
> >
> >
> > > -----Original Message-----
> > > From: Brost, Matthew <matthew.brost at intel.com>
> > > Sent: Thursday, July 18, 2024 9:28 PM
> > > To: Upadhyay, Tejas <tejas.upadhyay at intel.com>
> > > Cc: intel-xe at lists.freedesktop.org
> > > Subject: Re: [PATCH] drm/xe: Unlink client during vm close
> > >
> > > On Thu, Jul 18, 2024 at 06:47:52PM +0530, Tejas Upadhyay wrote:
> > > > We have an async call which does not know whether the client has been
> > > > unlinked from the vm by the time it is accessed. Unlink the client early
> > > > during xe_vm_close() so that the async API does not touch closed client info.
> > > >
> > > > Also, the debug output related to a job timeout is not useful when it is
> > > > "no process" or the client is already unlinked.
> > > >
> > >
> > > For kernel exec queue timed-out jobs, the 'Timedout job' message will now
> > > not be displayed, which is not ideal.
> > >
> > > > Fixes: https://gitlab.freedesktop.org/drm/xe/kernel/-/issues/2273
> > >
> > > Where exactly is this access coming from?
> > > BUG: kernel NULL pointer dereference, address: 0000000000000058
> >
> > In guc_exec_queue_timedout_job(), accessing "q->vm->xef->drm" after the client has closed the fd is causing the crash. My thought was that we can't take a ref and keep the client alive until the job times out.
> >
>
> Taking a ref to q->vm->xef is exactly what Umesh's series [1] is
> doing. I believe this is the correct behavior and, based on your comment
> above, I also believe it will fix this issue. Please test with this
> series. Hopefully Umesh gets it in soon.
>
> [1] https://patchwork.freedesktop.org/series/135865/
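>
> As a rough illustration of that approach (a hypothetical sketch, not
> code lifted from the series; the xe_file_get()/xe_file_put() helpers
> and the q->xef field are assumptions about its shape):
>
> 	/* At exec queue creation: pin the client so xef outlives the fd. */
> 	q->xef = xe_file_get(vm->xef);
>
> 	/* In guc_exec_queue_timedout_job(): the xef can no longer vanish. */
> 	if (q->xef)
> 		task = get_pid_task(q->xef->drm->pid, PIDTYPE_PID);
>
> 	/* At exec queue destruction: drop the reference. */
> 	xe_file_put(q->xef);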
>
> > >
> > > Also, btw, the correct tag for a gitlab link is 'Closes'; 'Fixes' points at
> > > the offending kernel patch so the fix can be pulled into stable kernels.
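> > >
> > > For example (the sha below is a placeholder; the offending commit
> > > isn't identified in this thread):
> > >
> > > 	Fixes: 123456789abc ("drm/xe: <subject of the offending patch>")
> > > 	Closes: https://gitlab.freedesktop.org/drm/xe/kernel/-/issues/2273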
> >
> > Ok
> >
> > >
> > > > Signed-off-by: Tejas Upadhyay <tejas.upadhyay at intel.com>
> > > > ---
> > > > drivers/gpu/drm/xe/xe_guc_submit.c | 7 ++++---
> > > > drivers/gpu/drm/xe/xe_vm.c | 1 +
> > > > 2 files changed, 5 insertions(+), 3 deletions(-)
> > > >
> > > > diff --git a/drivers/gpu/drm/xe/xe_guc_submit.c b/drivers/gpu/drm/xe/xe_guc_submit.c
> > > > index 860405527115..1de141cb84c6 100644
> > > > --- a/drivers/gpu/drm/xe/xe_guc_submit.c
> > > > +++ b/drivers/gpu/drm/xe/xe_guc_submit.c
> > > > @@ -1166,10 +1166,11 @@ guc_exec_queue_timedout_job(struct drm_sched_job *drm_job)
> > > >  			process_name = task->comm;
> > > >  			pid = task->pid;
> > > >  		}
> > > > +		xe_gt_notice(guc_to_gt(guc), "Timedout job: seqno=%u, lrc_seqno=%u, guc_id=%d, flags=0x%lx in %s [%d]",
> > > > +			     xe_sched_job_seqno(job), xe_sched_job_lrc_seqno(job),
> > > > +			     q->guc->id, q->flags, process_name, pid);
> > > >  	}
> > > > -	xe_gt_notice(guc_to_gt(guc), "Timedout job: seqno=%u, lrc_seqno=%u, guc_id=%d, flags=0x%lx in %s [%d]",
> > > > -		     xe_sched_job_seqno(job), xe_sched_job_lrc_seqno(job),
> > > > -		     q->guc->id, q->flags, process_name, pid);
> > > > +
> > > >  	if (task)
> > > >  		put_task_struct(task);
> > > > 
> > > > diff --git a/drivers/gpu/drm/xe/xe_vm.c b/drivers/gpu/drm/xe/xe_vm.c
> > > > index cf3aea5d8cdc..660b20e0e207 100644
> > > > --- a/drivers/gpu/drm/xe/xe_vm.c
> > > > +++ b/drivers/gpu/drm/xe/xe_vm.c
> > > > @@ -1537,6 +1537,7 @@ static void xe_vm_close(struct xe_vm *vm)
> > > >  {
> > > >  	down_write(&vm->lock);
> > > >  	vm->size = 0;
> > > > +	vm->xef = NULL;
> > >
> > > This doesn't appear to be thread safe.
> >
> > Would you please elaborate?
> >
>
> Sure.
>
> vm->xef is set to NULL under vm->lock in write mode while
> guc_exec_queue_timedout_job doesn't hold the lock, so the two can race.
> If you wanted to be thread safe, the latter would at least need vm->lock
> in read mode.
>
Let me be a little more clear here than my first reply.
Here is the code you have in xe_vm_close when a file is closing; I'll call this 'Thread A':
down_write(&vm->lock);
...
vm->xef = NULL;
...
up_write(&vm->lock);
...
some time later the vm->xef memory is freed
Now look at the code in guc_exec_queue_timedout_job; I'll call this 'Thread B':
1163 if (q->vm && q->vm->xef) {
1164 task = get_pid_task(q->vm->xef->drm->pid, PIDTYPE_PID);
1165 if (task) {
1166 process_name = task->comm;
1167 pid = task->pid;
1168 }
1169 }
- 'Thread B' is executing and the if statement on line 1163 evaluates
  true; before line 1164 executes, the thread is interrupted.
- While interrupted, 'Thread A' executes: vm->xef is set to NULL and
  the xef is freed.
- When 'Thread B' resumes execution, q->vm->xef is dereferenced and BOOM,
  NULL pointer dereference.
Thus the only way to make lines 1163-1164 safe would be to hold vm->lock
in at least read mode. This would prevent 'Thread A' from executing while
'Thread B' is interrupted before line 1164 executes.
Hope this makes sense.
Matt
> Anyways this patch is likely not needed based on my feedback above.
>
> Matt
>
> > Thanks,
> > Tejas
> > >
> > > Matt
> > >
> > > >  	up_write(&vm->lock);
> > > >  }
> > > >
> > > > --
> > > > 2.25.1
> > > >