[PATCH 4/4] drm/xe: destroy userptr vma on UNMAP event

Zeng, Oak oak.zeng at intel.com
Thu Mar 21 14:56:15 UTC 2024


Hi Matt,

> -----Original Message-----
> From: Brost, Matthew <matthew.brost at intel.com>
> Sent: Wednesday, March 20, 2024 11:22 PM
> To: Zeng, Oak <oak.zeng at intel.com>
> Cc: intel-xe at lists.freedesktop.org; Hellstrom, Thomas
> <thomas.hellstrom at intel.com>; Welty, Brian <brian.welty at intel.com>; Ghimiray,
> Himal Prasad <himal.prasad.ghimiray at intel.com>; De Marchi, Lucas
> <lucas.demarchi at intel.com>
> Subject: Re: [PATCH 4/4] drm/xe: destroy userptr vma on UNMAP event
> 
> On Wed, Mar 20, 2024 at 10:29:39PM -0400, Oak Zeng wrote:
> > When an MMU_NOTIFY_UNMAP event happens, the userptr
> > has been munmapped from the CPU. There is no need to
> > keep the xe_vma for this userptr on the GPU side, so
> > we destroy it.
> >
> > But we can't destroy the vma directly from the mmu
> > notifier callback function, because we need to remove
> > the mmu notifier during vma destroy. Removing the mmu
> > notifier directly from the mmu notifier callback would
> > deadlock. xe_vma_destroy is modified to destroy the
> > vma in a worker thread instead.
> >
> > Another reason for this change is that the future
> > hmmptr code destroys the vma when the hmmptr is
> > unmapped from the CPU. We want to unify the hmmptr
> > and userptr code.
> >
> > I believe this is also the correct behavior for userptr.
> 
> It is not; the user is still responsible for unmapping.

Do you see a problem with the scheme I described? In that scheme, if the user calls vm_unbind after the userptr is freed/munmapped, we can return the user an error: "userptr is already freed". Is this a problem for you?

My thinking is that the xe_vma state for a userptr depends on a valid userptr. When the user frees/munmaps the userptr, the userptr is gone, so the xe_vma should also be gone. Isn't this a generally valid design principle? Xekmd itself seems to apply this principle a lot. For example, when we destroy a vm, all the exec queues of that vm are destroyed as well, without requiring the user to explicitly destroy the queues.


> Even if this was
> the correct behavior we'd have to unmap from the GPU and update internal
> PT state before calling xe_vma_destroy.

In vma_userptr_invalidate, we call xe_vm_invalidate_vma to zap the PT state, but currently that is done only in fault mode. Under my scheme, it would need to be done in both fault and non-fault mode on an MMU_NOTIFY_UNMAP event.


> 
> The correct behavior in this case is to just let the userptr page pin fail
> and invalidate the GPU mappings. This recently got merged in [1].

This also works, but the design is not as clean as my scheme, IMO.

For hmmptr, there is no vm_bind (except the initial gigantic bind, which doesn't count). It is reasonable to destroy the vma when the user frees/munmaps; otherwise we leave behind a lot of zombie vmas. So at least for hmmptr we should apply my scheme. Whether we should unify the scheme across userptr and hmmptr is, I think, still debatable.

Oak

> 
> Have a pending test for this in [2].
> 
> Matt
> 
> [1] https://patchwork.freedesktop.org/series/130935/
> [2] https://patchwork.freedesktop.org/series/131211/
> 
> > This patch is experimental for CI and open to discuss
> >
> > Signed-off-by: Oak Zeng <oak.zeng at intel.com>
> > ---
> >  drivers/gpu/drm/xe/xe_vm.c | 8 +++++++-
> >  1 file changed, 7 insertions(+), 1 deletion(-)
> >
> > diff --git a/drivers/gpu/drm/xe/xe_vm.c b/drivers/gpu/drm/xe/xe_vm.c
> > index 11a4bb9d5415..90d1163c1090 100644
> > --- a/drivers/gpu/drm/xe/xe_vm.c
> > +++ b/drivers/gpu/drm/xe/xe_vm.c
> > @@ -40,6 +40,8 @@
> >  #include "xe_wa.h"
> >  #include "xe_hmm.h"
> >
> > +static void xe_vma_destroy(struct xe_vma *vma, struct dma_fence *fence);
> > +
> >  static struct drm_gem_object *xe_vm_obj(struct xe_vm *vm)
> >  {
> >  	return vm->gpuvm.r_obj;
> > @@ -604,6 +606,9 @@ static bool vma_userptr_invalidate(struct mmu_interval_notifier *mni,
> >
> >  	trace_xe_vma_userptr_invalidate_complete(vma);
> >
> > +	if (range->event == MMU_NOTIFY_UNMAP)
> > +		xe_vma_destroy(vma, NULL);
> > +
> >  	return true;
> >  }
> >
> > @@ -901,7 +906,8 @@ static void xe_vma_destroy(struct xe_vma *vma, struct dma_fence *fence)
> >  			xe_vma_destroy_late(vma);
> >  		}
> >  	} else {
> > -		xe_vma_destroy_late(vma);
> > +		INIT_WORK(&vma->destroy_work, vma_destroy_work_func);
> > +		queue_work(system_unbound_wq, &vma->destroy_work);
> >  	}
> >  }
> >
> > --
> > 2.26.3
> >
