[PATCH 4/4] drm/xe: destroy userptr vma on UNMAP event

Zeng, Oak oak.zeng at intel.com
Thu Mar 21 20:00:20 UTC 2024


Hi Matt, 

Per your and Thomas's comments, we most likely want to keep the current behavior, so I will drop this patch.

I might still need this behavior for the system allocator, but that will be deferred to the system allocator patches.

See some comments inline, just for discussion purposes.

> -----Original Message-----
> From: Brost, Matthew <matthew.brost at intel.com>
> Sent: Thursday, March 21, 2024 1:02 PM
> To: Zeng, Oak <oak.zeng at intel.com>
> Cc: intel-xe at lists.freedesktop.org; Hellstrom, Thomas
> <thomas.hellstrom at intel.com>; Welty, Brian <brian.welty at intel.com>; Ghimiray,
> Himal Prasad <himal.prasad.ghimiray at intel.com>; De Marchi, Lucas
> <lucas.demarchi at intel.com>
> Subject: Re: [PATCH 4/4] drm/xe: destroy userptr vma on UNMAP event
> 
> On Thu, Mar 21, 2024 at 08:56:15AM -0600, Zeng, Oak wrote:
> > Hi Matt,
> >
> > > -----Original Message-----
> > > From: Brost, Matthew <matthew.brost at intel.com>
> > > Sent: Wednesday, March 20, 2024 11:22 PM
> > > To: Zeng, Oak <oak.zeng at intel.com>
> > > Cc: intel-xe at lists.freedesktop.org; Hellstrom, Thomas
> > > <thomas.hellstrom at intel.com>; Welty, Brian <brian.welty at intel.com>;
> Ghimiray,
> > > Himal Prasad <himal.prasad.ghimiray at intel.com>; De Marchi, Lucas
> > > <lucas.demarchi at intel.com>
> > > Subject: Re: [PATCH 4/4] drm/xe: destroy userptr vma on UNMAP event
> > >
> > > On Wed, Mar 20, 2024 at 10:29:39PM -0400, Oak Zeng wrote:
> > > > When an MMU_NOTIFY_UNMAP event happens, the userptr
> > > > has been munmapped from the CPU. There is no need to keep the
> > > > xe_vma for this userptr on the GPU side, so we destroy it.
> > > >
> > > > But we can't destroy the vma directly from the mmu notifier
> > > > callback function, because we need to remove the mmu
> > > > notifier during vma destroy, and removing the mmu notifier
> > > > directly from the mmu notifier callback would deadlock.
> > > > xe_vma_destroy is therefore modified to destroy the vma in a
> > > > worker thread.
> > > >
> > > > Another reason for this change is that, for the future
> > > > hmmptr code, we destroy the vma when the hmmptr is unmapped
> > > > from the CPU. We want to unify the hmmptr and userptr
> > > > code.
> > > >
> > > > I believe this is also the correct behavior for userptr.
> > >
> > > It is not, the user is still responsible for unmapping.
> >
> > Do you see a problem with the scheme I described? In that scheme, if the user
> > calls vm_unbind after the userptr is freed/munmapped, we can return the user an error
> > "userptr is already freed". Is that a problem for you?
> >
> 
> We did this until [1] was merged - the next userptr pin pages call would return
> -EFAULT.

That is a little different from my scheme. In my scheme, the user-freed userptr is destroyed completely. It won't be on any repin list, and it won't be repinned anymore.


> Turns out the compute apps call free / munmap on userptrs with
> bindings all the time for performance reasons. Thus with [1] we just
> invalidate the userptr if we can't pin the pages.

I understand the performance consideration, but my scheme doesn't have that performance problem either: it allows the user to free without vm_unbind, with the same performance characteristics.

Thomas mentioned that the user can free the userptr, then malloc the same address again and use it for a GPU workload. I wrote an email to UMD to confirm this behavior. If that is the case, then my scheme doesn't work.


> 
> [1] https://patchwork.freedesktop.org/series/130935/
> 
> > My thinking is that the xe_vma state for a userptr depends on a valid userptr. When
> > the user frees/munmaps the userptr, the userptr is gone, so the xe_vma should also be
> > gone. Isn't this a generally valid design principle? Xe KMD itself seems to apply this
> > principle a lot. For example, when we destroy a vm, all the exec queues of this vm
> > are destroyed, without requiring the user to explicitly destroy the queues.
> >
> 
> Not a wrong way of thinking about it, but see above.
> 
> >
> > Even if this was
> > > the correct behavior we'd have to unmap from the GPU and update internal
> > > PT state before calling xe_vma_destroy.
> >
> > In vma_userptr_invalidate, we call xe_vm_invalidate_vma to zap the PT state,
> > but currently it is done only for fault mode. If we use my scheme, this should be
> > done for both fault and non-fault mode during an MMU_NOTIFY_UNMAP event.
> >
> 
> I'm unsure where that is done in this patch - xe_vma_destroy frees the KMD
> state / memory for the xe_vma. It does not remove / update the GPUVM
> state or the PT state, nor interact with the GPU to update the page tables.

I wasn't clear. It is not in this patch yet. I meant this behavior:

Regardless of whether it is fault or non-fault mode, if the user frees/munmaps a userptr (i.e., an MMU_NOTIFY_UNMAP event in the mmu notifier callback), we should always:

1) update the GPUVM state / zap the PTEs
2) destroy the xe_vma for the userptr, and make sure this xe_vma won't be on any repin list (a sketch follows below).

Again, this scheme depends on the UMD behavior, as noted above.
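
For discussion, here is a minimal sketch of what 1) and 2) above could look like inside the userptr notifier callback. xe_vm_invalidate_vma is the existing zap helper discussed in this thread; vma_from_notifier() and queue_vma_for_destroy() are placeholder names for the lookup and deferred-destroy plumbing this scheme would still need to add:

static bool userptr_unmap_sketch(struct mmu_interval_notifier *mni,
				 const struct mmu_notifier_range *range,
				 unsigned long cur_seq)
{
	/* vma_from_notifier() is a placeholder for the real lookup */
	struct xe_vma *vma = vma_from_notifier(mni);

	mmu_interval_set_seq(mni, cur_seq);

	if (range->event == MMU_NOTIFY_UNMAP) {
		/* 1) zap the GPU PTEs so the GPU can no longer use the range */
		xe_vm_invalidate_vma(vma);

		/*
		 * 2) defer the destroy to a worker: removing the mmu
		 * notifier from its own callback would deadlock, and the
		 * worker must also take the vma off any repin list.
		 */
		queue_vma_for_destroy(vma);	/* placeholder */
	}

	return true;
}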

> 
> We only call xe_vm_invalidate_vma in the notifier for fault mode because in the
> other modes (dma-fence / preempt fence) it is called somewhere else (the exec
> IOCTL / preempt rebind worker respectively). The dma-resv slots in these
> modes ensure that after vma_userptr_invalidate returns the GPU will not
> be touching the userptr. GPU execution is resumed in the exec IOCTL or
> preempt rebind worker once the userptr is either rebound or invalidated.
> 
> Make sense?

For non-fault mode, what we have today (with your merged patch) works for me; let's keep the current behavior. Note that my scheme would also work: on MMU_NOTIFY_UNMAP, we can do 1) and 2) above without preempting the GPU workload - if the GPU touches this userptr later, it is a user program error and it is fatal. I agree we should keep the current behavior.
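
For completeness, this is roughly how I read the behavior we are keeping from [1] (pin_userptr_pages() below is a placeholder, not the actual xe function name):

static int userptr_repin_sketch(struct xe_vma *vma)
{
	int err = pin_userptr_pages(vma);	/* placeholder for the real pin path */

	if (err == -EFAULT) {
		/* CPU mapping is gone; keep the vma but drop GPU access */
		xe_vm_invalidate_vma(vma);
		return 0;
	}

	return err;
}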

> 
> >
> > >
> > > The correct behavior in this case is to just let the userptr page pin fail
> > > and invalidate the GPU mappings. This recently got merged in [1].
> >
> > This also works, but the design is not as clean as my scheme IMO.
> >
> 
> The UMDs have requested the existing behavior.

Sure, let's keep it.

> 
> > For hmmptr, there is no vm_bind (except the initial gigantic bind, which doesn't
> > count). It is reasonable to destroy the vma when the user frees/munmaps, otherwise we
> > leave behind a lot of zombie vmas. So at least for hmmptr we should apply my
> > scheme. Whether we should unify the scheme for both userptr and hmmptr is, I
> > think, still debatable.
> >
> 
> In the HMMPTR case, the concept is correct. Hence I implemented a garbage
> collector in my PoC [2]. The implementation here, however, is not. See my
> comments above about what xe_vma_destroy does. Rather than calling that
> function, the VMA should be added to a list which a worker picks up and
> properly modifies all state (GPUVM, internal PT state, and GPU page
> tables) needed to destroy the VMA. The garbage collector can be thought
> of as a mini-IOCTL that only implements DRM_XE_VM_BIND_OP_UNMAP for the
> VMA. Hence in my PoC it calls the same functions the bind IOCTL calls,
> albeit with slightly different arguments.
> 
> Again, does this make sense?

Yes, that makes sense.
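
Just to confirm I follow, something along these lines - the struct, list and helper names below are made up for illustration, with the worker going through the same path as a DRM_XE_VM_BIND_OP_UNMAP:

struct vma_gc {
	spinlock_t lock;		/* assumed initialised at vm creation */
	struct list_head list;
	struct work_struct work;	/* assumed INIT_WORK'ed to vma_gc_work */
};

/* notifier side: only queue the vma, never touch GPUVM / PT state here */
static void vma_gc_queue(struct vma_gc *gc, struct xe_vma *vma)
{
	spin_lock(&gc->lock);
	list_add_tail(&vma->gc_link, &gc->list);	/* gc_link is a made-up field */
	spin_unlock(&gc->lock);
	queue_work(system_unbound_wq, &gc->work);
}

/* worker side: the "mini-IOCTL" that really unmaps and destroys */
static void vma_gc_work(struct work_struct *w)
{
	struct vma_gc *gc = container_of(w, struct vma_gc, work);
	struct xe_vma *vma, *next;
	LIST_HEAD(destroy);

	spin_lock(&gc->lock);
	list_splice_init(&gc->list, &destroy);
	spin_unlock(&gc->lock);

	list_for_each_entry_safe(vma, next, &destroy, gc_link)
		vma_unmap_and_destroy(vma);	/* made up: GPUVM + PT + page tables, then free */
}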

Oak

> 
> Matt
> 
> [2] https://gitlab.freedesktop.org/mbrost/xe-kernel-driver-system-allocator/-
> /commit/91cf8b38428ae9180c92435d3519193d074e837a
> 
> > Oak
> >
> > >
> > > Have a pending test for this in [2].
> > >
> > > Matt
> > >
> > > [1] https://patchwork.freedesktop.org/series/130935/
> > > [2] https://patchwork.freedesktop.org/series/131211/
> > >
> > > > This patch is experimental for CI and is open for discussion.
> > > >
> > > > Signed-off-by: Oak Zeng <oak.zeng at intel.com>
> > > > ---
> > > >  drivers/gpu/drm/xe/xe_vm.c | 8 +++++++-
> > > >  1 file changed, 7 insertions(+), 1 deletion(-)
> > > >
> > > > diff --git a/drivers/gpu/drm/xe/xe_vm.c b/drivers/gpu/drm/xe/xe_vm.c
> > > > index 11a4bb9d5415..90d1163c1090 100644
> > > > --- a/drivers/gpu/drm/xe/xe_vm.c
> > > > +++ b/drivers/gpu/drm/xe/xe_vm.c
> > > > @@ -40,6 +40,8 @@
> > > >  #include "xe_wa.h"
> > > >  #include "xe_hmm.h"
> > > >
> > > > +static void xe_vma_destroy(struct xe_vma *vma, struct dma_fence
> *fence);
> > > > +
> > > >  static struct drm_gem_object *xe_vm_obj(struct xe_vm *vm)
> > > >  {
> > > >  	return vm->gpuvm.r_obj;
> > > > @@ -604,6 +606,9 @@ static bool vma_userptr_invalidate(struct
> > > mmu_interval_notifier *mni,
> > > >
> > > >  	trace_xe_vma_userptr_invalidate_complete(vma);
> > > >
> > > > +	if (range->event == MMU_NOTIFY_UNMAP)
> > > > +		xe_vma_destroy(vma, NULL);
> > > > +
> > > >  	return true;
> > > >  }
> > > >
> > > > @@ -901,7 +906,8 @@ static void xe_vma_destroy(struct xe_vma *vma,
> struct
> > > dma_fence *fence)
> > > >  			xe_vma_destroy_late(vma);
> > > >  		}
> > > >  	} else {
> > > > -		xe_vma_destroy_late(vma);
> > > > +		INIT_WORK(&vma->destroy_work, vma_destroy_work_func);
> > > > +		queue_work(system_unbound_wq, &vma->destroy_work);
> > > >  	}
> > > >  }
> > > >
> > > > --
> > > > 2.26.3
> > > >

