[PATCH 2/3] drm/xe: Clear scratch page before vm_bind

Matthew Brost matthew.brost at intel.com
Thu Jan 30 01:36:35 UTC 2025


On Wed, Jan 29, 2025 at 02:01:38PM -0700, Zeng, Oak wrote:
> 
> 
> > -----Original Message-----
> > From: Brost, Matthew <matthew.brost at intel.com>
> > Sent: January 28, 2025 6:19 PM
> > To: Zeng, Oak <oak.zeng at intel.com>
> > Cc: intel-xe at lists.freedesktop.org; joonas.lahtinen at linux.intel.com;
> > Thomas.Hellstrom at linux.intel.com
> > Subject: Re: [PATCH 2/3] drm/xe: Clear scratch page before vm_bind
> > 
> > On Tue, Jan 28, 2025 at 05:21:44PM -0500, Oak Zeng wrote:
> > > When a vm runs under fault mode, if scratch page is enabled, we need
> > > to clear the scratch page mapping before vm_bind for the vm_bind
> > > address range. Under fault mode, we depend on recoverable page fault
> > > to establish the mapping in the page table. If the scratch page is
> > > not cleared, GPU access of the address won't cause a page fault
> > > because it always hits the existing scratch page mapping.
> > >
> > > When vm_bind is called with the IMMEDIATE flag, there is no need for
> > > clearing, as an immediate bind can overwrite the scratch page mapping.
> > >
> > > So far only xe2 and xe3 products are allowed to enable scratch page
> > > under fault mode. On other platforms we don't allow scratch page
> > > under fault mode, so there is no need for such clearing.
> > >
> > > Signed-off-by: Oak Zeng <oak.zeng at intel.com>
> > > ---
> > >  drivers/gpu/drm/xe/xe_vm.c | 32 ++++++++++++++++++++++++++++++++
> > >  1 file changed, 32 insertions(+)
> > >
> > > diff --git a/drivers/gpu/drm/xe/xe_vm.c b/drivers/gpu/drm/xe/xe_vm.c
> > > index 690330352d4c..196d347c6ac0 100644
> > > --- a/drivers/gpu/drm/xe/xe_vm.c
> > > +++ b/drivers/gpu/drm/xe/xe_vm.c
> > > @@ -38,6 +38,7 @@
> > >  #include "xe_trace_bo.h"
> > >  #include "xe_wa.h"
> > >  #include "xe_hmm.h"
> > > +#include "i915_drv.h"
> > >
> > >  static struct drm_gem_object *xe_vm_obj(struct xe_vm *vm)
> > >  {
> > > @@ -2917,6 +2918,34 @@ static int xe_vm_bind_ioctl_validate_bo(struct xe_device *xe, struct xe_bo *bo,
> > >  	return 0;
> > >  }
> > >
> > > +static bool __xe_vm_needs_clear_scratch_pages(struct xe_device *xe,
> > > +					      struct xe_vm *vm, u32 bind_flags)
> > > +{
> > > +	if (!xe_vm_in_fault_mode(vm))
> > > +		return false;
> > > +
> > > +	if (!xe_vm_has_scratch(vm))
> > > +		return false;
> > > +
> > > +	if (bind_flags & DRM_XE_VM_BIND_FLAG_IMMEDIATE)
> > > +		return false;
> > > +
> > > +	if (!(IS_LUNARLAKE(xe) || IS_BATTLEMAGE(xe) || IS_PANTHERLAKE(xe)))
> > > +		return false;
> > > +
> > > +	return true;
> > > +}
> > > +
> > > +static void __xe_vm_clear_scratch_pages(struct xe_device *xe, struct xe_vm *vm,
> > > +					u64 start, u64 end)
> > > +{
> > > +	struct xe_tile *tile;
> > > +	u8 id;
> > > +
> > > +	for_each_tile(tile, xe, id)
> > > +		xe_pt_zap_range(tile, vm, start, end);
> > > +}
> > > +
> > >  int xe_vm_bind_ioctl(struct drm_device *dev, void *data, struct drm_file *file)
> > >  {
> > >  	struct xe_device *xe = to_xe_device(dev);
> > > @@ -3062,6 +3091,9 @@ int xe_vm_bind_ioctl(struct drm_device *dev, void *data, struct drm_file *file)
> > >  		u32 prefetch_region = bind_ops[i].prefetch_mem_region_instance;
> > >  		u16 pat_index = bind_ops[i].pat_index;
> > >
> > > +		if (__xe_vm_needs_clear_scratch_pages(xe, vm, flags))
> > > +			__xe_vm_clear_scratch_pages(xe, vm, addr, addr + range);
> > 
> > A few things...
> > 
> > - I believe this is only needed for user bind operations or internal
> >   MAP GPU VMA operations.
> 
> Did you mean this is only needed for user bind, but not for internal map?
>

Needed for user binds and also for internal MAP operations. Once you
update the bind pipeline, you will only need to care about MAP ops.
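
For illustration, a hedged sketch of that MAP-only filter once the clear
is wired through the bind pipeline -- this assumes xe_vma_op embeds a
drm_gpuva_op as in today's xe_vm_types.h, and op_wants_scratch_clear()
is an invented name, not existing driver code:

#include <drm/drm_gpuvm.h>
#include <uapi/drm/xe_drm.h>

#include "xe_vm_types.h"

/*
 * Sketch only: restrict the scratch-page clear to MAP ops that are not
 * immediate binds; other op types keep their existing handling.
 */
static bool op_wants_scratch_clear(struct xe_vma_op *op, u32 bind_flags)
{
	return op->base.op == DRM_GPUVA_OP_MAP &&
	       !(bind_flags & DRM_XE_VM_BIND_FLAG_IMMEDIATE);
}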
 
> > - I believe a TLB invalidation will be required.
> 
> My understanding is that NULL PTEs won't be cached in the TLB, so TLB invalidation is not needed.
> 

I would think NULL PTEs are cached, but I'm not certain either way.
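
If NULL PTEs do turn out to be cached, whatever clears the scratch
mappings needs a follow-up invalidation. A minimal sketch on top of the
patch's own helper, setting aside the zap-range limitation discussed
below -- xe_tile_tlb_inval_range() is a hypothetical stand-in for the
real per-tile range invalidation, not an existing Xe API:

static void __xe_vm_clear_scratch_pages(struct xe_device *xe, struct xe_vm *vm,
					u64 start, u64 end)
{
	struct xe_tile *tile;
	u8 id;

	for_each_tile(tile, xe, id) {
		xe_pt_zap_range(tile, vm, start, end);
		/* Hypothetical: flush any cached (NULL) PTEs for the range */
		xe_tile_tlb_inval_range(tile, vm, start, end);
	}
}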

> > - I don't think calling the zap-PTE-range helper works here, given how the scratch
> >   tables are set up (i.e., new PTEs need to be created pointing to an
> >   invalid state).
> 
> You are right. 
> 
> I didn't realize that using xe_pt_walk_shared to walk and zap PTEs has a limitation:
> for the virtual address range we want to zap, all the page tables have to already
> exist. This interface doesn't create new page tables. Even though xe_pt_walk_shared
> takes a range parameter (addr, end), the range can't be arbitrary.
> 
> Today only [xe_vma_start, xe_vma_end) is used to specify the xe_pt_walk_shared
> walking range. An arbitrary range won't work, as you pointed out. To me this is a
> small interface design issue. If you agree, I can re-parameterize xe_pt_walk_shared
> to take a VMA instead of addr/end. This way people won't make the same mistake in
> the future.
> 

Ah, no. SVM will use ranges, so I think (addr, end) are the right
parameters for internal PT functions.
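
For illustration, with (addr, end) as the internal parameterization both
callers reduce to the same helper; a VMA-only interface wouldn't compose
the other way around. Sketch only, using xe_vma_start()/xe_vma_end()
from xe_vm.h and the xe_pt_zap_range() proposed in this patch:

/* VMA-backed caller: the range is derived from the VMA */
static void zap_vma(struct xe_tile *tile, struct xe_vm *vm,
		    struct xe_vma *vma)
{
	xe_pt_zap_range(tile, vm, xe_vma_start(vma), xe_vma_end(vma));
}

/* SVM-style caller: the range is arbitrary, there is no VMA to pass */
static void zap_svm_range(struct xe_tile *tile, struct xe_vm *vm,
			  u64 start, u64 end)
{
	xe_pt_zap_range(tile, vm, start, end);
}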

> Anyway, I will follow the direction you gave below to rework this series.
> 

+1
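
For reference, a compressed sketch of the "invalidate on bind" flow from
the direction quoted below -- every struct and field name here is
illustrative shorthand, not the real xe_vm_types.h / xe_pt.c layout:

#include <linux/types.h>

/* 1) vm_bind_ioctl_ops_create() sets the bit on the MAP op. */
struct sketch_vma_op_map {
	bool invalidate_on_bind;
};

/* 2) The PT bind walk mirrors the bit. */
struct sketch_pt_bind_walk {
	bool clear_pt;	/* mirrors invalidate_on_bind */
};

/*
 * 3) Instead of pte_encode_vma(), emit a not-present PTE so the first
 *    GPU access takes a recoverable fault.
 */
static u64 sketch_stage_bind_entry(struct sketch_pt_bind_walk *walk,
				   u64 valid_pte)
{
	return walk->clear_pt ? 0 : valid_pte;
}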

Matt

> Oak
> 
> > - This series appears to be untested based on the points above.
> > 
> > Therefore, instead of this series, I believe you will need to fully
> > update the bind pipeline to process MAP GPU VMA operations here.
> > 
> > So roughly...
> > 
> > - Maybe include a bit in xe_vma_op_map that specifies "invalidate on
> >   bind," set in vm_bind_ioctl_ops_create, since this will need to be
> >   wired throughout the bind pipeline.
> > - Don't validate backing memory in this case.
> > - Ensure that xe_vma_ops_incr_pt_update_ops is called in this case for
> >   MAP operations, forcing entry into the xe_pt.c backend.
> > - Update xe_pt_stage_bind_walk with a variable that indicates clearing
> >   the PTE. Instead of calling pte_encode_vma in xe_pt_stage_bind_entry,
> >   set this variable for PT bind operations derived from MAP operations
> >   that meet the "invalidate on bind" condition.
> > - Ensure needs_invalidation is set in struct xe_vm_pgtable_update_ops
> >   if a MAP operation is included that meets the "invalidate on bind"
> >   condition.
> > - Set the VMA tile_invalidated in addition to tile_present for MAP
> >   operations that meet the "invalidate on bind" condition.
> > 
> > I might be missing some implementation details mentioned above, but
> > this should provide you with some direction.
> > 
> > Lastly, and perhaps most importantly, please test this using an IGT
> > and include the results in the next post.
> > 
> > Matt
> > 
> > > +
> > >  		ops[i] = vm_bind_ioctl_ops_create(vm, bos[i], obj_offset,
> > >  						  addr, range, op, flags,
> > >  						  prefetch_region, pat_index);
> > > --
> > > 2.26.3
> > >

