[PATCH 4/7] drm/xe: Relax runtime pm protection around VM

Thomas Hellström thomas.hellstrom at linux.intel.com
Mon May 13 13:16:38 UTC 2024


On Thu, 2024-05-09 at 15:48 +0000, Matthew Brost wrote:
> On Wed, May 08, 2024 at 04:07:04PM -0400, Rodrigo Vivi wrote:
> > In the regular use case scenario, user space will create a
> > VM, and keep it alive for the entire duration of its workload.
> > 
> > For the regular desktop case, this means the VM stays alive even
> > in idle scenarios where the display is off. This is unacceptable
> > since it would block runtime PM indefinitely, preventing deeper
> > Package-C states and needlessly wasting power.
> > 
> > Limit the VM protection solely to long-running workloads, which are
> > not covered by the display cases nor by the scheduler references.
> > By design, run_job for long-running workloads returns NULL and the
> > scheduler drops all of its references, hence protecting the VM
> > explicitly is necessary in this case.
> > 
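As an aside for readers less familiar with LR mode: the behaviour described
above boils down to a run_job backend callback that submits the job but
hands no fence back to the DRM scheduler, roughly like the sketch below
(illustrative only, not the actual Xe code; the submission step is elided):

#include <drm/gpu_scheduler.h>
#include <linux/dma-fence.h>

/*
 * Illustrative run_job for a long-running queue: the job is pushed to
 * the hardware ring, but no fence is handed back, so the scheduler
 * holds nothing that would keep the VM (or runtime PM) alive.
 */
static struct dma_fence *lr_sketch_run_job(struct drm_sched_job *sched_job)
{
	/* ... submit sched_job to the hardware here ... */

	return NULL;	/* LR mode: nothing for the scheduler to track */
}

Hence the runtime PM reference has to be tied to the VM object itself,
which is what the xe_vm_create() hunk below does.
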
> > This indeed opens up a risk for the case without a display and
> > without a long-running workload, where memory might be mapped and
> > accessed with direct read and write operations without any GPU
> > execution involved. Because of this, add extra protection to the
> > special vm_ops access callback.
> > 
> > In the ideal, balanced case for the mmapped vm_ops scenario, we
> > would also take references in the 'open' and 'mmap' callbacks and
> > put them back in the 'close' callback. However, this would also
> > block the regular desktop case.
> > 
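For reference, the balanced-but-rejected alternative described above would
look roughly like this (sketch only; the xe_vm_ops_open()/xe_vm_ops_close()
names are hypothetical and not part of this patch):

/*
 * Hypothetical "balanced" vm_ops: hold a runtime PM reference for the
 * lifetime of every mapping.  Correct on its own, but it would keep
 * the device awake for as long as a BO stays mmapped, which is exactly
 * the idle desktop case the patch wants to keep unblocked.
 */
static void xe_vm_ops_open(struct vm_area_struct *vma)
{
	struct ttm_buffer_object *tbo = vma->vm_private_data;

	xe_pm_runtime_get(to_xe_device(tbo->base.dev));
	ttm_bo_vm_open(vma);
}

static void xe_vm_ops_close(struct vm_area_struct *vma)
{
	struct ttm_buffer_object *tbo = vma->vm_private_data;

	ttm_bo_vm_close(vma);
	xe_pm_runtime_put(to_xe_device(tbo->base.dev));
}
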
> > v2: Update commit message to use more imperative language and to
> >     reflect why the VM protection is really needed.
> >     Also add a comment in the code to make the reason visible.
> > 
> > Cc: Thomas Hellström <thomas.hellstrom at linux.intel.com>
> > Cc: Lucas De Marchi <lucas.demarchi at intel.com>
> > Cc: Matthew Brost <matthew.brost at intel.com>
> > Cc: Francois Dugast <francois.dugast at intel.com>
> > Acked-by: Matthew Brost <matthew.brost at intel.com>
> > Signed-off-by: Rodrigo Vivi <rodrigo.vivi at intel.com>
> > ---
> >  drivers/gpu/drm/xe/xe_bo.c | 17 ++++++++++++++++-
> >  drivers/gpu/drm/xe/xe_vm.c | 12 +++++++++---
> >  2 files changed, 25 insertions(+), 4 deletions(-)
> > 
> > diff --git a/drivers/gpu/drm/xe/xe_bo.c b/drivers/gpu/drm/xe/xe_bo.c
> > index 03f7fe7acf8c..7980efe139ed 100644
> > --- a/drivers/gpu/drm/xe/xe_bo.c
> > +++ b/drivers/gpu/drm/xe/xe_bo.c
> > @@ -1171,11 +1171,26 @@ static vm_fault_t xe_gem_fault(struct vm_fault *vmf)
> >  	return ret;
> >  }
> >  
> > +static int xe_vm_access(struct vm_area_struct *vma, unsigned long addr,
> > +			void *buf, int len, int write)
> > +{
> > +	struct ttm_buffer_object *tbo = vma->vm_private_data;
> > +	struct drm_device *ddev = tbo->base.dev;
> > +	struct xe_device *xe = to_xe_device(ddev);
> > +	int ret;
> > +
> > +	xe_pm_runtime_get(xe);
> > +	ret = ttm_bo_vm_access(vma, addr, buf, len, write);
> 
> Trying to understand this case. Looking at ttm_bo_vm_access, it
> appears to be a function in which a CPU VMA is read / written when it
> has a backing store of a TTM BO. System and TT placements default to
> a TTM helper, while VRAM access is implemented via the access_memory
> vfunc, which we do not implement in Xe. Is this something we are
> missing?

Yes, looks like it. It's used by access_process_vm(). It looks like
that's used by ptrace.
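From memory, the core of ttm_bo_vm_access() dispatches roughly like this
(paraphrased, not verbatim), which is why an access to a VRAM-placed BO
would currently return -EIO for us:

	/* Paraphrased dispatch inside ttm_bo_vm_access(), illustration only */
	switch (bo->resource->mem_type) {
	case TTM_PL_SYSTEM:
	case TTM_PL_TT:
		/* CPU-visible pages: TTM can kmap and copy directly */
		ret = ttm_bo_vm_access_kmap(bo, offset, buf, len, write);
		break;
	default:
		/* e.g. VRAM: defer to the driver, if it provides a hook */
		if (bo->bdev->funcs->access_memory)
			ret = bo->bdev->funcs->access_memory(bo, offset,
							     buf, len, write);
		else
			ret = -EIO;
	}

So until we wire up an access_memory hook, ptrace access to VRAM-placed
BOs will keep failing; that looks like material for a follow-up patch.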

/Thomas

> 
> Patch itself makes sense, have a PM ref when accessing memory.
> 
> Matt
> 
> > +	xe_pm_runtime_put(xe);
> > +
> > +	return ret;
> > +}
> > +
> >  static const struct vm_operations_struct xe_gem_vm_ops = {
> >  	.fault = xe_gem_fault,
> >  	.open = ttm_bo_vm_open,
> >  	.close = ttm_bo_vm_close,
> > -	.access = ttm_bo_vm_access
> > +	.access = xe_vm_access
> >  };
> >  
> >  static const struct drm_gem_object_funcs xe_gem_object_funcs = {
> > diff --git a/drivers/gpu/drm/xe/xe_vm.c b/drivers/gpu/drm/xe/xe_vm.c
> > index d17192c8b7de..f2915741fe16 100644
> > --- a/drivers/gpu/drm/xe/xe_vm.c
> > +++ b/drivers/gpu/drm/xe/xe_vm.c
> > @@ -1347,7 +1347,13 @@ struct xe_vm *xe_vm_create(struct xe_device *xe, u32 flags)
> >  
> >  	vm->pt_ops = &xelp_pt_ops;
> >  
> > -	if (!(flags & XE_VM_FLAG_MIGRATION))
> > +	/*
> > +	 * Long-running workloads are not protected by the scheduler
> > +	 * references. By design, run_job for long-running workloads returns
> > +	 * NULL and the scheduler drops all the references of it, hence
> > +	 * protecting the VM for this case is necessary.
> > +	 */
> > +	if (flags & XE_VM_FLAG_LR_MODE)
> >  		xe_pm_runtime_get_noresume(xe);
> >  
> >  	vm_resv_obj = drm_gpuvm_resv_object_alloc(&xe->drm);
> > @@ -1457,7 +1463,7 @@ struct xe_vm *xe_vm_create(struct xe_device *xe, u32 flags)
> >  	for_each_tile(tile, xe, id)
> >  		xe_range_fence_tree_fini(&vm->rftree[id]);
> >  	kfree(vm);
> > -	if (!(flags & XE_VM_FLAG_MIGRATION))
> > +	if (flags & XE_VM_FLAG_LR_MODE)
> >  		xe_pm_runtime_put(xe);
> >  	return ERR_PTR(err);
> >  }
> > @@ -1592,7 +1598,7 @@ static void vm_destroy_work_func(struct work_struct *w)
> >  
> >  	mutex_destroy(&vm->snap_mutex);
> >  
> > -	if (!(vm->flags & XE_VM_FLAG_MIGRATION))
> > +	if (vm->flags & XE_VM_FLAG_LR_MODE)
> >  		xe_pm_runtime_put(xe);
> >  
> >  	for_each_tile(tile, xe, id)
> > -- 
> > 2.44.0
> > 


