[RFC 26/34] drm/xe: VMs don't need the mem_access protection anymore

Matthew Auld matthew.auld at intel.com
Mon Feb 5 13:29:57 UTC 2024


On 26/01/2024 20:30, Rodrigo Vivi wrote:
> All the VM operations are now protected either by the IOCTL
> calls, by sched/exec operations, or by the specific
> xe_pm_runtime_{get,put} held for the duration of the LR VMs.
> 
> So, these inner mem_access protections can be removed.

Would this be a good place to split the series? AFAICT a lot of the 
nightmares originate from here. The first part of the series moves the 
rpm to be the outermost thing, but we would still keep the important vm 
rpm protection here, so we shouldn't need the CT, sched stuff etc yet. 
And in theory there is still a path to d3cold without this patch, I 
think. Also, playing devil's advocate: since we already end up having 
rpm protection for the LR vm, why not the normal vm too? Is it worth it?

> 
> Signed-off-by: Rodrigo Vivi <rodrigo.vivi at intel.com>
> ---
>   drivers/gpu/drm/xe/xe_vm.c | 7 -------
>   1 file changed, 7 deletions(-)
> 
> diff --git a/drivers/gpu/drm/xe/xe_vm.c b/drivers/gpu/drm/xe/xe_vm.c
> index 9ad154a01ad7..f314ff128028 100644
> --- a/drivers/gpu/drm/xe/xe_vm.c
> +++ b/drivers/gpu/drm/xe/xe_vm.c
> @@ -1280,9 +1280,6 @@ struct xe_vm *xe_vm_create(struct xe_device *xe, u32 flags)
>   
>   	vm->pt_ops = &xelp_pt_ops;
>   
> -	if (!(flags & XE_VM_FLAG_MIGRATION))
> -		xe_device_mem_access_get(xe);
> -
>   	vm_resv_obj = drm_gpuvm_resv_object_alloc(&xe->drm);
>   	if (!vm_resv_obj) {
>   		err = -ENOMEM;
> @@ -1391,8 +1388,6 @@ struct xe_vm *xe_vm_create(struct xe_device *xe, u32 flags)
>   	for_each_tile(tile, xe, id)
>   		xe_range_fence_tree_fini(&vm->rftree[id]);
>   	kfree(vm);
> -	if (!(flags & XE_VM_FLAG_MIGRATION))
> -		xe_device_mem_access_put(xe);
>   	return ERR_PTR(err);
>   }
>   
> @@ -1515,8 +1510,6 @@ static void vm_destroy_work_func(struct work_struct *w)
>   	xe_assert(xe, !vm->size);
>   
>   	if (!(vm->flags & XE_VM_FLAG_MIGRATION)) {
> -		xe_device_mem_access_put(xe);
> -
>   		if (xe->info.has_asid && vm->usm.asid) {
>   			mutex_lock(&xe->usm.lock);
>   			lookup = xa_erase(&xe->usm.asid_to_vm, vm->usm.asid);

