[PATCH v5 08/23] drm/xe: Allow CPU address mirror VMA unbind with gpu bindings for madvise

Ghimiray, Himal Prasad himal.prasad.ghimiray at intel.com
Tue Jul 29 07:42:09 UTC 2025



On 29-07-2025 09:10, Matthew Brost wrote:
> On Tue, Jul 22, 2025 at 07:05:11PM +0530, Himal Prasad Ghimiray wrote:
>> In the case of the MADVISE ioctl, if the start or end address falls
>> within a VMA and existing SVM ranges are present, remove the existing
>> SVM mappings. Then continue with ops_parse to create new VMAs via a
>> REMAP unmap of the old one.
>>
>> v2 (Matthew Brost)
>> - Use vops flag to call unmapping of ranges in vm_bind_ioctl_ops_parse
>> - Rename the function
>>
>> v3
>> - Fix doc
>>
>> Signed-off-by: Himal Prasad Ghimiray <himal.prasad.ghimiray at intel.com>
>> ---
>>   drivers/gpu/drm/xe/xe_svm.c | 28 ++++++++++++++++++++++++++++
>>   drivers/gpu/drm/xe/xe_svm.h |  7 +++++++
>>   drivers/gpu/drm/xe/xe_vm.c  |  8 ++++++--
>>   3 files changed, 41 insertions(+), 2 deletions(-)
>>
>> diff --git a/drivers/gpu/drm/xe/xe_svm.c b/drivers/gpu/drm/xe/xe_svm.c
>> index a7ff5975873f..ce8a71b80811 100644
>> --- a/drivers/gpu/drm/xe/xe_svm.c
>> +++ b/drivers/gpu/drm/xe/xe_svm.c
>> @@ -933,6 +933,34 @@ bool xe_svm_has_mapping(struct xe_vm *vm, u64 start, u64 end)
>>   	return drm_gpusvm_has_mapping(&vm->svm.gpusvm, start, end);
>>   }
>>   
>> +/**
>> + * xe_svm_unmap_address_range - Unmap SVM mappings and ranges
>> + * @vm: The VM
>> + * @start: start address
>> + * @end: end address
>> + *
>> + * This function unmaps SVM ranges if the start or end address falls inside them.
>> + */
>> +void xe_svm_unmap_address_range(struct xe_vm *vm, u64 start, u64 end)
>> +{
>> +	struct drm_gpusvm_notifier *notifier, *next;
>> +
>> +	lockdep_assert_held_write(&vm->lock);
>> +
>> +	drm_gpusvm_for_each_notifier_safe(notifier, next, &vm->svm.gpusvm, start, end) {
>> +		struct drm_gpusvm_range *range, *__next;
>> +
>> +		drm_gpusvm_for_each_range_safe(range, __next, notifier, start, end) {
>> +			if (start > drm_gpusvm_range_start(range) ||
>> +			    end < drm_gpusvm_range_end(range)) {
>> +				if (IS_DGFX(vm->xe) && xe_svm_range_in_vram(to_xe_range(range)))
>> +					drm_gpusvm_range_evict(&vm->svm.gpusvm, range);
>> +				__xe_svm_garbage_collector(vm, to_xe_range(range));
> 
> There is a corner case here - the range could be in the garbage collector
> list...
> 
> I think to fix you have to do this:
> 
> drm_gpusvm_range_get(range);
> __xe_svm_garbage_collector(vm, to_xe_range(range));
> if (!list_empty(&to_xe_range(range)->garbage_collector_link)) {
> 	spin_lock(&vm->svm.garbage_collector.list_lock);
> 	list_del(&to_xe_range(range)->garbage_collector_link);	
> 	spin_unlock(&vm->svm.garbage_collector.list_lock);
> }
> drm_gpusvm_range_put(range);
> 
> A little convoluted as it is only safe to check if the range is in the
> garbage collector list after it has been removed from the notifier,
> hence the need for extra ref counting here.

Makes sense, will update in next version.
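
Roughly, assuming the helpers from your snippet and the existing code
above, the updated loop body would then look like this (untested
sketch):

drm_gpusvm_for_each_range_safe(range, __next, notifier, start, end) {
	if (start > drm_gpusvm_range_start(range) ||
	    end < drm_gpusvm_range_end(range)) {
		if (IS_DGFX(vm->xe) && xe_svm_range_in_vram(to_xe_range(range)))
			drm_gpusvm_range_evict(&vm->svm.gpusvm, range);

		/* Hold a reference so the range survives removal from the notifier */
		drm_gpusvm_range_get(range);
		__xe_svm_garbage_collector(vm, to_xe_range(range));

		/*
		 * Checking the garbage collector link is only safe once the
		 * range has been removed from the notifier.
		 */
		if (!list_empty(&to_xe_range(range)->garbage_collector_link)) {
			spin_lock(&vm->svm.garbage_collector.list_lock);
			list_del(&to_xe_range(range)->garbage_collector_link);
			spin_unlock(&vm->svm.garbage_collector.list_lock);
		}
		drm_gpusvm_range_put(range);
	}
}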
>
> Also I believe this will need an IGT specifically to test this
> code path.

It's part of the plan and is being tested by me; instead of a first
fault, I am doing a prefetch which populates the 2 MiB range.

> 
> Roughly...
> 
> buf = aligned_alloc(SZ_2M, SZ_2M);
> fault_in_buf_on_gpu();
> madvise(buf, SZ_1M, some attribute);
> fault_in_buf_on_gpu();	/* Ideally showing different behavior between 2 chunks */
> read_buf_back_via_cpu();

Thanks.
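
For reference, the flow under test is roughly the below (illustrative
pseudo-IGT; prefetch_buf_on_gpu(), fault_in_buf_on_gpu(),
read_buf_back_via_cpu() and xe_madvise() are stand-ins for the actual
helpers/ioctl wrapper):

buf = aligned_alloc(SZ_2M, SZ_2M);

/* Prefetch instead of a first fault - populates the full 2 MiB range */
prefetch_buf_on_gpu(buf, SZ_2M);

/* madvise on the first half - splits the existing 2 MiB SVM range */
xe_madvise(fd, vm, buf, SZ_1M, some_attribute);

/* Fault again - the two 1 MiB chunks should behave differently now */
fault_in_buf_on_gpu(buf, SZ_2M);

/* Verify the contents on the CPU side */
read_buf_back_via_cpu(buf, SZ_2M);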

> 
> Matt
> 
>> +			}
>> +		}
>> +	}
>> +}
>> +
>>   /**
>>    * xe_svm_bo_evict() - SVM evict BO to system memory
>>    * @bo: BO to evict
>> diff --git a/drivers/gpu/drm/xe/xe_svm.h b/drivers/gpu/drm/xe/xe_svm.h
>> index da9a69ea0bb1..754d56b4d255 100644
>> --- a/drivers/gpu/drm/xe/xe_svm.h
>> +++ b/drivers/gpu/drm/xe/xe_svm.h
>> @@ -90,6 +90,8 @@ bool xe_svm_range_validate(struct xe_vm *vm,
>>   
>>   u64 xe_svm_find_vma_start(struct xe_vm *vm, u64 addr, u64 end,  struct xe_vma *vma);
>>   
>> +void xe_svm_unmap_address_range(struct xe_vm *vm, u64 start, u64 end);
>> +
>>   /**
>>    * xe_svm_range_has_dma_mapping() - SVM range has DMA mapping
>>    * @range: SVM range
>> @@ -303,6 +305,11 @@ u64 xe_svm_find_vma_start(struct xe_vm *vm, u64 addr, u64 end, struct xe_vma *vm
>>   	return ULONG_MAX;
>>   }
>>   
>> +static inline
>> +void xe_svm_unmap_address_range(struct xe_vm *vm, u64 start, u64 end)
>> +{
>> +}
>> +
>>   #define xe_svm_assert_in_notifier(...) do {} while (0)
>>   #define xe_svm_range_has_dma_mapping(...) false
>>   
>> diff --git a/drivers/gpu/drm/xe/xe_vm.c b/drivers/gpu/drm/xe/xe_vm.c
>> index a56384325f4d..7f3d0ad04b3f 100644
>> --- a/drivers/gpu/drm/xe/xe_vm.c
>> +++ b/drivers/gpu/drm/xe/xe_vm.c
>> @@ -2663,8 +2663,12 @@ static int vm_bind_ioctl_ops_parse(struct xe_vm *vm, struct drm_gpuva_ops *ops,
>>   				end = op->base.remap.next->va.addr;
>>   
>>   			if (xe_vma_is_cpu_addr_mirror(old) &&
>> -			    xe_svm_has_mapping(vm, start, end))
>> -				return -EBUSY;
>> +			    xe_svm_has_mapping(vm, start, end)) {
>> +				if (vops->flags & XE_VMA_OPS_FLAG_MADVISE)
>> +					xe_svm_unmap_address_range(vm, start, end);
>> +				else
>> +					return -EBUSY;
>> +			}
>>   
>>   			op->remap.start = xe_vma_start(old);
>>   			op->remap.range = xe_vma_size(old);
>> -- 
>> 2.34.1
>>


