[PATCH v5 16/20] drm/xe/svm: Add xe_svm_range_validate_and_evict() function

Ghimiray, Himal Prasad himal.prasad.ghimiray at intel.com
Wed Apr 30 03:45:43 UTC 2025



On 30-04-2025 09:04, Ghimiray, Himal Prasad wrote:
> 
> 
> On 30-04-2025 00:12, Matthew Brost wrote:
>> On Tue, Apr 29, 2025 at 04:12:29PM +0530, Himal Prasad Ghimiray wrote:
>>> The xe_svm_range_validate_and_evict() function checks if a range is
>>> valid and located in the desired memory region. Additionally, if the
>>> range is valid in VRAM but the desired region is SMEM, it evicts the
>>> range to SMEM.
>>>
>>> v2
>>> - Fix function stub in xe_svm.h
>>> - Fix doc
>>>
>>> v3 (Matthew Brost)
>>> - Remove extra new line
>>> - s/range->base.flags.has_devmem_pages/xe_svm_range_in_vram
>>>
>>> Signed-off-by: Himal Prasad Ghimiray <himal.prasad.ghimiray at intel.com>
>>> ---
>>>   drivers/gpu/drm/xe/xe_svm.c | 37 +++++++++++++++++++++++++++++++++++++
>>>   drivers/gpu/drm/xe/xe_svm.h | 12 ++++++++++++
>>>   2 files changed, 49 insertions(+)
>>>
>>> diff --git a/drivers/gpu/drm/xe/xe_svm.c b/drivers/gpu/drm/xe/xe_svm.c
>>> index 90fae13b77ae..55c5373b7989 100644
>>> --- a/drivers/gpu/drm/xe/xe_svm.c
>>> +++ b/drivers/gpu/drm/xe/xe_svm.c
>>> @@ -637,6 +637,43 @@ static bool xe_svm_range_is_valid(struct xe_svm_range *range,
>>>           && (!devmem_only || range->base.flags.migrate_devmem);
>>>   }
>>> +/**
>>> + * xe_svm_range_validate_and_evict() - Check if the SVM range is valid
>>> + * @vm: xe_vm pointer
>>> + * @range: Pointer to the SVM range structure
>>> + * @tile_mask: Mask representing the tiles to be checked
>>> + * @devmem_only: if true, the range needs to be in devmem
>>> + *
>>> + * The xe_svm_range_validate_and_evict() function checks if a range is
>>> + * valid and located in the desired memory region. Additionally, if the
>>> + * range is valid in VRAM but the desired region is SMEM, it evicts the
>>> + * range to SMEM.
>>> + *
>>> + * Return: true if the range is valid, false otherwise
>>> + */
>>> +bool xe_svm_range_validate_and_evict(struct xe_vm *vm,
>>> +                     struct xe_svm_range *range,
>>> +                     u8 tile_mask, bool devmem_only)
>>
>> s/devmem_only/devmem_preferred
> 
> Sure
> 
>>
>>> +{
>>> +    bool range_evict = false;
>>> +    bool ret;
>>> +
>>> +    xe_svm_notifier_lock(vm);
>>> +
>>> +    ret = (range->tile_present & ~range->tile_invalidated & tile_mask) == tile_mask &&
>>> +           (devmem_only == xe_svm_range_in_vram(range));

I see xe_svm_range_in_vram() has moved to using READ_ONCE() in
https://patchwork.freedesktop.org/patch/650869/?series=147846&rev=5.

Since we are in agreement on taking the notifier lock here, how about
using range->base.flags.has_devmem_pages directly instead of
xe_svm_range_in_vram()?
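
Roughly what I have in mind (a sketch only, untested, with Matt's
devmem_only -> devmem_preferred rename applied):

	xe_svm_notifier_lock(vm);

	/* Valid on all requested tiles and already in the preferred region? */
	ret = (range->tile_present & ~range->tile_invalidated & tile_mask) == tile_mask &&
	      (devmem_preferred == range->base.flags.has_devmem_pages);

	xe_svm_notifier_unlock(vm);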


>>> +
>>> +    if (!ret && !devmem_only && xe_svm_range_in_vram(range))
>>> +        range_evict = true;
>>> +
>>> +    xe_svm_notifier_unlock(vm);
>>> +
>>> +    if (range_evict)
>>> +        drm_gpusvm_range_evict(&vm->svm.gpusvm, &range->base);
>>
>> Sorry, missed this earlier. I think this step should be left until
>> later in the software pipeline - e.g., in prefetch_ranges in the
>> following patch.
>>
>> Migrations are costly, and this is the step we'd want to thread for
>> performance. If some migrations are done in vm_bind_ioctl_ops_create
>> and others in prefetch_ranges, the threading logic becomes tricky
>> compared to doing all migrations in prefetch_ranges.
> 
> Agreed, will move to prefetch_ranges
> 
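
Something along these lines, perhaps (a rough sketch only - the exact
shape of prefetch_ranges() comes from the next patch, and the local
names here are illustrative):

	/* In prefetch_ranges(), once the helper no longer evicts: */
	valid = xe_svm_range_validate(vm, svm_range, tile_mask, devmem_preferred);
	if (!valid && !devmem_preferred && xe_svm_range_in_vram(svm_range))
		drm_gpusvm_range_evict(&vm->svm.gpusvm, &svm_range->base);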
>>
>> Matt
>>
>>> +
>>> +    return ret;
>>> +}
>>> +
>>>   #if IS_ENABLED(CONFIG_DRM_XE_DEVMEM_MIRROR)
>>>   static struct xe_vram_region *tile_to_vr(struct xe_tile *tile)
>>>   {
>>> diff --git a/drivers/gpu/drm/xe/xe_svm.h b/drivers/gpu/drm/xe/xe_svm.h
>>> index 9be7bb25725c..e6f71ad0b17b 100644
>>> --- a/drivers/gpu/drm/xe/xe_svm.h
>>> +++ b/drivers/gpu/drm/xe/xe_svm.h
>>> @@ -83,6 +83,10 @@ int xe_svm_range_get_pages(struct xe_vm *vm, struct xe_svm_range *range,
>>>   bool xe_svm_range_needs_migrate_to_vram(struct xe_svm_range *range, struct xe_vma *vma,
>>>                       bool preferred_region_is_vram);
>>> +bool xe_svm_range_validate_and_evict(struct xe_vm *vm,
>>> +                     struct xe_svm_range *range,
>>> +                     u8 tile_mask, bool devmem_only);
>>> +
>>>   /**
>>>    * xe_svm_range_has_dma_mapping() - SVM range has DMA mapping
>>>    * @range: SVM range
>>> @@ -276,6 +280,14 @@ bool xe_svm_range_needs_migrate_to_vram(struct xe_svm_range *range, struct xe_vm
>>>       return false;
>>>   }
>>> +static inline
>>> +bool xe_svm_range_validate_and_evict(struct xe_vm *vm,
>>> +                     struct xe_svm_range *range,
>>> +                     u8 tile_mask, bool devmem_only)
>>> +{
>>> +    return false;
>>> +}
>>> +
>>>   #define xe_svm_assert_in_notifier(...) do {} while (0)
>>>   #define xe_svm_range_has_dma_mapping(...) false
>>> -- 
>>> 2.34.1
>>>
> 


