[PATCH v4 16/18] drm/xe/svm: Add xe_svm_range_validate_and_evict() function
Matthew Brost
matthew.brost at intel.com
Mon Apr 28 21:19:07 UTC 2025
On Mon, Apr 28, 2025 at 04:02:22PM +0530, Himal Prasad Ghimiray wrote:
> The xe_svm_range_validate_and_evict() function checks if a range is
> valid and located in the desired memory region. Additionally, if the
> range is valid in VRAM but the desired region is SMEM, it evicts the
> range to SMEM.
>
> v2
> - Fix function stub in xe_svm.h
> - Fix doc
>
> Signed-off-by: Himal Prasad Ghimiray <himal.prasad.ghimiray at intel.com>
> ---
> drivers/gpu/drm/xe/xe_svm.c | 37 +++++++++++++++++++++++++++++++++++++
> drivers/gpu/drm/xe/xe_svm.h | 12 ++++++++++++
> 2 files changed, 49 insertions(+)
>
> diff --git a/drivers/gpu/drm/xe/xe_svm.c b/drivers/gpu/drm/xe/xe_svm.c
> index 90fae13b77ae..0e672e1e68c3 100644
> --- a/drivers/gpu/drm/xe/xe_svm.c
> +++ b/drivers/gpu/drm/xe/xe_svm.c
> @@ -637,6 +637,43 @@ static bool xe_svm_range_is_valid(struct xe_svm_range *range,
> && (!devmem_only || range->base.flags.migrate_devmem);
> }
>
> +/**
> + * xe_svm_range_validate_and_evict() - Check if the SVM range is valid
> + * @vm: xe_vm pointer
> + * @range: Pointer to the SVM range structure
> + * @tile_mask: Mask representing the tiles to be checked
> + * @devmem_only: if true range needs to be in devmem
> + *
> + * The xe_svm_range_validate_and_evict() function checks if a range is
> + * valid and located in the desired memory region. Additionally, if the
> + * range is valid in VRAM but the desired region is SMEM, it evicts the
> + * range to SMEM.
> + *
> + * Return: true if the range is valid, false otherwise
> + */
> +
Extra newline between the kernel-doc comment and the function definition.
Again, this will need to be refactored for multi-GPU, but it is fine for now.
> +bool xe_svm_range_validate_and_evict(struct xe_vm *vm,
> + struct xe_svm_range *range,
> + u8 tile_mask, bool devmem_only)
> +{
> + bool range_evict = false;
> + bool ret;
> +
> + xe_svm_notifier_lock(vm);
> +
I think Thomas and I landed on being ok with opportunistic checks without
the notifier lock in [1]. I think we can follow that approach here - use
READ_ONCE where appropriate and document that the checks are
opportunistic.
[1] https://patchwork.freedesktop.org/patch/649590/?series=147846&rev=4#comment_1189676
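Roughly something like this (untested sketch on my side; it assumes the
callers can tolerate a stale answer since nothing is serialized here, and
it otherwise keeps the logic from your patch):

bool xe_svm_range_validate_and_evict(struct xe_vm *vm,
				     struct xe_svm_range *range,
				     u8 tile_mask, bool devmem_only)
{
	bool ret;

	/*
	 * Opportunistic checks - no notifier lock held, so the range state
	 * can change under us. Callers must tolerate a stale result (worst
	 * case an extra fault / retry).
	 */
	ret = (READ_ONCE(range->tile_present) &
	       ~READ_ONCE(range->tile_invalidated) & tile_mask) == tile_mask &&
	      (devmem_only == range->base.flags.has_devmem_pages);

	if (!ret && !devmem_only && range->base.flags.has_devmem_pages)
		drm_gpusvm_range_evict(&vm->svm.gpusvm, &range->base);

	return ret;
}

With the lock gone, the evict call no longer needs to be deferred past
the unlock via range_evict.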
> + ret = (range->tile_present & ~range->tile_invalidated & tile_mask) == tile_mask &&
> + (devmem_only == range->base.flags.has_devmem_pages);
> +
> + if (!ret && !devmem_only && range->base.flags.has_devmem_pages)
s/range->base.flags.has_devmem_pages/xe_svm_range_in_vram
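i.e., just swapping the helper in and keeping the rest as-is:

	if (!ret && !devmem_only && xe_svm_range_in_vram(range))
		range_evict = true;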
Matt
> + range_evict = true;
> +
> + xe_svm_notifier_unlock(vm);
> +
> + if (range_evict)
> + drm_gpusvm_range_evict(&vm->svm.gpusvm, &range->base);
> + return ret;
> +}
> +
> #if IS_ENABLED(CONFIG_DRM_XE_DEVMEM_MIRROR)
> static struct xe_vram_region *tile_to_vr(struct xe_tile *tile)
> {
> diff --git a/drivers/gpu/drm/xe/xe_svm.h b/drivers/gpu/drm/xe/xe_svm.h
> index 9be7bb25725c..e6f71ad0b17b 100644
> --- a/drivers/gpu/drm/xe/xe_svm.h
> +++ b/drivers/gpu/drm/xe/xe_svm.h
> @@ -83,6 +83,10 @@ int xe_svm_range_get_pages(struct xe_vm *vm, struct xe_svm_range *range,
> bool xe_svm_range_needs_migrate_to_vram(struct xe_svm_range *range, struct xe_vma *vma,
> bool preferred_region_is_vram);
>
> +bool xe_svm_range_validate_and_evict(struct xe_vm *vm,
> + struct xe_svm_range *range,
> + u8 tile_mask, bool devmem_only);
> +
> /**
> * xe_svm_range_has_dma_mapping() - SVM range has DMA mapping
> * @range: SVM range
> @@ -276,6 +280,14 @@ bool xe_svm_range_needs_migrate_to_vram(struct xe_svm_range *range, struct xe_vm
> return false;
> }
>
> +static inline
> +bool xe_svm_range_validate_and_evict(struct xe_vm *vm,
> + struct xe_svm_range *range,
> + u8 tile_mask, bool devmem_only)
> +{
> + return false;
> +}
> +
> #define xe_svm_assert_in_notifier(...) do {} while (0)
> #define xe_svm_range_has_dma_mapping(...) false
>
> --
> 2.34.1
>