[PATCH v2 14/29] drm/xe: Do not allow system allocator VMA unbind if the GPU has bindings
Thomas Hellström
thomas.hellstrom@linux.intel.com
Tue Nov 19 16:33:11 UTC 2024
On Tue, 2024-10-15 at 20:25 -0700, Matthew Brost wrote:
> uAPI is designed with the use case that only mapping a BO to a
> malloc'd address will unbind a system allocator VMA. Thus it doesn't
> make tons of sense to allow a system allocator VMA unbind if the GPU
> has
> bindings in the range being unbound. Do not support this as it
> simplifies the code. Can always be revisited if a use case for this
> arrises.
s/arrises/arises
I think a uAPI without special cases like this would be ideal. What is
the code simplification, given that we already support this implicitly?
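
For concreteness, the userspace-visible rule seems to be the following
(a minimal sketch against the vm_bind uAPI in xe_drm.h; the error
propagation out of the parse step is my assumption from the hunks
below, and the struct field names are from the upstream header, which
this series may still change):

    #include <errno.h>
    #include <sys/ioctl.h>
    #include <drm/xe_drm.h>

    /* Assume [addr, addr + range) is malloc'd memory the GPU has
     * already faulted on, i.e. the system allocator VMA covering it
     * has real GPU bindings. */
    static int try_explicit_unbind(int fd, __u32 vm_id, __u64 addr,
                                   __u64 range)
    {
            struct drm_xe_vm_bind bind = {
                    .vm_id = vm_id,
                    .num_binds = 1,
                    .bind = {
                            .op = DRM_XE_VM_BIND_OP_UNMAP,
                            .addr = addr,
                            .range = range,
                    },
            };

            /* With this patch, vm_bind_ioctl_ops_parse() hits
             * xe_svm_has_mapping() and the whole ioctl fails with
             * EBUSY instead of unbinding. */
            if (ioctl(fd, DRM_IOCTL_XE_VM_BIND, &bind))
                    return -errno;  /* expected: -EBUSY */
            return 0;
    }

whereas an OP_MAP of a BO over the same range is still allowed and
tears the system allocator VMA down implicitly, which is the one unbind
path the commit message wants to keep.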
Thanks,
/Thomas
>
> Signed-off-by: Matthew Brost <matthew.brost@intel.com>
> ---
> drivers/gpu/drm/xe/xe_svm.c | 5 +++++
> drivers/gpu/drm/xe/xe_svm.h | 1 +
> drivers/gpu/drm/xe/xe_vm.c | 16 ++++++++++++++++
> 3 files changed, 22 insertions(+)
>
> diff --git a/drivers/gpu/drm/xe/xe_svm.c b/drivers/gpu/drm/xe/xe_svm.c
> index 0762126f65e0..1d8021b4e2f0 100644
> --- a/drivers/gpu/drm/xe/xe_svm.c
> +++ b/drivers/gpu/drm/xe/xe_svm.c
> @@ -378,3 +378,8 @@ int xe_svm_handle_pagefault(struct xe_vm *vm, struct xe_vma *vma,
>  err_out:
>  	return err;
>  }
> +
> +bool xe_svm_has_mapping(struct xe_vm *vm, u64 start, u64 end)
> +{
> +	return drm_gpusvm_has_mapping(&vm->svm.gpusvm, start, end);
> +}
> diff --git a/drivers/gpu/drm/xe/xe_svm.h b/drivers/gpu/drm/xe/xe_svm.h
> index 06d90d0f71a6..472fbc51f30e 100644
> --- a/drivers/gpu/drm/xe/xe_svm.h
> +++ b/drivers/gpu/drm/xe/xe_svm.h
> @@ -29,6 +29,7 @@ void xe_svm_close(struct xe_vm *vm);
>  int xe_svm_handle_pagefault(struct xe_vm *vm, struct xe_vma *vma,
>  			    struct xe_tile *tile, u64 fault_addr,
>  			    bool atomic);
> +bool xe_svm_has_mapping(struct xe_vm *vm, u64 start, u64 end);
>
>  static inline bool xe_svm_range_pages_valid(struct xe_svm_range *range)
>  {
> diff --git a/drivers/gpu/drm/xe/xe_vm.c b/drivers/gpu/drm/xe/xe_vm.c
> index 76a20e96084e..158fbb1c3f28 100644
> --- a/drivers/gpu/drm/xe/xe_vm.c
> +++ b/drivers/gpu/drm/xe/xe_vm.c
> @@ -2348,6 +2348,17 @@ static int vm_bind_ioctl_ops_parse(struct xe_vm *vm, struct drm_gpuva_ops *ops,
>  			struct xe_vma *old =
>  				gpuva_to_vma(op->base.remap.unmap->va);
>  			bool skip = xe_vma_is_system_allocator(old);
> +			u64 start = xe_vma_start(old), end = xe_vma_end(old);
> +
> +			if (op->base.remap.prev)
> +				start = op->base.remap.prev->va.addr +
> +					op->base.remap.prev->va.range;
> +			if (op->base.remap.next)
> +				end = op->base.remap.next->va.addr;
> +
> +			if (xe_vma_is_system_allocator(old) &&
> +			    xe_svm_has_mapping(vm, start, end))
> +				return -EBUSY;
> 
>  			op->remap.start = xe_vma_start(old);
>  			op->remap.range = xe_vma_size(old);
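
(Worked example, to check my reading of the narrowing above: if old
spans [0x10000, 0x50000) and the unbind covers [0x20000, 0x30000),
GPUVM keeps prev = [0x10000, 0x20000) and next = [0x30000, 0x50000),
so start becomes 0x20000 and end becomes 0x30000, and only the hole
actually going away is checked for GPU bindings.)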
> @@ -2430,6 +2441,11 @@ static int vm_bind_ioctl_ops_parse(struct xe_vm *vm, struct drm_gpuva_ops *ops,
>  		{
>  			struct xe_vma *vma = gpuva_to_vma(op->base.unmap.va);
> 
> +			if (xe_vma_is_system_allocator(vma) &&
> +			    xe_svm_has_mapping(vm, xe_vma_start(vma),
> +					       xe_vma_end(vma)))
> +				return -EBUSY;
> +
>  			if (!xe_vma_is_system_allocator(vma))
>  				xe_vma_ops_incr_pt_update_ops(vops, op->tile_mask);
>  			break;
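
(And the UNMAP case has no surviving prev/next pieces, so the whole
[xe_vma_start(vma), xe_vma_end(vma)) span is checked: GPU bindings
anywhere in the VMA make the full unbind fail with -EBUSY.)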