[PATCH v2] drm/xe/eudebug: implement userptr_vma access
Cavitt, Jonathan
jonathan.cavitt at intel.com
Mon Aug 5 19:20:02 UTC 2024
-----Original Message-----
From: Intel-xe <intel-xe-bounces at lists.freedesktop.org> On Behalf Of Andrzej Hajda
Sent: Monday, August 5, 2024 9:54 AM
To: intel-xe at lists.freedesktop.org; Brost, Matthew <matthew.brost at intel.com>
Cc: Hajda, Andrzej <andrzej.hajda at intel.com>; Mika Kuoppala <mika.kuoppala at linux.intel.com>; Patelczyk, Maciej <maciej.patelczyk at intel.com>
Subject: [PATCH v2] drm/xe/eudebug: implement userptr_vma access
>
> The debugger needs to read/write the program's VMAs, including userptr_vma.
> Since hmm_range_fault is used to pin userptr VMAs, it is possible
> to map those VMAs from the debugger context.
>
> v2: kmap to kmap_local (Maciej)
> v3: simplified locking, moved to xe_vm.c (Matthew)
I understand why it was requested that the xe_uvma_access code move
from xe_eudebug.c to xe_vm.c, but if I may ask, do we have any plans
for using the new functionality outside of the eudebugger? It might
be good to add some additional users while we're implementing the
function for eudebug. Though I suppose at that point we'd want to
land the xe_uvma_access code in drm-xe-next first, before fixing up
xe_eudebug_vma_access to support userptr VMA access.
I personally won't block on this, though it might come up when
upstreaming.
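As an illustration only: if another user did show up, I'd expect a
caller to look roughly like the sketch below. This is hypothetical
(dump_uvma_head() is a made-up name); it only relies on the
xe_uvma_access() signature added by this patch plus the existing
xe_vma_is_userptr()/to_userptr_vma() helpers, if I'm reading those
right.

static int dump_uvma_head(struct xe_vma *vma, void *out, u64 len)
{
	/* Only userptr VMAs are backed by hmm_range_fault-pinned pages. */
	if (!xe_vma_is_userptr(vma))
		return -EINVAL;

	/*
	 * Read 'len' bytes from the start of the VMA (write = false);
	 * returns the number of bytes copied or a negative error code.
	 */
	return xe_uvma_access(to_userptr_vma(vma), 0, out, len, false);
}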
Reviewed-by: Jonathan Cavitt <jonathan.cavitt at intel.com>
-Jonathan Cavitt
>
> Signed-off-by: Andrzej Hajda <andrzej.hajda at intel.com>
> Signed-off-by: Maciej Patelczyk <maciej.patelczyk at intel.com>
> Signed-off-by: Mika Kuoppala <mika.kuoppala at linux.intel.com>
> ---
> drivers/gpu/drm/xe/xe_eudebug.c | 2 +-
> drivers/gpu/drm/xe/xe_vm.c | 47 +++++++++++++++++++++++++++++++++
> drivers/gpu/drm/xe/xe_vm.h | 3 +++
> 3 files changed, 51 insertions(+), 1 deletion(-)
>
> diff --git a/drivers/gpu/drm/xe/xe_eudebug.c b/drivers/gpu/drm/xe/xe_eudebug.c
> index 62be0879e651..baed2812438a 100644
> --- a/drivers/gpu/drm/xe/xe_eudebug.c
> +++ b/drivers/gpu/drm/xe/xe_eudebug.c
> @@ -2905,7 +2905,7 @@ static int xe_eudebug_vma_access(struct xe_vma *vma, u64 offset,
> if (bo)
> return xe_eudebug_bovma_access(bo, offset, buf, bytes, write);
>
> - return -EOPNOTSUPP;
> + return xe_uvma_access(to_userptr_vma(vma), offset, buf, bytes, write);
> }
>
> static int xe_eudebug_vm_access(struct xe_vm *vm, u64 offset,
> diff --git a/drivers/gpu/drm/xe/xe_vm.c b/drivers/gpu/drm/xe/xe_vm.c
> index b117a892e386..f1aadd5068a6 100644
> --- a/drivers/gpu/drm/xe/xe_vm.c
> +++ b/drivers/gpu/drm/xe/xe_vm.c
> @@ -3429,3 +3429,50 @@ void xe_vm_snapshot_free(struct xe_vm_snapshot *snap)
> }
> kvfree(snap);
> }
> +
> +int xe_uvma_access(struct xe_userptr_vma *uvma, u64 offset,
> + void *buf, u64 len, bool write)
> +{
> + struct xe_vm *vm = xe_vma_vm(&uvma->vma);
> + struct xe_userptr *up = &uvma->userptr;
> + struct xe_res_cursor cur = {};
> + int cur_len, ret = 0;
> +
> + while (true) {
> + down_read(&vm->userptr.notifier_lock);
> + if (!xe_vma_userptr_check_repin(uvma))
> + break;
> +
> + spin_lock(&vm->userptr.invalidated_lock);
> + list_del_init(&uvma->userptr.invalidate_link);
> + spin_unlock(&vm->userptr.invalidated_lock);
> +
> + up_read(&vm->userptr.notifier_lock);
> + ret = xe_vma_userptr_pin_pages(uvma);
> + if (ret)
> + return ret;
> + }
> +
> + if (!up->sg) {
> + ret = -EINVAL;
> + goto out_unlock_notifier;
> + }
> +
> + for (xe_res_first_sg(up->sg, offset, len, &cur); cur.remaining;
> + xe_res_next(&cur, cur_len)) {
> + void *ptr = kmap_local_page(sg_page(cur.sgl)) + cur.start;
> +
> + cur_len = min(cur.size, cur.remaining);
> + if (write)
> + memcpy(ptr, buf, cur_len);
> + else
> + memcpy(buf, ptr, cur_len);
> + kunmap_local(ptr);
> + buf += cur_len;
> + }
> + ret = len;
> +
> +out_unlock_notifier:
> + up_read(&vm->userptr.notifier_lock);
> + return ret;
> +}
> diff --git a/drivers/gpu/drm/xe/xe_vm.h b/drivers/gpu/drm/xe/xe_vm.h
> index c864dba35e1d..99b9a9b011de 100644
> --- a/drivers/gpu/drm/xe/xe_vm.h
> +++ b/drivers/gpu/drm/xe/xe_vm.h
> @@ -281,3 +281,6 @@ struct xe_vm_snapshot *xe_vm_snapshot_capture(struct xe_vm *vm);
> void xe_vm_snapshot_capture_delayed(struct xe_vm_snapshot *snap);
> void xe_vm_snapshot_print(struct xe_vm_snapshot *snap, struct drm_printer *p);
> void xe_vm_snapshot_free(struct xe_vm_snapshot *snap);
> +
> +int xe_uvma_access(struct xe_userptr_vma *uvma, u64 offset,
> + void *buf, u64 len, bool write);
> --
> 2.34.1
>
>