[PATCH 14/26] drm/xe/eudebug: implement userptr_vma access
Thomas Hellström
thomas.hellstrom at linux.intel.com
Thu Dec 12 08:49:24 UTC 2024
On Mon, 2024-12-09 at 16:31 +0100, Simona Vetter wrote:
> On Mon, Dec 09, 2024 at 03:03:04PM +0100, Christian König wrote:
> > Am 09.12.24 um 14:33 schrieb Mika Kuoppala:
> > > From: Andrzej Hajda <andrzej.hajda at intel.com>
> > >
> > > The debugger needs to read/write the program's vmas, including
> > > userptr_vma. Since hmm_range_fault is used to pin userptr vmas, it
> > > is possible to map those vmas from the debugger context.
> >
> > Oh, this implementation is extremely questionable as well. Adding
> > LKML and the MM list.
> >
> > First of all, hmm_range_fault() does *not* pin anything!
> >
> > In other words, you don't have a page reference when the function
> > returns, but rather just a sequence number you can check for
> > modifications.
>
> I think it's all there: it holds the invalidation lock during the
> critical access section, drops it when reacquiring pages, and retries
> until it works.
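For reference, the usual hmm_range_fault() flow looks roughly like the
sketch below (not the xe code; the helper name and the notifier-lock
parameter are placeholders): the fault fills in pfns plus a notifier
sequence number, nothing is pinned, and the pfns may only be used while
the driver's notifier lock is held and mmu_interval_read_retry()
confirms the sequence is still current.

#include <linux/hmm.h>
#include <linux/mm.h>
#include <linux/mmu_notifier.h>
#include <linux/rwsem.h>

/* Hypothetical helper, for illustration only. */
static int sketch_userptr_get_pages(struct mmu_interval_notifier *notifier,
				    struct mm_struct *mm,
				    struct rw_semaphore *notifier_lock,
				    unsigned long start, unsigned long end,
				    unsigned long *pfns)
{
	struct hmm_range range = {
		.notifier = notifier,
		.start = start,
		.end = end,
		.hmm_pfns = pfns,
		/* add HMM_PFN_REQ_WRITE here if write access is needed */
		.default_flags = HMM_PFN_REQ_FAULT,
	};
	int ret;

again:
	/* Snapshot the sequence number; no page references are taken. */
	range.notifier_seq = mmu_interval_read_begin(notifier);

	mmap_read_lock(mm);
	ret = hmm_range_fault(&range);
	mmap_read_unlock(mm);
	if (ret == -EBUSY)
		goto again;	/* raced with an invalidation, refault */
	if (ret)
		return ret;

	/*
	 * The pfns are only safe to use under the driver's notifier lock,
	 * and only as long as mmu_interval_read_retry() says the sequence
	 * number is unchanged.
	 */
	down_read(notifier_lock);
	if (mmu_interval_read_retry(notifier, range.notifier_seq)) {
		up_read(notifier_lock);
		goto again;
	}

	/* ... access the pages here, then drop the lock ... */

	up_read(notifier_lock);
	return 0;
}

The xe_vm_userptr_access() in the patch below follows essentially the
same shape, with xe_vma_userptr_pin_pages() doing the hmm_range_fault()
call and xe_vma_userptr_check_repin() doing the revalidation under
vm->userptr.notifier_lock.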
>
> I think the issue is more that everyone hand-rolls userptr. It's
> probably time we standardize that and put it into gpuvm as an optional
> part, with consistent locking, naming (like not calling it _pin_pages
> when it's unpinned userptr), kerneldoc and all the nice things, so that
> we stop consistently getting confused by other drivers' userptr code.
>
> I think that was on the plan originally as an eventual step; I guess
> it's time to pump that up. Matt/Thomas, thoughts?
It looks like we have this planned and ongoing, but there are some
complications and open questions.
1) A drm_gpuvm implementation would be based on vma userptrs. It would
be pretty straightforward to derive from xe's current implementation
plus, as you say, some renaming (rough sketch below).
2) Current Intel work to land this on the drm level is based on
drm_gpusvm (minus migration to VRAM). I'm not fully sure yet how this
will integrate with drm_gpuvm.
3) Christian mentioned a plan to have a common userptr implementation
based on drm_exec. I figure that would be bo-based, like the amdgpu
implementation still is. Possibly i915 would be interested in this, but
I think any VM_BIND-based driver would want to use the drm_gpuvm /
drm_gpusvm implementation, which is also typically O(1), since userptrs
are considered vm-local.
Ideas / suggestions welcome
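To make 1) above a bit more concrete, here is a purely hypothetical
sketch of what a gpuvm-level userptr object could carry (none of these
names exist today; this is only meant to illustrate the shape, and the
page helper is deliberately not called "pin_pages" since nothing is
pinned):

/* Hypothetical, for illustration only - no such drm_gpuvm API exists. */
struct drm_gpuvm_userptr {
	struct drm_gpuva va;			/* VM-local mapping */
	struct mmu_interval_notifier notifier;	/* invalidation tracking */
	unsigned long notifier_seq;		/* seq from the last get_pages */
	struct sg_table *sgt;			/* backing pages, not pinned */
};

/* Fault in / revalidate the backing pages via hmm_range_fault(). */
int drm_gpuvm_userptr_get_pages(struct drm_gpuvm_userptr *uptr);

/*
 * True if the pages from the last get_pages call are still valid; to be
 * checked (and acted on) under the VM's notifier lock.
 */
bool drm_gpuvm_userptr_pages_valid(struct drm_gpuvm_userptr *uptr);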
> -Sima
>
> >
> > > v2: pin pages vs notifier, move to vm.c (Matthew)
> > > v3: - iterate over system pages instead of DMA, fixes iommu enabled
> > >     - s/xe_uvma_access/xe_vm_uvma_access/ (Matt)
> > >
> > > Signed-off-by: Andrzej Hajda <andrzej.hajda at intel.com>
> > > Signed-off-by: Maciej Patelczyk <maciej.patelczyk at intel.com>
> > > Signed-off-by: Mika Kuoppala <mika.kuoppala at linux.intel.com>
> > > Reviewed-by: Jonathan Cavitt <jonathan.cavitt at intel.com> #v1
> > > ---
> > >  drivers/gpu/drm/xe/xe_eudebug.c |  3 ++-
> > >  drivers/gpu/drm/xe/xe_vm.c      | 47 +++++++++++++++++++++++++++++++++
> > >  drivers/gpu/drm/xe/xe_vm.h      |  3 +++
> > >  3 files changed, 52 insertions(+), 1 deletion(-)
> > >
> > > diff --git a/drivers/gpu/drm/xe/xe_eudebug.c b/drivers/gpu/drm/xe/xe_eudebug.c
> > > index 9d87df75348b..e5949e4dcad8 100644
> > > --- a/drivers/gpu/drm/xe/xe_eudebug.c
> > > +++ b/drivers/gpu/drm/xe/xe_eudebug.c
> > > @@ -3076,7 +3076,8 @@ static int xe_eudebug_vma_access(struct xe_vma *vma, u64 offset_in_vma,
> > >  		return ret;
> > >  	}
> > >
> > > -	return -EINVAL;
> > > +	return xe_vm_userptr_access(to_userptr_vma(vma), offset_in_vma,
> > > +				    buf, bytes, write);
> > >  }
> > >
> > >  static int xe_eudebug_vm_access(struct xe_vm *vm, u64 offset,
> > > diff --git a/drivers/gpu/drm/xe/xe_vm.c b/drivers/gpu/drm/xe/xe_vm.c
> > > index 0f17bc8b627b..224ff9e16941 100644
> > > --- a/drivers/gpu/drm/xe/xe_vm.c
> > > +++ b/drivers/gpu/drm/xe/xe_vm.c
> > > @@ -3414,3 +3414,50 @@ void xe_vm_snapshot_free(struct xe_vm_snapshot *snap)
> > >  	}
> > >  	kvfree(snap);
> > >  }
> > > +
> > > +int xe_vm_userptr_access(struct xe_userptr_vma *uvma, u64 offset,
> > > +			 void *buf, u64 len, bool write)
> > > +{
> > > +	struct xe_vm *vm = xe_vma_vm(&uvma->vma);
> > > +	struct xe_userptr *up = &uvma->userptr;
> > > +	struct xe_res_cursor cur = {};
> > > +	int cur_len, ret = 0;
> > > +
> > > +	while (true) {
> > > +		down_read(&vm->userptr.notifier_lock);
> > > +		if (!xe_vma_userptr_check_repin(uvma))
> > > +			break;
> > > +
> > > +		spin_lock(&vm->userptr.invalidated_lock);
> > > +		list_del_init(&uvma->userptr.invalidate_link);
> > > +		spin_unlock(&vm->userptr.invalidated_lock);
> > > +
> > > +		up_read(&vm->userptr.notifier_lock);
> > > +		ret = xe_vma_userptr_pin_pages(uvma);
> > > +		if (ret)
> > > +			return ret;
> > > +	}
> > > +
> > > +	if (!up->sg) {
> > > +		ret = -EINVAL;
> > > +		goto out_unlock_notifier;
> > > +	}
> > > +
> > > +	for (xe_res_first_sg_system(up->sg, offset, len, &cur); cur.remaining;
> > > +	     xe_res_next(&cur, cur_len)) {
> > > +		void *ptr = kmap_local_page(sg_page(cur.sgl)) + cur.start;
> >
> > The interface basically creates a side channel to access userptrs in
> > the way a userspace application would, without actually going through
> > userspace.
> >
> > That is generally not something a device driver should ever do, as
> > far as I can see.
> >
> > > +
> > > +		cur_len = min(cur.size, cur.remaining);
> > > +		if (write)
> > > +			memcpy(ptr, buf, cur_len);
> > > +		else
> > > +			memcpy(buf, ptr, cur_len);
> > > +		kunmap_local(ptr);
> > > +		buf += cur_len;
> > > +	}
> > > +	ret = len;
> > > +
> > > +out_unlock_notifier:
> > > +	up_read(&vm->userptr.notifier_lock);
> >
> > I just strongly hope that this will prevent the mapping from
> > changing.
> >
> > Regards,
> > Christian.
> >
> > > +	return ret;
> > > +}
> > > diff --git a/drivers/gpu/drm/xe/xe_vm.h b/drivers/gpu/drm/xe/xe_vm.h
> > > index 23adb7442881..372ad40ad67f 100644
> > > --- a/drivers/gpu/drm/xe/xe_vm.h
> > > +++ b/drivers/gpu/drm/xe/xe_vm.h
> > > @@ -280,3 +280,6 @@ struct xe_vm_snapshot *xe_vm_snapshot_capture(struct xe_vm *vm);
> > >  void xe_vm_snapshot_capture_delayed(struct xe_vm_snapshot *snap);
> > >  void xe_vm_snapshot_print(struct xe_vm_snapshot *snap, struct drm_printer *p);
> > >  void xe_vm_snapshot_free(struct xe_vm_snapshot *snap);
> > > +
> > > +int xe_vm_userptr_access(struct xe_userptr_vma *uvma, u64 offset,
> > > +			 void *buf, u64 len, bool write);
> >
>