[PATCH 12/18] drm/xe/eudebug: implement userptr_vma access

Matthew Brost matthew.brost at intel.com
Sat Oct 12 02:55:09 UTC 2024


On Sat, Oct 12, 2024 at 02:39:39AM +0000, Matthew Brost wrote:
> On Tue, Oct 01, 2024 at 05:43:00PM +0300, Mika Kuoppala wrote:
> > From: Andrzej Hajda <andrzej.hajda at intel.com>
> > 
> > Debugger needs to read/write program's vmas including userptr_vma.
> > Since hmm_range_fault is used to pin userptr vmas, it is possible
> > to map those vmas from debugger context.
> > 
> > v2: pin pages vs notifier, move to vm.c (Matthew)
> > 
> > Signed-off-by: Andrzej Hajda <andrzej.hajda at intel.com>
> > Signed-off-by: Maciej Patelczyk <maciej.patelczyk at intel.com>
> > Signed-off-by: Mika Kuoppala <mika.kuoppala at linux.intel.com>
> > Reviewed-by: Jonathan Cavitt <jonathan.cavitt at intel.com>
> > ---
> >  drivers/gpu/drm/xe/xe_eudebug.c |  2 +-
> >  drivers/gpu/drm/xe/xe_vm.c      | 47 +++++++++++++++++++++++++++++++++
> >  drivers/gpu/drm/xe/xe_vm.h      |  3 +++
> >  3 files changed, 51 insertions(+), 1 deletion(-)
> > 
> > diff --git a/drivers/gpu/drm/xe/xe_eudebug.c b/drivers/gpu/drm/xe/xe_eudebug.c
> > index edad6d533d0b..b09d7414cfe3 100644
> > --- a/drivers/gpu/drm/xe/xe_eudebug.c
> > +++ b/drivers/gpu/drm/xe/xe_eudebug.c
> > @@ -3023,7 +3023,7 @@ static int xe_eudebug_vma_access(struct xe_vma *vma, u64 offset,
> >  		return ret;
> >  	}
> >  
> > -	return -EINVAL;
> > +	return xe_uvma_access(to_userptr_vma(vma), offset, buf, bytes, write);
> >  }
> >  
> >  static int xe_eudebug_vm_access(struct xe_vm *vm, u64 offset,
> > diff --git a/drivers/gpu/drm/xe/xe_vm.c b/drivers/gpu/drm/xe/xe_vm.c
> > index a836dfc5a86f..5f891e76993b 100644
> > --- a/drivers/gpu/drm/xe/xe_vm.c
> > +++ b/drivers/gpu/drm/xe/xe_vm.c
> > @@ -3421,3 +3421,50 @@ void xe_vm_snapshot_free(struct xe_vm_snapshot *snap)
> >  	}
> >  	kvfree(snap);
> >  }
> > +
> > +int xe_uvma_access(struct xe_userptr_vma *uvma, u64 offset,
> > +		   void *buf, u64 len, bool write)
> > +{
> > +	struct xe_vm *vm = xe_vma_vm(&uvma->vma);
> > +	struct xe_userptr *up = &uvma->userptr;
> > +	struct xe_res_cursor cur = {};
> > +	int cur_len, ret = 0;
> > +
> > +	while (true) {
> > +		down_read(&vm->userptr.notifier_lock);
> > +		if (!xe_vma_userptr_check_repin(uvma))
> > +			break;
> > +
> > +		spin_lock(&vm->userptr.invalidated_lock);
> > +		list_del_init(&uvma->userptr.invalidate_link);
> > +		spin_unlock(&vm->userptr.invalidated_lock);
> > +
> > +		up_read(&vm->userptr.notifier_lock);
> > +		ret = xe_vma_userptr_pin_pages(uvma);
> > +		if (ret)
> > +			return ret;
> > +	}
> > +
> > +	if (!up->sg) {
> > +		ret = -EINVAL;
> > +		goto out_unlock_notifier;
> > +	}
> > +
> > +	for (xe_res_first_sg(up->sg, offset, len, &cur); cur.remaining;
> > +	     xe_res_next(&cur, cur_len)) {
> 
> This doesn't look right after reviewing [1].
> 
> A SG list is a collection of IOVAs which may map non-contiguous
> physical pages.
> 

This is unclear, let me try again.

A SG list is a collection of IOVAs, aka DMA addresses. This is the view
from the device into the CPU's memory. Each IOVA may map a non-contiguous
set of physical pages if the IOMMU is on. Thus you can't just kmap the
first page of an IOVA segment and assume you know where the 2nd page is.

Hope this makes a bit more sense.
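
Roughly, something along these lines is what I have in mind -- just a
sketch, assuming the debugger path keeps hold of the struct page array
filled in by hmm_range_fault (the pages/npages parameters and the
function name here are made up, not existing xe code):

#include <linux/highmem.h>
#include <linux/mm.h>
#include <linux/string.h>

/*
 * Copy to/from a userptr range one CPU page at a time. @pages is the
 * struct page array produced by hmm_range_fault for the userptr; no
 * assumption is made about physical contiguity between pages.
 */
static int xe_uvma_access_pages(struct page **pages, unsigned long npages,
				u64 offset, void *buf, u64 len, bool write)
{
	u64 copied = 0;

	while (copied < len) {
		unsigned long idx = (offset + copied) >> PAGE_SHIFT;
		unsigned long pg_off = (offset + copied) & ~PAGE_MASK;
		u64 cur_len = min_t(u64, len - copied, PAGE_SIZE - pg_off);
		void *ptr;

		if (idx >= npages)
			return -EINVAL;

		/*
		 * kmap_local_page() maps exactly one page, so the copy
		 * length is clamped to the page boundary above.
		 */
		ptr = kmap_local_page(pages[idx]) + pg_off;
		if (write)
			memcpy(ptr, buf + copied, cur_len);
		else
			memcpy(buf + copied, ptr, cur_len);
		kunmap_local(ptr);

		copied += cur_len;
	}

	return 0;
}

With something like that, the eudebug access path wouldn't depend on the
DMA layout of the SG list at all.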

Matt
 
> I'm pretty sure that if the EU debugger is enabled you are going to have
> to save off all the pages returned from hmm_range_fault and kmap each
> page individually.
> 
> Matt
> 
> [1] https://patchwork.freedesktop.org/patch/619324/?series=139780&rev=3
> 
> > +		void *ptr = kmap_local_page(sg_page(cur.sgl)) + cur.start;
> > +
> > +		cur_len = min(cur.size, cur.remaining);
> > +		if (write)
> > +			memcpy(ptr, buf, cur_len);
> > +		else
> > +			memcpy(buf, ptr, cur_len);
> > +		kunmap_local(ptr);
> > +		buf += cur_len;
> > +	}
> > +	ret = len;
> > +
> > +out_unlock_notifier:
> > +	up_read(&vm->userptr.notifier_lock);
> > +	return ret;
> > +}
> > diff --git a/drivers/gpu/drm/xe/xe_vm.h b/drivers/gpu/drm/xe/xe_vm.h
> > index c864dba35e1d..99b9a9b011de 100644
> > --- a/drivers/gpu/drm/xe/xe_vm.h
> > +++ b/drivers/gpu/drm/xe/xe_vm.h
> > @@ -281,3 +281,6 @@ struct xe_vm_snapshot *xe_vm_snapshot_capture(struct xe_vm *vm);
> >  void xe_vm_snapshot_capture_delayed(struct xe_vm_snapshot *snap);
> >  void xe_vm_snapshot_print(struct xe_vm_snapshot *snap, struct drm_printer *p);
> >  void xe_vm_snapshot_free(struct xe_vm_snapshot *snap);
> > +
> > +int xe_uvma_access(struct xe_userptr_vma *uvma, u64 offset,
> > +		   void *buf, u64 len, bool write);
> > -- 
> > 2.34.1
> > 

