[Intel-xe] [PATCH] drm/xe: Enable scratch page when page fault is enabled
Chang, Yu bruce
yu.bruce.chang at intel.com
Mon Aug 28 23:44:47 UTC 2023
> -----Original Message-----
> From: Intel-xe <intel-xe-bounces at lists.freedesktop.org> On Behalf Of Chang, Yu
> bruce
> Sent: Monday, August 28, 2023 3:35 PM
> To: Brost, Matthew <matthew.brost at intel.com>
> Cc: Summers, Stuart <stuart.summers at intel.com>; intel-
> xe at lists.freedesktop.org
> Subject: Re: [Intel-xe] [PATCH] drm/xe: Enable scratch page when page fault is
> enabled
>
>
>
> > -----Original Message-----
> > From: Brost, Matthew <matthew.brost at intel.com>
> > Sent: Monday, August 28, 2023 12:22 PM
> > To: Chang, Yu bruce <yu.bruce.chang at intel.com>
> > Cc: intel-xe at lists.freedesktop.org; Zeng, Oak <oak.zeng at intel.com>;
> > Welty, Brian <brian.welty at intel.com>; Vishwanathapura, Niranjana
> > <niranjana.vishwanathapura at intel.com>; Summers, Stuart
> > <stuart.summers at intel.com>
> > Subject: Re: [PATCH] drm/xe: Enable scratch page when page fault is
> > enabled
> >
> > On Sat, Aug 26, 2023 at 12:14:12AM +0000, Chang, Bruce wrote:
> > > i915 can use a scratch page even when page fault is enabled; this
> > > patch ports that feature over.
> > >
> >
> > I think we need to ask why we support this before merging it.
> > "Because i915 does this" is not a valid answer. Please explain why a
> > UMD needs this feature.
> >
> Sure, mainly for EU debugger support and any potential prefetch WA.
>
> Will add this in the commit comment.
>
> > > The current i915 solution changes page tables directly, which may be
> > > hard to upstream, so following the existing i915 approach would
> > > require a more complex solution to fit the current Xe framework.
> > >
> > > This patch aims for minimal impact on the existing driver while
> > > still enabling scratch page support.
> > >
> > > So, the idea is to bind a scratch vma if the page fault is from an
> > > invalid access. This patch takes advantage of the null pte for this
> > > purpose; we may introduce a dedicated scratch vma type if needed.
> > > After the bind, the user app can continue to run without causing a
> > > fatal failure or a reset-and-stop.
> > >
> > > If the app later binds this scratch vma's range to a valid address,
> > > the bind will fail; this patch handles the failure and unbinds the
> > > scratch vma[s], so that the user bind can be retried at the valid address.
> > >
> > > This patch only kicks in when there is a failure in both the page
> > > fault and the bind paths, so it should have no impact on the existing
> > > code path. On the other hand, it uses actual page tables instead of
> > > special scratch page tables, which makes it possible to skip TLB
> > > invalidation on unbind while all upper-level page tables are still in use.
> > >
> > > Tested with new scratch IGT tests, which will be sent out for review.
> > >
> > > Cc: Oak Zeng <oak.zeng at intel.com>
> > > Cc: Brian Welty <brian.welty at intel.com>
> > > Cc: Niranjana Vishwanathapura <niranjana.vishwanathapura at intel.com>
> > > Cc: Stuart Summers <stuart.summers at intel.com>
> > > Cc: Matthew Brost <matthew.brost at intel.com>
> > > ---
> > > drivers/gpu/drm/xe/xe_gt_pagefault.c | 9 ++++--
> > > drivers/gpu/drm/xe/xe_vm.c | 48 +++++++++++++++++++++++-----
> > > drivers/gpu/drm/xe/xe_vm.h | 2 ++
> > > 3 files changed, 49 insertions(+), 10 deletions(-)
> > >
> > > diff --git a/drivers/gpu/drm/xe/xe_gt_pagefault.c
> > > b/drivers/gpu/drm/xe/xe_gt_pagefault.c
> > > index b6f781b3d9d7..524b38df3d7a 100644
> > > --- a/drivers/gpu/drm/xe/xe_gt_pagefault.c
> > > +++ b/drivers/gpu/drm/xe/xe_gt_pagefault.c
> > > @@ -137,8 +137,13 @@ static int handle_pagefault(struct xe_gt *gt,
> > > struct
> > pagefault *pf)
> > > write_locked = true;
> > > vma = lookup_vma(vm, pf->page_addr);
> > > if (!vma) {
> > > - ret = -EINVAL;
> > > - goto unlock_vm;
> > > + if (vm->flags & XE_VM_FLAG_SCRATCH_PAGE)
> > > + vma = xe_bind_scratch_vma(vm, pf->page_addr,
> > SZ_64K);
> >
> > I think this would be better.
> >
> > s/xe_bind_scratch_vma/xe_vm_create_scratch_vma
> >
> Will make the change.
>
> >
> > > +
> > > + if (!vma) {
> > > + ret = -EINVAL;
> > > + goto unlock_vm;
> > > + }
> > > }
> > >
> > > if (!xe_vma_is_userptr(vma) || !xe_vma_userptr_check_repin(vma)) {
> > > diff --git a/drivers/gpu/drm/xe/xe_vm.c b/drivers/gpu/drm/xe/xe_vm.c
> > > index 389ac5ba8ddf..4c3d5d781b58 100644
> > > --- a/drivers/gpu/drm/xe/xe_vm.c
> > > +++ b/drivers/gpu/drm/xe/xe_vm.c
> > > @@ -1262,7 +1262,8 @@ struct xe_vm *xe_vm_create(struct xe_device
> > > *xe,
> > u32 flags)
> > > }
> > > }
> > >
> > > - if (flags & XE_VM_FLAG_SCRATCH_PAGE) {
> > > + if (flags & XE_VM_FLAG_SCRATCH_PAGE &&
> > > + (!(flags & XE_VM_FLAG_FAULT_MODE))) {
> > > for_each_tile(tile, xe, id) {
> > > if (!vm->pt_root[id])
> > > continue;
> > > @@ -1998,10 +1999,6 @@ int xe_vm_create_ioctl(struct drm_device
> > > *dev,
> > void *data,
> > > if (XE_IOCTL_DBG(xe, args->flags & ~ALL_DRM_XE_VM_CREATE_FLAGS))
> > > return -EINVAL;
> > >
> > > - if (XE_IOCTL_DBG(xe, args->flags &
> > DRM_XE_VM_CREATE_SCRATCH_PAGE &&
> > > - args->flags & DRM_XE_VM_CREATE_FAULT_MODE))
> > > - return -EINVAL;
> > > -
> > > if (XE_IOCTL_DBG(xe, args->flags &
> > DRM_XE_VM_CREATE_COMPUTE_MODE &&
> > > args->flags & DRM_XE_VM_CREATE_FAULT_MODE))
> > > return -EINVAL;
> > > @@ -2783,6 +2780,39 @@ static int __xe_vma_op_execute(struct xe_vm
> > > *vm,
> > struct xe_vma *vma,
> > > return err;
> > > }
> > >
> > > +struct xe_vma *xe_bind_scratch_vma(struct xe_vm *vm, u64 addr, u64
> > > +size) {
> >
> > Nit, the size argument really isn't needed.
> >
> Sure, can hard code it.
>
> > > + struct xe_vma *vma = 0;
> > > +
> > > + if (!vm->size)
> >
> > xe_vm_is_closed_or_banned rather than vm->size check.
> >
> > > + return 0;
> >
> > Probably an ERR_PTR with the correct return codes.
> >
> Will add this error check.
>
> > > +
> > > + vma = xe_vma_create(vm, NULL, 0, addr, addr + size - 1, false, true, 0);
> > > + if (!vma)
> > > + return 0;
> > > + xe_vm_insert_vma(vm, vma);
> > > +
> >
> > Need to check the return of xe_vm_insert_vma, and probably WARN on
> > it, as it shouldn't ever fail.
> >
> Will change it
>
> > > + /* fault will handle the bind */
> > > +
> > > + return vma;
> > > +}
> > > +
> > > +int xe_unbind_scratch_vma(struct xe_vm *vm, u64 addr, u64 range) {
> > > + struct xe_vma *vma;
> > > +
> > > + vma = xe_vm_find_overlapping_vma(vm, addr, range);
> > > + if (!vma)
> > > + return -ENXIO;
> > > +
> > > + if (xe_vma_is_null(vma)) {
> > > + prep_vma_destroy(vm, vma, true);
> > > + xe_vm_unbind(vm, vma, NULL, NULL, 0, NULL, true, false);
> > > + }
> > > +
> > > + return 0;
> > > +}
> > > +
> > > static int xe_vma_op_execute(struct xe_vm *vm, struct xe_vma_op
> > > *op) {
> > > int ret = 0;
> > > @@ -3205,7 +3235,6 @@ int xe_vm_bind_ioctl(struct drm_device *dev,
> > > void
> > *data, struct drm_file *file)
> > > err = vm_bind_ioctl_check_args(xe, args, &bind_ops, &async);
> > > if (err)
> > > return err;
> > > -
> >
> > Not related.
> >
> > > if (args->exec_queue_id) {
> > > q = xe_exec_queue_lookup(xef, args->exec_queue_id);
> > > if (XE_IOCTL_DBG(xe, !q)) {
> > > @@ -3352,10 +3381,13 @@ int xe_vm_bind_ioctl(struct drm_device *dev,
> > > void
> > *data, struct drm_file *file)
> > > u64 range = bind_ops[i].range;
> > > u64 addr = bind_ops[i].addr;
> > > u32 op = bind_ops[i].op;
> > > -
> > > +retry:
> > > err = vm_bind_ioctl_lookup_vma(vm, bos[i], addr, range, op);
> > > - if (err)
> > > + if (err) {
> > > + if (!xe_unbind_scratch_vma(vm, addr, range))
> > > + goto retry;
> > > goto free_syncs;
> > > + }
> >
> > You don't need this change, GPUVA handles all of this (e.g. it will
> > create ops to unbind the old VMA, bind the new one).
> >
> > Matt
> >
>
> I was also concerned about this change. Good to know GPUVA can help here!
> Then the change will be a lot cleaner.
>
> Thanks!
> Bruce
>
It seems
err = vm_bind_ioctl_lookup_vma(vm, bos[i], addr, range, op);
will prevent GPUVA from handling this, since it errors out first. If I remove it, it seems to work.
-Bruce
> > > }
> > >
> > > for (i = 0; i < args->num_binds; ++i) { diff --git
> > > a/drivers/gpu/drm/xe/xe_vm.h b/drivers/gpu/drm/xe/xe_vm.h index
> > > 6de6e3edb24a..6447bed427b1 100644
> > > --- a/drivers/gpu/drm/xe/xe_vm.h
> > > +++ b/drivers/gpu/drm/xe/xe_vm.h
> > > @@ -212,6 +212,8 @@ int xe_vma_userptr_pin_pages(struct xe_vma
> > > *vma);
> > >
> > > int xe_vma_userptr_check_repin(struct xe_vma *vma);
> > >
> > > +struct xe_vma *xe_bind_scratch_vma(struct xe_vm *vm, u64 addr, u64
> > > +size);
> > > +
> > > /*
> > > * XE_ONSTACK_TV is used to size the tv_onstack array that is input
> > > * to xe_vm_lock_dma_resv() and xe_vm_unlock_dma_resv().
> > > --
> > > 2.25.1
> > >