[PATCH v3 19/19] drm/xe/bo: Update atomic_access attribute on madvise
Matthew Brost
matthew.brost at intel.com
Thu May 29 18:30:07 UTC 2025
On Thu, May 29, 2025 at 11:24:28AM -0700, Matthew Brost wrote:
> On Thu, May 29, 2025 at 08:33:39AM +0530, Ghimiray, Himal Prasad wrote:
> >
> >
> > On 29-05-2025 05:16, Matthew Brost wrote:
> > > On Tue, May 27, 2025 at 10:10:03PM +0530, Himal Prasad Ghimiray wrote:
> > > > Update bo_atomic_access based on user-provided input and use it to
> > > > determine whether to migrate to smem during a CPU fault.
> > > >
> > > > v2 (Matthew Brost)
> > > > - Avoid cpu unmapping if bo is already in smem
> > > > - check atomics on smem too for ioctl
> > > > - Add comments
> > > >
> > > > Signed-off-by: Himal Prasad Ghimiray <himal.prasad.ghimiray at intel.com>
> > > > ---
> > > > drivers/gpu/drm/xe/xe_bo.c | 21 ++++++++++++--
> > > > drivers/gpu/drm/xe/xe_vm.c | 11 ++++++--
> > > > drivers/gpu/drm/xe/xe_vm_madvise.c | 45 ++++++++++++++++++++++++++++--
> > > > 3 files changed, 69 insertions(+), 8 deletions(-)
> > > >
> > > > diff --git a/drivers/gpu/drm/xe/xe_bo.c b/drivers/gpu/drm/xe/xe_bo.c
> > > > index d99d91fe8aa9..9072e8ae3f3e 100644
> > > > --- a/drivers/gpu/drm/xe/xe_bo.c
> > > > +++ b/drivers/gpu/drm/xe/xe_bo.c
> > > > @@ -1662,6 +1662,12 @@ static void xe_gem_object_close(struct drm_gem_object *obj,
> > > > }
> > > > }
> > > > +static bool should_migrate_to_smem(struct xe_bo *bo)
> > > > +{
> > >
> > > xe_bo_assert_held, more on that in reply to previous patch.
> >
> > Sure
> >
> > >
> > > > + return bo->attr.atomic_access == DRM_XE_VMA_ATOMIC_GLOBAL ||
> > > > + bo->attr.atomic_access == DRM_XE_VMA_ATOMIC_CPU;
> > > > +}
> > > > +
> > >
> > > Hmm, this is tricky. I guess this means shared atomics on BOs do not
> > > just work whereas for SVM they do (i.e., DRM_XE_VMA_ATOMIC_UNDEFINED
> > > means atomics do not work for BOs but for SVM they do). I suppose this
> > > is the current behavior. I think this will need to be documented in the
> > > uAPI kernel doc.
> >
> > Makes sense
> >
> > >
> > > > static vm_fault_t xe_gem_fault(struct vm_fault *vmf)
> > > > {
> > > > struct ttm_buffer_object *tbo = vmf->vma->vm_private_data;
> > > > @@ -1670,7 +1676,7 @@ static vm_fault_t xe_gem_fault(struct vm_fault *vmf)
> > > > struct xe_bo *bo = ttm_to_xe_bo(tbo);
> > > > bool needs_rpm = bo->flags & XE_BO_FLAG_VRAM_MASK;
> > > > vm_fault_t ret;
> > > > - int idx;
> > > > + int idx, r = 0;
> > > > if (needs_rpm)
> > > > xe_pm_runtime_get(xe);
> > > > @@ -1682,8 +1688,17 @@ static vm_fault_t xe_gem_fault(struct vm_fault *vmf)
> > > > if (drm_dev_enter(ddev, &idx)) {
> > > > trace_xe_bo_cpu_fault(bo);
> > > > - ret = ttm_bo_vm_fault_reserved(vmf, vmf->vma->vm_page_prot,
> > > > - TTM_BO_VM_NUM_PREFAULT);
> > > > + if (should_migrate_to_smem(bo)) {
> > > > + r = xe_bo_migrate(bo, XE_PL_TT);
> > > > + if (r == -EBUSY || r == -ERESTARTSYS || r == -EINTR)
> > > > + ret = VM_FAULT_NOPAGE;
> > > > + else if (r)
> > > > + ret = VM_FAULT_SIGBUS;
> > > > + }
> > > > + if (!ret)
> > > > + ret = ttm_bo_vm_fault_reserved(vmf,
> > > > + vmf->vma->vm_page_prot,
> > > > + TTM_BO_VM_NUM_PREFAULT);
> > > > drm_dev_exit(idx);
> > > > } else {
> > > > ret = ttm_bo_vm_dummy_page(vmf, vmf->vma->vm_page_prot);
> > > > diff --git a/drivers/gpu/drm/xe/xe_vm.c b/drivers/gpu/drm/xe/xe_vm.c
> > > > index 9611d7ca2bed..1bdf85c12374 100644
> > > > --- a/drivers/gpu/drm/xe/xe_vm.c
> > > > +++ b/drivers/gpu/drm/xe/xe_vm.c
> > > > @@ -3116,9 +3116,16 @@ static int op_lock_and_prep(struct drm_exec *exec, struct xe_vm *vm,
> > > > err = vma_lock_and_validate(exec,
> > > > gpuva_to_vma(op->base.prefetch.va),
> > > > false);
> > > > - if (!err && !xe_vma_has_no_bo(vma))
> > > > - err = xe_bo_migrate(xe_vma_bo(vma),
> > > > + if (!err && !xe_vma_has_no_bo(vma)) {
> > > > + struct xe_bo *bo = xe_vma_bo(vma);
> > > > +
> > > > + if (region == 0 && !vm->xe->info.has_device_atomics_on_smem &&
> > > > + bo->attr.atomic_access == DRM_XE_VMA_ATOMIC_DEVICE)
> > > > + region = 1;
> > >
> > > I wonder if it is better to just leave region as is and let the next
> > > atomic fault trigger the migration.
> >
> > Ok, let's do it that way.
> >
> > >
> > > > +
> > > > + err = xe_bo_migrate(bo,
> > > > region_to_mem_type[region]);
> > > > + }
> > > > break;
> > > > }
> > > > default:
> > > > diff --git a/drivers/gpu/drm/xe/xe_vm_madvise.c b/drivers/gpu/drm/xe/xe_vm_madvise.c
> > > > index 0f0b94cb43f2..e048eb48826c 100644
> > > > --- a/drivers/gpu/drm/xe/xe_vm_madvise.c
> > > > +++ b/drivers/gpu/drm/xe/xe_vm_madvise.c
> > > > @@ -82,15 +82,54 @@ static int madvise_atomic(struct xe_device *xe, struct xe_vm *vm,
> > > > struct xe_vma **vmas, int num_vmas,
> > > > struct drm_xe_madvise_ops ops)
> > > > {
> > > > - int i;
> > > > + struct xe_bo *bo;
> > > > + int err, i;
> > > > xe_assert(vm->xe, ops.type == DRM_XE_VMA_ATTR_ATOMIC);
> > > > xe_assert(vm->xe, ops.atomic.val > DRM_XE_VMA_ATOMIC_UNDEFINED &&
> > > > ops.atomic.val <= DRM_XE_VMA_ATOMIC_CPU);
> > >
> > > Do you sanitize ops.atomic.val prior to this? Also do we disallow a user
> > > setting DRM_XE_VMA_ATOMIC_UNDEFINED? If not, then this needs to be >=
> > > DRM_XE_VMA_ATOMIC_UNDEFINED.
> > Agreed, it should be >= DRM_XE_VMA_ATOMIC_UNDEFINED. And instead of the
> > assertion, I will sanitize it here.
> >
> > >
> > > > - for (i = 0; i < num_vmas; i++)
> > > > + for (i = 0; i < num_vmas; i++) {
> > > > vmas[i]->attr.atomic_access = ops.atomic.val;
> > > > - /*TODO: handle bo backed vmas */
> > > > +
> > > > + bo = xe_vma_bo(vmas[i]);
> > > > + if (!bo)
> > > > + continue;
> > > > +
> > > > + if (XE_IOCTL_DBG(xe, ops.atomic.val == DRM_XE_VMA_ATOMIC_CPU &&
> > > > + !(bo->flags & XE_BO_FLAG_SYSTEM)))
> > > > + return -EINVAL;
> > > > +
> > >
> > > Note when we fail here (or anywhere else in madvise), we could be in a
> > > state where madvise has partially completed. I think that is actually
> > > ok, as nothing in madvise is fatal since we are just changing
> > > attributes. But I think we need to document in the uAPI kernel doc that
> > > if madvise fails, the state of the madvise attributes is undefined.
> >
> > Will add in kernel-doc of uAPI.
> >
>
> Actually, on second thought, it might be better to sanitize user input
> before attempting madvise. This is similar to vm_bind_ioctl_check_args.
> I think that would be cleaner.
>
> I believe we can make the failing state stable if we can avoid failures
> in madvise_funcs (i.e., by returning void), which should be possible if
> we take locks in non-interruptible modes (likely fine, as we’re not
> doing much inside any locks) and avoid mallocs (none are used in this
> series).
>
> We’d also have to restructure this loop:
>
> for (i = 0; i < args->num_ops; i++) {
> xe_vm_alloc_madvise_vma(vm, advs_ops[i].start, advs_ops[i].range);
>
> vmas = get_vmas(vm, &num_vmas, advs_ops[i].start, advs_ops[i].range);
> if (!vmas) {
> err = -ENOMEM;
> goto free_advs_ops;
> }
>
> attr_type = array_index_nospec(advs_ops[i].type, ARRAY_SIZE(madvise_funcs));
> err = madvise_funcs[attr_type](xe, vm, vmas, num_vmas, advs_ops[i]);
>
> kfree(vmas);
> vmas = NULL;
>
> if (err)
> goto free_advs_ops;
> }
>
> xe_vm_alloc_madvise_vma and get_vmas would run in the first loop (which
> can fail), followed by a second loop that calls madvise_funcs (which
> cannot fail). If the first loop fails, the worst-case scenario is that
> we've split some VMAs into smaller ones, but their attributes would
> remain the same as before the IOCTL.
>
Ah, as soon as I typed this, I realized this doesn't work, as this is an
iterative process (each xe_vm_alloc_madvise_vma depends on the previous
madvise_funcs being done). So scratch the loop restructure, but I still
think validating user input prior to madvise_funcs is a good idea, along
with making madvise_funcs unable to fail if possible.
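
Roughly something like this, just as a sketch (madvise_args_check is a
made-up name and I'm guessing at the exact drm_xe_madvise_ops layout, so
treat it as pseudo-code rather than the final patch):

	/*
	 * Hypothetical up-front validation, mirroring vm_bind_ioctl_check_args:
	 * reject bad user input before any VMA attribute is touched, so a
	 * failing ioctl cannot leave the madvise state half-applied.
	 */
	static int madvise_args_check(struct xe_device *xe,
				      const struct drm_xe_madvise_ops *advs_ops,
				      u32 num_ops)
	{
		u32 i;

		for (i = 0; i < num_ops; i++) {
			const struct drm_xe_madvise_ops *ops = &advs_ops[i];

			/* Unknown attribute type */
			if (XE_IOCTL_DBG(xe, ops->type >= ARRAY_SIZE(madvise_funcs)))
				return -EINVAL;

			/*
			 * Assuming DRM_XE_VMA_ATOMIC_UNDEFINED is the lowest
			 * legal value and DRM_XE_VMA_ATOMIC_CPU the highest,
			 * only the upper bound needs checking for an unsigned
			 * val.
			 */
			if (ops->type == DRM_XE_VMA_ATTR_ATOMIC &&
			    XE_IOCTL_DBG(xe, ops->atomic.val > DRM_XE_VMA_ATOMIC_CPU))
				return -EINVAL;
		}

		return 0;
	}

The ioctl would then call this right after copying advs_ops from
userspace and before the existing per-op loop, e.g.:

	err = madvise_args_check(xe, advs_ops, args->num_ops);
	if (err)
		goto free_advs_ops;

With that in place, madvise_funcs only ever see sanitized input, which
gets us most of the way toward them not being able to fail; the remaining
failure points would be the locks, which could be taken non-interruptible
as discussed above.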
Matt
> I think this approach would be better, avoiding an unknown state on
> failure.
>
> Matt
>
> > >
> > > In practice this really should never fail unless a user is giving bad
> > > input, or there is extreme memory pressure and kmalloc fails.
> > >
> > > Matt
> > >
> > > > + /* NOTE: The following atomic checks are platform-specific. For example,
> > > > + * if a device supports CXL atomics, these may not be necessary or
> > > > + * may behave differently.
> > > > + */
> > > > + if (XE_IOCTL_DBG(xe, ops.atomic.val == DRM_XE_VMA_ATOMIC_DEVICE &&
> > > > + !(bo->flags & XE_BO_FLAG_VRAM0) &&
> > > > + !(bo->flags & XE_BO_FLAG_VRAM1) &&
> > > > + !(bo->flags & XE_BO_FLAG_SYSTEM &&
> > > > + xe->info.has_device_atomics_on_smem)))
> > > > + return -EINVAL;
> > > > +
> > > > + if (XE_IOCTL_DBG(xe, ops.atomic.val == DRM_XE_VMA_ATOMIC_GLOBAL &&
> > > > + (!(bo->flags & XE_BO_FLAG_SYSTEM) ||
> > > > + (!(bo->flags & XE_BO_FLAG_VRAM0) &&
> > > > + !(bo->flags & XE_BO_FLAG_VRAM1)))))
> > > > + return -EINVAL;
> > > > +
> > > > + err = xe_bo_lock(bo, true);
> > > > + if (err)
> > > > + return err;
> > > > + bo->attr.atomic_access = ops.atomic.val;
> > > > +
> > > > + /* Invalidate CPU page table so the bo can migrate to smem on next access */
> > > > + if (xe_bo_is_vram(bo) &&
> > > > + (bo->attr.atomic_access == DRM_XE_VMA_ATOMIC_CPU ||
> > > > + bo->attr.atomic_access == DRM_XE_VMA_ATOMIC_GLOBAL))
> > > > + ttm_bo_unmap_virtual(&bo->ttm);
> > > > +
> > > > + xe_bo_unlock(bo);
> > > > + }
> > > > return 0;
> > > > }
> > > > --
> > > > 2.34.1
> > > >
> >