[PATCH v3 19/19] drm/xe/bo: Update atomic_access attribute on madvise
Ghimiray, Himal Prasad
himal.prasad.ghimiray at intel.com
Thu May 29 03:03:39 UTC 2025
On 29-05-2025 05:16, Matthew Brost wrote:
> On Tue, May 27, 2025 at 10:10:03PM +0530, Himal Prasad Ghimiray wrote:
>> Update the bo_atomic_access based on user-provided input and determine
>> the migration to smem during a CPU fault
>>
>> v2 (Matthew Brost)
>> - Avoid cpu unmapping if bo is already in smem
>> - check atomics on smem too for ioctl
>> - Add comments
>>
>> Signed-off-by: Himal Prasad Ghimiray <himal.prasad.ghimiray at intel.com>
>> ---
>> drivers/gpu/drm/xe/xe_bo.c | 21 ++++++++++++--
>> drivers/gpu/drm/xe/xe_vm.c | 11 ++++++--
>> drivers/gpu/drm/xe/xe_vm_madvise.c | 45 ++++++++++++++++++++++++++++--
>> 3 files changed, 69 insertions(+), 8 deletions(-)
>>
>> diff --git a/drivers/gpu/drm/xe/xe_bo.c b/drivers/gpu/drm/xe/xe_bo.c
>> index d99d91fe8aa9..9072e8ae3f3e 100644
>> --- a/drivers/gpu/drm/xe/xe_bo.c
>> +++ b/drivers/gpu/drm/xe/xe_bo.c
>> @@ -1662,6 +1662,12 @@ static void xe_gem_object_close(struct drm_gem_object *obj,
>> }
>> }
>>
>> +static bool should_migrate_to_smem(struct xe_bo *bo)
>> +{
>
> xe_bo_assert_held, more on that in my reply to the previous patch.
Sure
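Something like this in the next revision (rough sketch, assuming the helper
keeps its current shape):

static bool should_migrate_to_smem(struct xe_bo *bo)
{
	/* Reads attributes updated under the bo dma-resv lock by madvise */
	xe_bo_assert_held(bo);

	return bo->attr.atomic_access == DRM_XE_VMA_ATOMIC_GLOBAL ||
	       bo->attr.atomic_access == DRM_XE_VMA_ATOMIC_CPU;
}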
>
>> + return bo->attr.atomic_access == DRM_XE_VMA_ATOMIC_GLOBAL ||
>> + bo->attr.atomic_access == DRM_XE_VMA_ATOMIC_CPU;
>> +}
>> +
>
> Hmm, this is tricky. I guess this means shared atomics on BOs do not
> just work, whereas for SVM they do (i.e., DRM_XE_VMA_ATOMIC_UNDEFINED
> means atomics do not work for BOs, but for SVM they do). I suppose this
> is the current behavior. I think this will need to be documented in the
> uAPI kernel doc.
Makes sense
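Draft wording for the uAPI kernel-doc (a sketch; exact placement alongside
the atomic_access attribute documentation is TBD):

/*
 * DRM_XE_VMA_ATOMIC_UNDEFINED:
 *	For SVM ranges atomics work with the default migration policy, an
 *	atomic fault migrates the range as needed. For BO-backed VMAs no
 *	migration is triggered, so atomics are not guaranteed to work until
 *	an explicit atomic_access madvise (DEVICE/GLOBAL/CPU) is set.
 */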
>
>> static vm_fault_t xe_gem_fault(struct vm_fault *vmf)
>> {
>> struct ttm_buffer_object *tbo = vmf->vma->vm_private_data;
>> @@ -1670,7 +1676,7 @@ static vm_fault_t xe_gem_fault(struct vm_fault *vmf)
>> struct xe_bo *bo = ttm_to_xe_bo(tbo);
>> bool needs_rpm = bo->flags & XE_BO_FLAG_VRAM_MASK;
>> vm_fault_t ret;
>> - int idx;
>> + int idx, r = 0;
>>
>> if (needs_rpm)
>> xe_pm_runtime_get(xe);
>> @@ -1682,8 +1688,17 @@ static vm_fault_t xe_gem_fault(struct vm_fault *vmf)
>> if (drm_dev_enter(ddev, &idx)) {
>> trace_xe_bo_cpu_fault(bo);
>>
>> - ret = ttm_bo_vm_fault_reserved(vmf, vmf->vma->vm_page_prot,
>> - TTM_BO_VM_NUM_PREFAULT);
>> + if (should_migrate_to_smem(bo)) {
>> + r = xe_bo_migrate(bo, XE_PL_TT);
>> + if (r == -EBUSY || r == -ERESTARTSYS || r == -EINTR)
>> + ret = VM_FAULT_NOPAGE;
>> + else if (r)
>> + ret = VM_FAULT_SIGBUS;
>> + }
>> + if (!ret)
>> + ret = ttm_bo_vm_fault_reserved(vmf,
>> + vmf->vma->vm_page_prot,
>> + TTM_BO_VM_NUM_PREFAULT);
>> drm_dev_exit(idx);
>> } else {
>> ret = ttm_bo_vm_dummy_page(vmf, vmf->vma->vm_page_prot);
>> diff --git a/drivers/gpu/drm/xe/xe_vm.c b/drivers/gpu/drm/xe/xe_vm.c
>> index 9611d7ca2bed..1bdf85c12374 100644
>> --- a/drivers/gpu/drm/xe/xe_vm.c
>> +++ b/drivers/gpu/drm/xe/xe_vm.c
>> @@ -3116,9 +3116,16 @@ static int op_lock_and_prep(struct drm_exec *exec, struct xe_vm *vm,
>> err = vma_lock_and_validate(exec,
>> gpuva_to_vma(op->base.prefetch.va),
>> false);
>> - if (!err && !xe_vma_has_no_bo(vma))
>> - err = xe_bo_migrate(xe_vma_bo(vma),
>> + if (!err && !xe_vma_has_no_bo(vma)) {
>> + struct xe_bo *bo = xe_vma_bo(vma);
>> +
>> + if (region == 0 && !vm->xe->info.has_device_atomics_on_smem &&
>> + bo->attr.atomic_access == DRM_XE_VMA_ATOMIC_DEVICE)
>> + region = 1;
>
> I wonder if it is better to just leave region as-is and let the next atomic
> fault trigger the migration.
OK, let's do it that way.
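So the prefetch hunk goes back to roughly what it was, with no region
override (sketch):

		if (!err && !xe_vma_has_no_bo(vma))
			err = xe_bo_migrate(xe_vma_bo(vma),
					    region_to_mem_type[region]);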
>
>> +
>> + err = xe_bo_migrate(bo,
>> region_to_mem_type[region]);
>> + }
>> break;
>> }
>> default:
>> diff --git a/drivers/gpu/drm/xe/xe_vm_madvise.c b/drivers/gpu/drm/xe/xe_vm_madvise.c
>> index 0f0b94cb43f2..e048eb48826c 100644
>> --- a/drivers/gpu/drm/xe/xe_vm_madvise.c
>> +++ b/drivers/gpu/drm/xe/xe_vm_madvise.c
>> @@ -82,15 +82,54 @@ static int madvise_atomic(struct xe_device *xe, struct xe_vm *vm,
>> struct xe_vma **vmas, int num_vmas,
>> struct drm_xe_madvise_ops ops)
>> {
>> - int i;
>> + struct xe_bo *bo;
>> + int err, i;
>>
>> xe_assert(vm->xe, ops.type == DRM_XE_VMA_ATTR_ATOMIC);
>> xe_assert(vm->xe, ops.atomic.val > DRM_XE_VMA_ATOMIC_UNDEFINED &&
>> ops.atomic.val <= DRM_XE_VMA_ATOMIC_CPU);
>>
>
> Do you sanitize ops.atomic.val prior to this? Also, do we disallow a user
> setting DRM_XE_VMA_ATOMIC_UNDEFINED? If not, then this needs to be >=
> DRM_XE_VMA_ATOMIC_UNDEFINED.
Agreed, it should be >= DRM_XE_VMA_ATOMIC_UNDEFINED. Instead of the
assertion, I will sanitize it here.
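Something along these lines, replacing the assert with input sanitization
(sketch; assumes ops.atomic.val is unsigned, so only the upper bound needs
an explicit check):

	/* DRM_XE_VMA_ATOMIC_UNDEFINED (0) stays a valid value */
	if (XE_IOCTL_DBG(xe, ops.atomic.val > DRM_XE_VMA_ATOMIC_CPU))
		return -EINVAL;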
>
>> - for (i = 0; i < num_vmas; i++)
>> + for (i = 0; i < num_vmas; i++) {
>> vmas[i]->attr.atomic_access = ops.atomic.val;
>> - /*TODO: handle bo backed vmas */
>> +
>> + bo = xe_vma_bo(vmas[i]);
>> + if (!bo)
>> + continue;
>> +
>> + if (XE_IOCTL_DBG(xe, ops.atomic.val == DRM_XE_VMA_ATOMIC_CPU &&
>> + !(bo->flags & XE_BO_FLAG_SYSTEM)))
>> + return -EINVAL;
>> +
>
> Note that when we fail here (or anywhere else in madvise), we could be in a
> state where madvise has partially completed. I think that is actually okay,
> since nothing in madvise is fatal; we are just changing attributes. But I
> think we need to document in the uAPI kernel doc that if madvise fails,
> the state of the madvise attributes is undefined.
Will add this to the uAPI kernel-doc.
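Rough wording I have in mind for the madvise ioctl kernel-doc (sketch):

/*
 * If the madvise ioctl returns an error, some of the requested attribute
 * updates may already have been applied; the resulting attribute state is
 * undefined. This is not fatal, as madvise only changes attributes, and
 * userspace may simply re-issue the call.
 */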
>
> In practice this really should never fail unless a user gives bad input
> or kmalloc fails under extreme memory pressure.
>
> Matt
>
>> + /* NOTE: The following atomic checks are platform-specific. For example,
>> + * if a device supports CXL atomics, these may not be necessary or
>> + * may behave differently.
>> + */
>> + if (XE_IOCTL_DBG(xe, ops.atomic.val == DRM_XE_VMA_ATOMIC_DEVICE &&
>> + !(bo->flags & XE_BO_FLAG_VRAM0) &&
>> + !(bo->flags & XE_BO_FLAG_VRAM1) &&
>> + !(bo->flags & XE_BO_FLAG_SYSTEM &&
>> + xe->info.has_device_atomics_on_smem)))
>> + return -EINVAL;
>> +
>> + if (XE_IOCTL_DBG(xe, ops.atomic.val == DRM_XE_VMA_ATOMIC_GLOBAL &&
>> + (!(bo->flags & XE_BO_FLAG_SYSTEM) ||
>> + (!(bo->flags & XE_BO_FLAG_VRAM0) &&
>> + !(bo->flags & XE_BO_FLAG_VRAM1)))))
>> + return -EINVAL;
>> +
>> + err = xe_bo_lock(bo, true);
>> + if (err)
>> + return err;
>> + bo->attr.atomic_access = ops.atomic.val;
>> +
>> + /* Invalidate cpu page table, so bo can migrate to smem in next access */
>> + if (xe_bo_is_vram(bo) &&
>> + (bo->attr.atomic_access == DRM_XE_VMA_ATOMIC_CPU ||
>> + bo->attr.atomic_access == DRM_XE_VMA_ATOMIC_GLOBAL))
>> + ttm_bo_unmap_virtual(&bo->ttm);
>> +
>> + xe_bo_unlock(bo);
>> + }
>> return 0;
>> }
>>
>> --
>> 2.34.1
>>