[Intel-xe] [PATCH] drm/xe: Only set xe_vma_op.map fields for GPUVA map operations
Welty, Brian
brian.welty at intel.com
Fri Nov 17 19:30:14 UTC 2023
On 11/17/2023 12:31 AM, Matthew Brost wrote:
> On Thu, Nov 16, 2023 at 11:57:09AM -0800, Welty, Brian wrote:
>>
>>
>> On 11/14/2023 7:15 PM, Matthew Brost wrote:
>>> XE_VM_BIND_OP_MAP_* IOCTL operations can result in GPUVA unmap, remap,
>>> or map operations in vm_bind_ioctl_ops_create. The xe_vma_op.map fields
>>> are blindly set, which is incorrect for GPUVA unmap or remap operations.
>>> Fix this by only setting xe_vma_op.map for GPUVA map operations. Also
>>> restructure vm_bind_ioctl_ops_create a bit to make the code more
>>> readable.
>>>
>>> Reported-by: Dafna Hirschfeld <dhirschfeld at habana.ai>
>>> Signec-off-by: Matthew Brost <matthew.brost at intel.com>
>>
>> Typo in your signed-off!
>>
>
> Yep, will fix.
>
>> One comment below.
>>
>>> ---
>>> drivers/gpu/drm/xe/xe_vm.c | 59 ++++++++++++--------------------------
>>> 1 file changed, 18 insertions(+), 41 deletions(-)
>>>
>>> diff --git a/drivers/gpu/drm/xe/xe_vm.c b/drivers/gpu/drm/xe/xe_vm.c
>>> index d45f4f1d490f..1049869b5a6f 100644
>>> --- a/drivers/gpu/drm/xe/xe_vm.c
>>> +++ b/drivers/gpu/drm/xe/xe_vm.c
>>> @@ -2187,42 +2187,12 @@ vm_bind_ioctl_ops_create(struct xe_vm *vm, struct xe_bo *bo,
>>
>>
>> Looks like you can trim the arguments to this function....
>> At a glance, looks like flags, tile_mask, and region no longer need to be
>> passed.
>>
>
> These are still used at the bottom of the function...
Oops. I read the patch as if you had moved that blob of code at the end
into the caller for some reason.... either way would have worked.
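For anyone reading this in the archive: the reason blindly setting those
fields is harmful is that xe_vma_op keeps its per-operation payload in a
union keyed off the GPUVA op type, so writing op->map.* on an unmap or
remap entry scribbles over a different union member. A rough sketch of
the layout (simplified and from memory -- see xe_vm_types.h for the real
definitions; the exact names below are illustrative only):

	struct xe_vma_op {
		struct drm_gpuva_op base;	/* base.op: MAP/REMAP/UNMAP/PREFETCH */
		u8 tile_mask;			/* meaningful for every op type */
		union {				/* valid member depends on base.op */
			struct {
				bool immediate;
				bool read_only;
				bool is_null;
			} map;			/* DRM_GPUVA_OP_MAP only */
			struct {
				u32 region;
			} prefetch;		/* DRM_GPUVA_OP_PREFETCH only */
		};
	};

which is exactly why the consolidated loop below checks __op->op before
touching op->map or op->prefetch.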
Reviewed-by: Brian Welty <brian.welty at intel.com>
>
>> -Brian
>>
>>
>>> case XE_VM_BIND_OP_MAP_USERPTR:
>>> ops = drm_gpuvm_sm_map_ops_create(&vm->gpuvm, addr, range,
>>> obj, bo_offset_or_userptr);
>>> - if (IS_ERR(ops))
>>> - return ops;
>>> -
>>> - drm_gpuva_for_each_op(__op, ops) {
>>> - struct xe_vma_op *op = gpuva_op_to_vma_op(__op);
>>> -
>>> - op->tile_mask = tile_mask;
>>> - op->map.immediate =
>>> - flags & XE_VM_BIND_FLAG_IMMEDIATE;
>>> - op->map.read_only =
>>> - flags & XE_VM_BIND_FLAG_READONLY;
>>> - op->map.is_null = flags & XE_VM_BIND_FLAG_NULL;
>>> - }
>>> break;
>>> case XE_VM_BIND_OP_UNMAP:
>>> ops = drm_gpuvm_sm_unmap_ops_create(&vm->gpuvm, addr, range);
>>> - if (IS_ERR(ops))
>>> - return ops;
>>> -
>>> - drm_gpuva_for_each_op(__op, ops) {
>>> - struct xe_vma_op *op = gpuva_op_to_vma_op(__op);
>>> -
>>> - op->tile_mask = tile_mask;
>>> - }
>>> break;
>>> case XE_VM_BIND_OP_PREFETCH:
>>> ops = drm_gpuvm_prefetch_ops_create(&vm->gpuvm, addr, range);
>>> - if (IS_ERR(ops))
>>> - return ops;
>>> -
>>> - drm_gpuva_for_each_op(__op, ops) {
>>> - struct xe_vma_op *op = gpuva_op_to_vma_op(__op);
>>> -
>>> - op->tile_mask = tile_mask;
>>> - op->prefetch.region = region;
>>> - }
>>> break;
>>> case XE_VM_BIND_OP_UNMAP_ALL:
>>> xe_assert(vm->xe, bo);
>>> @@ -2232,19 +2202,13 @@ vm_bind_ioctl_ops_create(struct xe_vm *vm, struct xe_bo *bo,
>>> return ERR_PTR(err);
>>> ops = drm_gpuvm_gem_unmap_ops_create(&vm->gpuvm, obj);
>>> xe_bo_unlock(bo);
>>> - if (IS_ERR(ops))
>>> - return ops;
>>> -
>>> - drm_gpuva_for_each_op(__op, ops) {
>>> - struct xe_vma_op *op = gpuva_op_to_vma_op(__op);
>>> -
>>> - op->tile_mask = tile_mask;
>>> - }
>>> break;
>>> default:
>>> drm_warn(&vm->xe->drm, "NOT POSSIBLE");
>>> ops = ERR_PTR(-EINVAL);
>>> }
>>> + if (IS_ERR(ops))
>>> + return ops;
>>> #ifdef TEST_VM_ASYNC_OPS_ERROR
>>> if (operation & FORCE_ASYNC_OP_ERROR) {
>>> @@ -2255,9 +2219,22 @@ vm_bind_ioctl_ops_create(struct xe_vm *vm, struct xe_bo *bo,
>>> }
>>> #endif
>>> - if (!IS_ERR(ops))
>>> - drm_gpuva_for_each_op(__op, ops)
>>> - print_op(vm->xe, __op);
>>> + drm_gpuva_for_each_op(__op, ops) {
>>> + struct xe_vma_op *op = gpuva_op_to_vma_op(__op);
>>> +
>>> + op->tile_mask = tile_mask;
>>> + if (__op->op == DRM_GPUVA_OP_MAP) {
>>> + op->map.immediate =
>>> + flags & XE_VM_BIND_FLAG_IMMEDIATE;
>>> + op->map.read_only =
>>> + flags & XE_VM_BIND_FLAG_READONLY;
>>> + op->map.is_null = flags & XE_VM_BIND_FLAG_NULL;
>>> + } else if (__op->op == DRM_GPUVA_OP_PREFETCH) {
>>> + op->prefetch.region = region;
>>> + }
>
> Here is the use of flags, tile_mask, and region.
>
> Matt
>
>>> +
>>> + print_op(vm->xe, __op);
>>> + }
>>> return ops;
>>> }
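One more note for the archive: the reason an XE_VM_BIND_OP_MAP_* bind can
produce non-map GPUVA ops in the first place is that
drm_gpuvm_sm_map_ops_create() splits and merges against the existing VA
space, e.g. mapping over an already-bound range emits REMAP (and possibly
UNMAP) entries alongside the new MAP. A hedged, illustrative sketch of
walking the returned list (not the driver's actual consumer code; assumes
ops is the drm_gpuva_ops list returned above):

	struct drm_gpuva_op *__op;

	drm_gpuva_for_each_op(__op, ops) {
		switch (__op->op) {
		case DRM_GPUVA_OP_MAP:
			/* brand new mapping; in xe this is where op->map.* is set */
			break;
		case DRM_GPUVA_OP_REMAP:
			/* an existing mapping is split/shrunk by the new one */
			break;
		case DRM_GPUVA_OP_UNMAP:
			/* an existing mapping is removed entirely */
			break;
		case DRM_GPUVA_OP_PREFETCH:
			/* prefetch of an existing mapping; region is the payload */
			break;
		}
	}

which is the same reason the patch keys the field assignments off
__op->op rather than off the IOCTL operation.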