[PATCH v2 1/3] drm/xe/userptr: restore invalidation list on error

Matthew Auld matthew.auld at intel.com
Fri Feb 21 13:17:43 UTC 2025


On 21/02/2025 11:20, Thomas Hellström wrote:
> On Fri, 2025-02-21 at 11:11 +0000, Matthew Auld wrote:
>> On 20/02/2025 23:52, Matthew Brost wrote:
>>> On Mon, Feb 17, 2025 at 07:58:11PM -0800, Matthew Brost wrote:
>>>> On Mon, Feb 17, 2025 at 09:38:26AM +0000, Matthew Auld wrote:
>>>>> On 15/02/2025 01:28, Matthew Brost wrote:
>>>>>> On Fri, Feb 14, 2025 at 05:05:28PM +0000, Matthew Auld wrote:
>>>>>>> On error, restore anything still on the repin_list back to the
>>>>>>> invalidation list. For the actual pin, so long as the vma is
>>>>>>> tracked on either list it should get picked up on the next pin.
>>>>>>> However, it looks possible for the vma to get nuked while still
>>>>>>> present on this per-VM repin_list, leading to corruption. An
>>>>>>> alternative might instead be to just remove the link when
>>>>>>> destroying the vma.
>>>>>>>
>>>>>>> Fixes: ed2bdf3b264d ("drm/xe/vm: Subclass userptr vmas")
>>>>>>> Suggested-by: Matthew Brost <matthew.brost at intel.com>
>>>>>>> Signed-off-by: Matthew Auld <matthew.auld at intel.com>
>>>>>>> Cc: Thomas Hellström <thomas.hellstrom at linux.intel.com>
>>>>>>> Cc: <stable at vger.kernel.org> # v6.8+
>>>>>>> ---
>>>>>>>     drivers/gpu/drm/xe/xe_vm.c | 26 +++++++++++++++++++-----
>>>>>>> --
>>>>>>>     1 file changed, 19 insertions(+), 7 deletions(-)
>>>>>>>
>>>>>>> diff --git a/drivers/gpu/drm/xe/xe_vm.c
>>>>>>> b/drivers/gpu/drm/xe/xe_vm.c
>>>>>>> index d664f2e418b2..668b0bde7822 100644
>>>>>>> --- a/drivers/gpu/drm/xe/xe_vm.c
>>>>>>> +++ b/drivers/gpu/drm/xe/xe_vm.c
>>>>>>> @@ -670,12 +670,12 @@ int xe_vm_userptr_pin(struct xe_vm *vm)
>>>>>>>     	list_for_each_entry_safe(uvma, next, &vm->userptr.invalidated,
>>>>>>>     				 userptr.invalidate_link) {
>>>>>>>     		list_del_init(&uvma->userptr.invalidate_link);
>>>>>>> -		list_move_tail(&uvma->userptr.repin_link,
>>>>>>> -			       &vm->userptr.repin_list);
>>>>>>> +		list_add_tail(&uvma->userptr.repin_link,
>>>>>>> +			      &vm->userptr.repin_list);
>>>>>>
>>>>>> Why this change?
>>>>>
>>>>> Just that with this patch the repin_link should now always be empty
>>>>> at this point, I think. add should complain if that is not the case.
>>>>>
>>>>
>>>> If it is always expected to be empty, then yeah, maybe add an
>>>> xe_assert for this as the list management is pretty tricky.
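(For illustration, a minimal sketch of what such an assert could look
like, assuming repin_link is always initialised with INIT_LIST_HEAD()
and removed with list_del_init() so that list_empty() reflects whether
it is currently linked; exact placement is up to the actual patch:)

        /* Sketch only: the entry should not already be on the repin list */
        xe_assert(vm->xe, list_empty(&uvma->userptr.repin_link));
        list_add_tail(&uvma->userptr.repin_link, &vm->userptr.repin_list);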
>>>>
>>>>>>
>>>>>>>     	}
>>>>>>>     	spin_unlock(&vm->userptr.invalidated_lock);
>>>>>>> -	/* Pin and move to temporary list */
>>>>>>> +	/* Pin and move to bind list */
>>>>>>>     	list_for_each_entry_safe(uvma, next, &vm->userptr.repin_list,
>>>>>>>     				 userptr.repin_link) {
>>>>>>>     		err = xe_vma_userptr_pin_pages(uvma);
>>>>>>> @@ -691,10 +691,10 @@ int xe_vm_userptr_pin(struct xe_vm *vm)
>>>>>>>     			err = xe_vm_invalidate_vma(&uvma->vma);
>>>>>>>     			xe_vm_unlock(vm);
>>>>>>>     			if (err)
>>>>>>> -				return err;
>>>>>>> +				break;
>>>>>>>     		} else {
>>>>>>> -			if (err < 0)
>>>>>>> -				return err;
>>>>>>> +			if (err)
>>>>>>> +				break;
>>>>>>>     			list_del_init(&uvma->userptr.repin_link);
>>>>>>>     			list_move_tail(&uvma->vma.combined_links.rebind,
>>>>>>> @@ -702,7 +702,19 @@ int xe_vm_userptr_pin(struct xe_vm *vm)
>>>>>>>     		}
>>>>>>>     	}
>>>>>>> -	return 0;
>>>>>>> +	if (err) {
>>>>>>> +		down_write(&vm->userptr.notifier_lock);
>>>>>>
>>>>>> Can you explain why you take the notifier lock here? I don't
>>>>>> think this is required unless I'm missing something.
>>>>>
>>>>> For the invalidated list, the docs say:
>>>>>
>>>>> "Removing items from the list additionally requires @lock in
>>>>> write mode, and
>>>>> adding items to the list requires the @userptr.notifer_lock in
>>>>> write mode."
>>>>>
>>>>> Not sure if the docs need to be updated here?
>>>>>
>>>>
>>>> Oh. I believe the part of the comment about 'adding items to the list
>>>> requires the @userptr.notifer_lock in write mode' really means
>>>> something like this:
>>>>
>>>> 'When adding to @vm->userptr.invalidated in the notifier the
>>>> @userptr.notifer_lock in write mode protects against concurrent
>>>> VM binds
>>>> from setting up newly invalidated pages.'
>>>>
>>>> So with the above, and since this code path is in the VM bind path
>>>> (i.e. we are not racing with other binds), I think the
>>>> vm->userptr.invalidated_lock is sufficient. Maybe ask Thomas if he
>>>> agrees here.
>>>>
>>>
>>> After some discussion with Thomas, removing the notifier lock here is
>>> safe.
>>
>> Thanks for confirming.
> 
> So basically that was to protect exec when it takes the notifier lock
> in read mode and checks that there are no invalidated userptrs; that
> needs to stay true as long as the notifier lock is held.
> 
> But as MBrost pointed out, the vm lock is also held, so I think the
> kerneldoc should be updated so that the requirement is that either the
> notifier lock is held in write mode, or the vm lock in write mode.
> 
> As a general comment, these locking protection docs are there to
> simplify reading and writing of the code: when new code is written and
> reviewed, we should just keep to the rules to avoid auditing all
> locations in the driver where the protected data-structure is touched.
> If we want to update those docs, I think a complete such audit needs to
> be done so that all use-cases are understood.
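
(For illustration only, one possible shape of the updated kerneldoc
wording Thomas describes above, pending the audit he asks for; this
wording is an assumption, not part of the patch or the current source:)

        /**
         * @invalidated: List of invalidated userptrs, protected from
         * access by the @invalidated_lock. Removing items from the list
         * additionally requires @lock in write mode; adding items to the
         * list requires either the @userptr.notifier_lock or the vm
         * @lock in write mode.
         */
        struct list_head invalidated;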

For this patch, is the preference to go with the slightly overzealous
locking for now, then circle back later to fix the doc when adding the
new helper, and at the same time also audit all callers?
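
(For illustration only, a rough sketch of how the error path in this
patch might later be reworked on top of the xe_vma_userptr_add_invalidated()
helper Matt proposes further down; dropping the explicit notifier lock
here is an assumption based on the discussion, not something this patch
does:)

        if (err) {
                /*
                 * Sketch: on error, move anything still on the repin
                 * list back to the invalidated list so it gets retried
                 * on the next pin. The proposed helper would assert that
                 * either the vm lock or the notifier lock is held in
                 * write mode, which holds here since the vm lock is held.
                 */
                list_for_each_entry_safe(uvma, next, &vm->userptr.repin_list,
                                         userptr.repin_link) {
                        list_del_init(&uvma->userptr.repin_link);
                        xe_vma_userptr_add_invalidated(uvma);
                }
        }

        return err;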

> 
> /Thomas
> 
> 
>>
>>>
>>> However, for adding, either the userptr.notifier_lock or vm->lock is
>>> required, to also avoid races between binds, execs, and the rebind
>>> worker.
>>>
>>> I'd like to update the documentation and add a helper like this:
>>>
>>> void xe_vma_userptr_add_invalidated(struct xe_userptr_vma *uvma)
>>> {
>>>          struct xe_vm *vm = xe_vma_vm(&uvma->vma);
>>>
>>>          lockdep_assert(lock_is_held_type(&vm->lock.dep_map, 1) ||
>>>                         lock_is_held_type(&vm->userptr.notifier_lock.dep_map, 1));
>>>
>>>          spin_lock(&vm->userptr.invalidated_lock);
>>>          list_move_tail(&uvma->userptr.invalidate_link,
>>>                         &vm->userptr.invalidated);
>>>          spin_unlock(&vm->userptr.invalidated_lock);
>>> }
>>
>> Sounds good.
>>
>>>
>>> However, let's delay the helper until this series and a recently
>>> posted series of mine [1] merge, as both are fixes series and we're
>>> hoping for a clean backport.
>>
>> Makes sense.
>>
>>>
>>> Matt
>>>
>>> [1] https://patchwork.freedesktop.org/series/145198/
>>>
>>>> Matt
>>>>
>>>>>>
>>>>>> Matt
>>>>>>
>>>>>>> +		spin_lock(&vm->userptr.invalidated_lock);
>>>>>>> +		list_for_each_entry_safe(uvma, next, &vm->userptr.repin_list,
>>>>>>> +					 userptr.repin_link) {
>>>>>>> +			list_del_init(&uvma->userptr.repin_link);
>>>>>>> +			list_move_tail(&uvma->userptr.invalidate_link,
>>>>>>> +				       &vm->userptr.invalidated);
>>>>>>> +		}
>>>>>>> +		spin_unlock(&vm->userptr.invalidated_lock);
>>>>>>> +		up_write(&vm->userptr.notifier_lock);
>>>>>>> +	}
>>>>>>> +	return err;
>>>>>>>     }
>>>>>>>     /**
>>>>>>> -- 
>>>>>>> 2.48.1
>>>>>>>
>>>>>
>>
> 


