[PATCH v2 2/3] drm/xe/userptr: fix EFAULT handling

Matthew Auld matthew.auld at intel.com
Mon Feb 17 09:19:11 UTC 2025


On 15/02/2025 01:23, Matthew Brost wrote:
> On Fri, Feb 14, 2025 at 05:05:29PM +0000, Matthew Auld wrote:
>> Currently we treat EFAULT from hmm_range_fault() as a non-fatal error
>> when called from xe_vm_userptr_pin(), with the idea that we want to
>> avoid killing the entire vm and throwing an error, under the assumption
>> that the user just did an unmap or something and has no intention of
>> actually touching that memory from the GPU. At this point we have
>> already zapped the PTEs, so any access should generate a page fault,
>> and if the pin also fails there it then becomes fatal.
>>
>> However, it looks like it's possible for the userptr vma to still be
>> on the rebind list in preempt_rebind_work_func() if we had to retry the
>> pin due to something happening in the caller before we did the rebind
>> step, and in the meantime needed to re-validate the userptr, this time
>> hitting the EFAULT.
>>
>> This might explain an internal user report of hitting:
>>
>> [  191.738349] WARNING: CPU: 1 PID: 157 at drivers/gpu/drm/xe/xe_res_cursor.h:158 xe_pt_stage_bind.constprop.0+0x60a/0x6b0 [xe]
>> [  191.738551] Workqueue: xe-ordered-wq preempt_rebind_work_func [xe]
>> [  191.738616] RIP: 0010:xe_pt_stage_bind.constprop.0+0x60a/0x6b0 [xe]
>> [  191.738690] Call Trace:
>> [  191.738692]  <TASK>
>> [  191.738694]  ? show_regs+0x69/0x80
>> [  191.738698]  ? __warn+0x93/0x1a0
>> [  191.738703]  ? xe_pt_stage_bind.constprop.0+0x60a/0x6b0 [xe]
>> [  191.738759]  ? report_bug+0x18f/0x1a0
>> [  191.738764]  ? handle_bug+0x63/0xa0
>> [  191.738767]  ? exc_invalid_op+0x19/0x70
>> [  191.738770]  ? asm_exc_invalid_op+0x1b/0x20
>> [  191.738777]  ? xe_pt_stage_bind.constprop.0+0x60a/0x6b0 [xe]
>> [  191.738834]  ? ret_from_fork_asm+0x1a/0x30
>> [  191.738849]  bind_op_prepare+0x105/0x7b0 [xe]
>> [  191.738906]  ? dma_resv_reserve_fences+0x301/0x380
>> [  191.738912]  xe_pt_update_ops_prepare+0x28c/0x4b0 [xe]
>> [  191.738966]  ? kmemleak_alloc+0x4b/0x80
>> [  191.738973]  ops_execute+0x188/0x9d0 [xe]
>> [  191.739036]  xe_vm_rebind+0x4ce/0x5a0 [xe]
>> [  191.739098]  ? trace_hardirqs_on+0x4d/0x60
>> [  191.739112]  preempt_rebind_work_func+0x76f/0xd00 [xe]
>>
>> This is then followed by a NULL pointer dereference when running some
>> workload, since the sg was never actually populated, yet the vma is
>> still marked for rebind when it should be skipped for this special
>> EFAULT case. From the logs it does seem like we hit this EFAULT case
>> before the explosions.
>>
> 
> It would be nice to verify that this fixes the bug report.

Yes, reporter said it fixes it. Or at least the previous version did. 
See GSD-10562 if you are curious. Will re-phrase the commit message to 
make that clear.

> 
>> v2 (MattB):
>>   - Move earlier
>>
>> Fixes: 521db22a1d70 ("drm/xe: Invalidate userptr VMA on page pin fault")
>> Signed-off-by: Matthew Auld <matthew.auld at intel.com>
>> Cc: Matthew Brost <matthew.brost at intel.com>
> 
> Anyways, LGTM:
> Reviewed-by: Matthew Brost <matthew.brost at intel.com>

Thanks.

> 
>> Cc: Thomas Hellström <thomas.hellstrom at linux.intel.com>
>> Cc: <stable at vger.kernel.org> # v6.10+
>> ---
>>   drivers/gpu/drm/xe/xe_vm.c | 12 ++++++++++++
>>   1 file changed, 12 insertions(+)
>>
>> diff --git a/drivers/gpu/drm/xe/xe_vm.c b/drivers/gpu/drm/xe/xe_vm.c
>> index 668b0bde7822..f36e2cc1d155 100644
>> --- a/drivers/gpu/drm/xe/xe_vm.c
>> +++ b/drivers/gpu/drm/xe/xe_vm.c
>> @@ -681,6 +681,18 @@ int xe_vm_userptr_pin(struct xe_vm *vm)
>>   		err = xe_vma_userptr_pin_pages(uvma);
>>   		if (err == -EFAULT) {
>>   			list_del_init(&uvma->userptr.repin_link);
>> +			/*
>> +			 * We might have already done the pin once, but
>> +			 * then had to retry before the re-bind happened, due
>> +			 * to some other condition in the caller, and in the
>> +			 * meantime the userptr got dinged by the notifier such
>> +			 * that we need to revalidate here, but this time we hit
>> +			 * the EFAULT. In such a case make sure we remove
>> +			 * ourselves from the rebind list to avoid going down in
>> +			 * flames.
>> +			 */
>> +			if (!list_empty(&uvma->vma.combined_links.rebind))
>> +				list_del_init(&uvma->vma.combined_links.rebind);
>>   
>>   			/* Wait for pending binds */
>>   			xe_vm_lock(vm, false);
>> -- 
>> 2.48.1
>>


