[PATCH v4] drm/xe/ufence: Signal ufence faster when possible
Nirmoy Das
nirmoy.das at linux.intel.com
Mon Nov 11 14:49:48 UTC 2024
On 11/10/2024 4:51 AM, Matthew Brost wrote:
> On Fri, Oct 18, 2024 at 05:29:58PM +0200, Nirmoy Das wrote:
>> When the backing fence is already signaled, the ufence can be
>> immediately signaled without queuing in the ordered work queue.
>> This should also reduce load on the xe ordered_wq and avoids
>> blocking the signaling of a ufence which doesn't require any serialization.
>>
>> v2: fix system_wq typo
>> v3: signal immediately instead of queuing in system_wq (Matt B)
>> v4: revert back to v2 of using a workqueue because of a locking issue
>> and the need to access a remote mm struct.
>> Use Xe's unordered_wq, which should be less congested than the
>> global one.
>>
>> Link: https://gitlab.freedesktop.org/drm/xe/kernel/-/issues/1630
>> Cc: Matthew Auld <matthew.auld at intel.com>
>> Cc: Matthew Brost <matthew.brost at intel.com>
>> Signed-off-by: Nirmoy Das <nirmoy.das at intel.com>
>> ---
>> drivers/gpu/drm/xe/xe_sync.c | 24 +++++++++++++++++++++---
>> 1 file changed, 21 insertions(+), 3 deletions(-)
>>
>> diff --git a/drivers/gpu/drm/xe/xe_sync.c b/drivers/gpu/drm/xe/xe_sync.c
>> index a90480c6aecf..7a1558c7ce09 100644
>> --- a/drivers/gpu/drm/xe/xe_sync.c
>> +++ b/drivers/gpu/drm/xe/xe_sync.c
>> @@ -92,18 +92,27 @@ static void user_fence_worker(struct work_struct *w)
>> user_fence_put(ufence);
>> }
>>
>> -static void kick_ufence(struct xe_user_fence *ufence, struct dma_fence *fence)
>> +static void kick_ufence_ordered(struct xe_user_fence *ufence,
>> + struct dma_fence *fence)
>> {
>> INIT_WORK(&ufence->worker, user_fence_worker);
>> queue_work(ufence->xe->ordered_wq, &ufence->worker);
>> dma_fence_put(fence);
>> }
>>
>> +static void kick_ufence_unordered(struct xe_user_fence *ufence,
>> + struct dma_fence *fence)
>> +{
>> + INIT_WORK(&ufence->worker, user_fence_worker);
>> + queue_work(ufence->xe->unordered_wq, &ufence->worker);
> This doesn't work; if this has been merged it needs to be reverted.
This is not merged.
>
> Consider the case where a user requests two user fence writes to the same
> address in the fashion of a seqno (i.e. the 2nd write is one more than the
> value of the first). If we use an unordered work queue, the 2nd write could
> pass the first, resulting in the seqno being incorrect.
OK, yes, this patch will break this behavior. This was meant to fix the ufence timeout issue, which now seems to be
coming from the LNL scheduling issue rather than a loaded xe->ordered_wq. I will drop this patch until there is a real need to optimize this path.
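
To spell out the failure mode for the archive, here is a minimal
userspace-style sketch (not xe code; the helper name, the polling loop
and the seqno convention are only illustrative assumptions): a waiter
that treats the ufence value as a seqno hangs if the worker writing 2
runs before the worker writing 1 on an unordered wq, because the last
store leaves the value at 1.

#include <stdint.h>

/*
 * Illustration only: userspace queued two ufence writes to the same
 * address, value 1 and then value 2, and now waits for the seqno to
 * reach 2.  If the two workers run out of order, the final store is 1,
 * so this loop never terminates.
 */
static void wait_ufence_seqno(const volatile uint64_t *addr, uint64_t seqno)
{
	while (*addr < seqno)
		;	/* a real waiter would block in an ioctl, not spin */
}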
Thanks,
Nirmoy
>
> Matt
>
>> + dma_fence_put(fence);
>> +}
>> +
>> static void user_fence_cb(struct dma_fence *fence, struct dma_fence_cb *cb)
>> {
>> struct xe_user_fence *ufence = container_of(cb, struct xe_user_fence, cb);
>>
>> - kick_ufence(ufence, fence);
>> + kick_ufence_ordered(ufence, fence);
>> }
>>
>> int xe_sync_entry_parse(struct xe_device *xe, struct xe_file *xef,
>> @@ -239,7 +248,16 @@ void xe_sync_entry_signal(struct xe_sync_entry *sync, struct dma_fence *fence)
>> err = dma_fence_add_callback(fence, &sync->ufence->cb,
>> user_fence_cb);
>> if (err == -ENOENT) {
>> - kick_ufence(sync->ufence, fence);
>> + /*
>> + * Use unordered_wq to schedule it faster and to keep
>> + * the ordered_wq less loaded, as serialization is not
>> + * needed when the fence is already signaled.
>> + *
>> + * This needs to be done from a wq here to avoid a locking
>> + * issue when a ufence addr is backed by a bo, and also
>> + * because tsk->mm needs to be NULL to call kthread_use_mm().
>> + */
>> + kick_ufence_unordered(sync->ufence, fence);
>> } else if (err) {
>> XE_WARN_ON("failed to add user fence");
>> user_fence_put(sync->ufence);
>> --
>> 2.46.0
>>
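
For reference, regarding the tsk->mm comment above: the ufence value
lands in user memory, so the worker has to adopt the submitting
process' mm with kthread_use_mm() before it can copy_to_user(), and
kthread_use_mm() may only be called from a true kernel thread, i.e.
with current->mm == NULL. That is why the write is deferred to a
workqueue worker rather than done directly in the dma-fence callback.
A simplified sketch of such a deferred write (the helper name and
error handling are mine, not the actual xe implementation):

#include <linux/types.h>
#include <linux/kthread.h>	/* kthread_use_mm(), kthread_unuse_mm() */
#include <linux/sched/mm.h>	/* mmget_not_zero(), mmput() */
#include <linux/uaccess.h>	/* copy_to_user() */
#include <linux/printk.h>

/*
 * Simplified sketch of a deferred ufence write: store a 64-bit value
 * into another process' memory from a workqueue worker, which runs as
 * a kernel thread and is therefore allowed to borrow the target mm
 * via kthread_use_mm().
 */
static void ufence_deferred_write(struct mm_struct *mm, u64 __user *addr, u64 value)
{
	if (!mmget_not_zero(mm))	/* the owning process may already be gone */
		return;

	kthread_use_mm(mm);		/* legal only when current->mm == NULL */
	if (copy_to_user(addr, &value, sizeof(value)))
		pr_debug("ufence: copy_to_user failed\n");
	kthread_unuse_mm(mm);
	mmput(mm);
}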