[PATCH v5 15/27] drm/scheduler: Fix hang when sched_entity released
Christian König
ckoenig.leichtzumerken at gmail.com
Fri Apr 30 06:47:32 UTC 2021
On 29.04.21 at 19:06, Andrey Grodzovsky wrote:
>
>
> On 2021-04-29 3:18 a.m., Christian König wrote:
>> I need to take another look at this part when I don't have a massive
>> headache any more.
>>
>> Maybe split the patch set up into different parts, something like:
>> 1. Adding general infrastructure.
>> 2. Making sure all memory is unpolluted.
>> 3. Job and fence handling
>
> I am not sure you mean this patch here, maybe another one ?
> Also note you already RBed it.
No, what I meant was to send out the patches before this one as #1 and #2.
That is the easier stuff, which can easily go into the drm-misc-next or
amd-staging-drm-next branch.
The scheduler stuff certainly needs to go into drm-misc-next.
Christian.
>
> Andrey
>
>>
>> Christian.
>>
>> On 28.04.21 at 17:11, Andrey Grodzovsky wrote:
>>> Problem: If the scheduler is already stopped by the time the sched_entity
>>> is released and the entity's job_queue is not empty, I encountered
>>> a hang in drm_sched_entity_flush. This is because
>>> drm_sched_entity_is_idle
>>> never becomes true.
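
For reference, the wait that hangs is this one (a condensed sketch of
drm_sched_entity_flush() in sched_entity.c; return-value bookkeeping
trimmed). Once the scheduler thread is stopped, nothing signals
sched->job_scheduled any more, so without the entity->stopped check added
below the wait condition can never become true:

	sched = entity->rq->sched;

	if (current->flags & PF_EXITING) {
		/* Process is exiting: only a bounded wait is allowed. */
		if (timeout)
			ret = wait_event_timeout(sched->job_scheduled,
						 drm_sched_entity_is_idle(entity),
						 timeout);
	} else {
		/* Normal teardown: wait until the entity drains. */
		wait_event_killable(sched->job_scheduled,
				    drm_sched_entity_is_idle(entity));
	}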
>>>
>>> Fix: In drm_sched_fini detach all sched_entities from the
>>> scheduler's run queues. This will satisfy drm_sched_entity_is_idle.
>>> Also wake up all processes stuck in sched_entity flushing,
>>> since the scheduler main thread that would wake them is stopped by now.
>>>
>>> v2:
>>> Reverse the order of drm_sched_rq_remove_entity and marking
>>> s_entity as stopped to prevent reinsertion back into the rq due
>>> to a race.
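
The push side being raced against looks roughly like this (condensed from
drm_sched_entity_push_job() in sched_entity.c; tracing and score accounting
omitted). A first job pushed to an empty job_queue re-adds the entity to
its rq, and only the entity->stopped check prevents that, which is why the
marking must happen before (or instead of) the removal:

	first = spsc_queue_push(&entity->job_queue, &sched_job->queue_node);

	/* The first job wakes up the scheduler. */
	if (first) {
		/* Re-add the entity to its run queue. */
		spin_lock(&entity->rq_lock);
		if (entity->stopped) {
			spin_unlock(&entity->rq_lock);
			DRM_ERROR("Trying to push to a killed entity\n");
			return;
		}
		drm_sched_rq_add_entity(entity->rq, entity);
		spin_unlock(&entity->rq_lock);
		drm_sched_wakeup(entity->rq->sched);
	}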
>>>
>>> v3:
>>> Drop drm_sched_rq_remove_entity; only set entity->stopped
>>> and check for it in drm_sched_entity_is_idle.
>>>
>>> Signed-off-by: Andrey Grodzovsky <andrey.grodzovsky at amd.com>
>>> Reviewed-by: Christian König <christian.koenig at amd.com>
>>> ---
>>> drivers/gpu/drm/scheduler/sched_entity.c |  3 ++-
>>> drivers/gpu/drm/scheduler/sched_main.c   | 24 ++++++++++++++++++++++++
>>> 2 files changed, 26 insertions(+), 1 deletion(-)
>>>
>>> diff --git a/drivers/gpu/drm/scheduler/sched_entity.c b/drivers/gpu/drm/scheduler/sched_entity.c
>>> index f0790e9471d1..cb58f692dad9 100644
>>> --- a/drivers/gpu/drm/scheduler/sched_entity.c
>>> +++ b/drivers/gpu/drm/scheduler/sched_entity.c
>>> @@ -116,7 +116,8 @@ static bool drm_sched_entity_is_idle(struct drm_sched_entity *entity)
>>>  	rmb(); /* for list_empty to work without lock */
>>>
>>>  	if (list_empty(&entity->list) ||
>>> -	    spsc_queue_count(&entity->job_queue) == 0)
>>> +	    spsc_queue_count(&entity->job_queue) == 0 ||
>>> +	    entity->stopped)
>>>  		return true;
>>>
>>>  	return false;
>>> diff --git a/drivers/gpu/drm/scheduler/sched_main.c b/drivers/gpu/drm/scheduler/sched_main.c
>>> index 908b0b56032d..ba087354d0a8 100644
>>> --- a/drivers/gpu/drm/scheduler/sched_main.c
>>> +++ b/drivers/gpu/drm/scheduler/sched_main.c
>>> @@ -897,9 +897,33 @@ EXPORT_SYMBOL(drm_sched_init);
>>>   */
>>>  void drm_sched_fini(struct drm_gpu_scheduler *sched)
>>>  {
>>> +	struct drm_sched_entity *s_entity;
>>> +	int i;
>>> +
>>>  	if (sched->thread)
>>>  		kthread_stop(sched->thread);
>>>
>>> +	for (i = DRM_SCHED_PRIORITY_COUNT - 1; i >= DRM_SCHED_PRIORITY_MIN; i--) {
>>> +		struct drm_sched_rq *rq = &sched->sched_rq[i];
>>> +
>>> +		if (!rq)
>>> +			continue;
>>> +
>>> +		spin_lock(&rq->lock);
>>> +		list_for_each_entry(s_entity, &rq->entities, list)
>>> +			/*
>>> +			 * Prevents reinsertion and marks the job_queue
>>> +			 * as idle; the entity will be removed from the
>>> +			 * rq in drm_sched_entity_fini() eventually.
>>> +			 */
>>> +			s_entity->stopped = true;
>>> +		spin_unlock(&rq->lock);
>>> +	}
>>> +
>>> +	/* Wakeup everyone stuck in drm_sched_entity_flush for this scheduler */
>>> +	wake_up_all(&sched->job_scheduled);
>>> +
>>>  	/* Confirm no work left behind accessing device structures */
>>>  	cancel_delayed_work_sync(&sched->work_tdr);