[PATCH v5 15/27] drm/scheduler: Fix hang when sched_entity released
Andrey Grodzovsky
andrey.grodzovsky at amd.com
Fri May 7 16:32:22 UTC 2021
On 2021-05-07 12:29 p.m., Daniel Vetter wrote:
> On Fri, Apr 30, 2021 at 12:10:57PM -0400, Andrey Grodzovsky wrote:
>>
>>
>> On 2021-04-30 2:47 a.m., Christian König wrote:
>>>
>>>
>>> Am 29.04.21 um 19:06 schrieb Andrey Grodzovsky:
>>>>
>>>>
>>>> On 2021-04-29 3:18 a.m., Christian König wrote:
>>>>> I need to take another look at this part when I don't have a
>>>>> massive headache any more.
>>>>>
>>>>> Maybe split the patch set up into different parts, something like:
>>>>> 1. Adding general infrastructure.
>>>>> 2. Making sure all memory is unpopulated.
>>>>> 3. Job and fence handling
>>>>
>>>> I am not sure you mean this patch here, maybe another one?
>>>> Also note you already RBed it.
>>>
>>> No, what I meant was to send out the patches before this one as #1 and #2.
>>>
>>> That is the easier stuff which can easily go into the drm-misc-next or
>>> amd-staging-drm-next branch.
>>>
>>> The scheduler stuff certainly needs to go into drm-misc-next.
>>>
>>> Christian.
>>
>> Got you. I am fine with it. What we have here is working hot-unplug
>> code, but one with potential use-after-free of MMIO ranges from the zombie
>> device. The follow-up patches after this one are all about preventing
>> that, so the patch set up to and including this patch is functional
>> on its own. While it is necessary to solve the above issue, it has
>> complications, as can be seen from the discussion with Daniel on a later
>> patch in this series. Still, in my opinion it's better to roll out some
>> initial support for hot-unplug without use-after-free protection than to
>> have no support for hot-unplug at all. It will also make the merge
>> work easier, as I need to constantly rebase the patches on top of the
>> latest kernel and solve new regressions.
>>
>> Daniel - given the arguments above, can you give your opinion on this
>> approach?
>
> I'm all for incrementally landing this, because it's really hard and
> tricky. We might need to go back to some of the decisions, clarify
> things more, or spend more headaches pondering how to fix all the parts
> in whatever way works best to make sure there are no nasty races right
> across hot-unplug if you're unlucky enough.
>
> But yeah better aim for something and then readjust than bikeshed forever
> out of tree.
>
> Cheers, Daniel
Thanks, I will send out a V6 limited in scope up to this patch and address
any relevant comments.
Andrey
>
>>
>> Andrey
>>>
>>>>
>>>> Andrey
>>>>
>>>>>
>>>>> Christian.
>>>>>
>>>>> Am 28.04.21 um 17:11 schrieb Andrey Grodzovsky:
>>>>>> Problem: If the scheduler is already stopped by the time sched_entity
>>>>>> is released and the entity's job_queue is not empty, I encountered
>>>>>> a hang in drm_sched_entity_flush. This is because
>>>>>> drm_sched_entity_is_idle never returns true.
>>>>>>
>>>>>> Fix: In drm_sched_fini detach all sched_entities from the
>>>>>> scheduler's run queues. This will satisfy drm_sched_entity_is_idle.
>>>>>> Also wake up all the processes stuck in sched_entity flushing,
>>>>>> as the scheduler main thread which would wake them up is stopped by now.
>>>>>>
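
(Side note: the hang is on the flush-side wait. Simplified from memory and
not the exact upstream code, drm_sched_entity_flush() does roughly the
following, which is why marking the entity stopped and signaling
sched->job_scheduled is enough to unblock it:

long drm_sched_entity_flush(struct drm_sched_entity *entity, long timeout)
{
	struct drm_gpu_scheduler *sched = entity->rq->sched;

	/*
	 * Sleep on sched->job_scheduled until the entity is idle.  With the
	 * scheduler thread already stopped nothing signals job_scheduled and
	 * the job_queue never drains, hence the hang.
	 */
	if (current->flags & PF_EXITING) {
		if (timeout)
			timeout = wait_event_timeout(sched->job_scheduled,
					drm_sched_entity_is_idle(entity),
					timeout);
	} else {
		wait_event_killable(sched->job_scheduled,
				    drm_sched_entity_is_idle(entity));
	}

	return timeout;
}

With entity->stopped now checked in drm_sched_entity_is_idle() and
wake_up_all(&sched->job_scheduled) called from drm_sched_fini(), the wait
condition becomes true and the waiters are released.)
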
>>>>>> v2:
>>>>>> Reverse the order of drm_sched_rq_remove_entity and marking
>>>>>> s_entity as stopped to prevent reinsertion back into the rq due
>>>>>> to a race.
>>>>>>
>>>>>> v3:
>>>>>> Drop drm_sched_rq_remove_entity, only modify entity->stopped
>>>>>> and check for it in drm_sched_entity_is_idle
>>>>>>
>>>>>> Signed-off-by: Andrey Grodzovsky <andrey.grodzovsky at amd.com>
>>>>>> Reviewed-by: Christian König <christian.koenig at amd.com>
>>>>>> ---
>>>>>> drivers/gpu/drm/scheduler/sched_entity.c | 3 ++-
>>>>>> drivers/gpu/drm/scheduler/sched_main.c | 24 ++++++++++++++++++++++++
>>>>>> 2 files changed, 26 insertions(+), 1 deletion(-)
>>>>>>
>>>>>> diff --git a/drivers/gpu/drm/scheduler/sched_entity.c b/drivers/gpu/drm/scheduler/sched_entity.c
>>>>>> index f0790e9471d1..cb58f692dad9 100644
>>>>>> --- a/drivers/gpu/drm/scheduler/sched_entity.c
>>>>>> +++ b/drivers/gpu/drm/scheduler/sched_entity.c
>>>>>> @@ -116,7 +116,8 @@ static bool drm_sched_entity_is_idle(struct drm_sched_entity *entity)
>>>>>> rmb(); /* for list_empty to work without lock */
>>>>>> if (list_empty(&entity->list) ||
>>>>>> - spsc_queue_count(&entity->job_queue) == 0)
>>>>>> + spsc_queue_count(&entity->job_queue) == 0 ||
>>>>>> + entity->stopped)
>>>>>> return true;
>>>>>> return false;
>>>>>> diff --git a/drivers/gpu/drm/scheduler/sched_main.c b/drivers/gpu/drm/scheduler/sched_main.c
>>>>>> index 908b0b56032d..ba087354d0a8 100644
>>>>>> --- a/drivers/gpu/drm/scheduler/sched_main.c
>>>>>> +++ b/drivers/gpu/drm/scheduler/sched_main.c
>>>>>> @@ -897,9 +897,33 @@ EXPORT_SYMBOL(drm_sched_init);
>>>>>> */
>>>>>> void drm_sched_fini(struct drm_gpu_scheduler *sched)
>>>>>> {
>>>>>> + struct drm_sched_entity *s_entity;
>>>>>> + int i;
>>>>>> +
>>>>>> if (sched->thread)
>>>>>> kthread_stop(sched->thread);
>>>>>> + for (i = DRM_SCHED_PRIORITY_COUNT - 1; i >= DRM_SCHED_PRIORITY_MIN; i--) {
>>>>>> + struct drm_sched_rq *rq = &sched->sched_rq[i];
>>>>>> +
>>>>>> + if (!rq)
>>>>>> + continue;
>>>>>> +
>>>>>> + spin_lock(&rq->lock);
>>>>>> + list_for_each_entry(s_entity, &rq->entities, list)
>>>>>> + /*
>>>>>> + * Prevents reinsertion and marks job_queue as idle,
>>>>>> + * it will be removed from rq in drm_sched_entity_fini
>>>>>> + * eventually
>>>>>> + */
>>>>>> + s_entity->stopped = true;
>>>>>> + spin_unlock(&rq->lock);
>>>>>> +
>>>>>> + }
>>>>>> +
>>>>>> + /* Wakeup everyone stuck in drm_sched_entity_flush for this scheduler */
>>>>>> + wake_up_all(&sched->job_scheduled);
>>>>>> +
>>>>>> /* Confirm no work left behind accessing device structures */
>>>>>> cancel_delayed_work_sync(&sched->work_tdr);
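
(For reference, the scenario the fix covers is roughly the teardown ordering
below; a hypothetical driver sketch, with ring->sched and ctx->entity as
made-up names, not code from this series:

	/* Device is going away (e.g. hot-unplug): the scheduler dies first */
	drm_sched_fini(&ring->sched);           /* stops the scheduler thread */

	/* Later a context still holding a non-empty entity is released */
	drm_sched_entity_destroy(&ctx->entity); /* used to hang in flush */

Before this patch the second call could block forever in
drm_sched_entity_flush(), since nothing was left to drain the entity's
job_queue or to signal job_scheduled.)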
>>>>>
>>>
>