[PATCH] drm/amd/display: Clear dm_state for fast updates

Christian König <christian.koenig@amd.com>
Mon Jul 27 19:28:15 UTC 2020


On 27.07.20 at 16:05, Kazlauskas, Nicholas wrote:
> On 2020-07-27 9:39 a.m., Christian König wrote:
>> On 27.07.20 at 07:40, Mazin Rezk wrote:
>>> This patch fixes a race condition that causes a use-after-free during
>>> amdgpu_dm_atomic_commit_tail. This can occur when 2 non-blocking commits
>>> are requested and the second one finishes before the first. Essentially,
>>> this bug occurs when the following sequence of events happens:
>>>
>>> 1. Non-blocking commit #1 is requested w/ a new dm_state #1 and is
>>> deferred to the workqueue.
>>>
>>> 2. Non-blocking commit #2 is requested w/ a new dm_state #2 and is
>>> deferred to the workqueue.
>>>
>>> 3. Commit #2 starts before commit #1, dm_state #1 is used in the
>>> commit_tail and commit #2 completes, freeing dm_state #1.
>>>
>>> 4. Commit #1 starts after commit #2 completes, uses the freed dm_state
>>> #1 and dereferences a freelist pointer while setting the context.
>>
>> Well I only have a one mile high view on this, but why don't you let 
>> the work items execute in order?
>>
>> That would be better anyway because this way we don't trigger a cache
>> line ping-pong between CPUs.
>>
>> Christian.
>
> We use the DRM helpers for managing drm_atomic_commit_state and those 
> helpers internally push non-blocking commit work into the system 
> unbound work queue.
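
For context, the non-blocking path in the helpers looks roughly like the
sketch below (a simplified paraphrase of drm_atomic_helper_commit() in
drm_atomic_helper.c from around this time, with setup, locking and error
handling elided, so treat the details as approximate). The commit work is
handed straight to system_unbound_wq, which makes no ordering guarantee
between two queued commits:

    /* simplified paraphrase of drivers/gpu/drm/drm_atomic_helper.c */
    static void commit_work(struct work_struct *work)
    {
            struct drm_atomic_state *state = container_of(work,
                                                          struct drm_atomic_state,
                                                          commit_work);
            commit_tail(state);
    }

    int drm_atomic_helper_commit(struct drm_device *dev,
                                 struct drm_atomic_state *state,
                                 bool nonblock)
    {
            /* ... setup_commit, prepare planes, swap state ... */
            INIT_WORK(&state->commit_work, commit_work);

            if (nonblock)
                    queue_work(system_unbound_wq, &state->commit_work);
            else
                    commit_tail(state);

            return 0;
    }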

Mhm, well if you send those helper atomic commits in the order A, B and 
they execute in the order B, A, then I would call that a bug :)

> While we could duplicate that code with nothing but the workqueue 
> changed, that isn't something I'd really like to maintain going 
> forward.

I'm not talking about duplicating the code; I'm talking about fixing the 
helpers. I don't know that code well, but from the outside it sounds 
like a bug there.

And executing work items in the order they are submitted is trivial.
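
A minimal sketch of the kind of change being suggested, assuming an ordered
workqueue would be acceptable here; "drm_ordered_commit_wq" and the init
function below are made-up names for illustration, not existing kernel
symbols. alloc_ordered_workqueue() creates a queue that executes one item at
a time in submission order, so commit #2 could never run ahead of commit #1:

    #include <linux/workqueue.h>

    static struct workqueue_struct *drm_ordered_commit_wq;

    static int __init drm_ordered_commit_wq_init(void)
    {
            /* ordered workqueues run at most one work item at a time, FIFO */
            drm_ordered_commit_wq = alloc_ordered_workqueue("drm-atomic-commit", 0);
            return drm_ordered_commit_wq ? 0 : -ENOMEM;
    }

    /* ... and in the non-blocking path of drm_atomic_helper_commit(): */
            if (nonblock)
                    queue_work(drm_ordered_commit_wq, &state->commit_work);

Of course a single global ordered queue would also serialize commits on
completely unrelated CRTCs and devices, so a real fix would presumably want
a per-device or per-CRTC ordered queue rather than one shared queue.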

Has anybody pinged Daniel or other people familiar with the helper code 
about it?

Regards,
Christian.

>
> Regards,
> Nicholas Kazlauskas
>
>>
>>>
>>> Since this bug has only been spotted with fast commits, this patch fixes
>>> the bug by clearing the dm_state instead of using the old dc_state for
>>> fast updates. In addition, since dm_state is only used for its dc_state
>>> and amdgpu_dm_atomic_commit_tail will retain the dc_state if none is
>>> found, removing the dm_state should not have any consequences in fast
>>> updates.
>>>
>>> This use-after-free bug has existed for a while now, but only caused a
>>> noticeable issue starting from 5.7-rc1 due to 3202fa62f ("slub: relocate
>>> freelist pointer to middle of object") moving the freelist pointer from
>>> dm_state->base (which was unused) to dm_state->context (which is
>>> dereferenced).
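
For context on the freelist-pointer detail above: dm_atomic_state is a small
two-pointer wrapper, roughly the layout paraphrased below from amdgpu_dm.h,
so relocating SLUB's freelist pointer from the start of a freed object to
its middle moved it from the unused base field onto context, exactly the
pointer that the commit_tail path dereferences.

    /* rough paraphrase of the struct from amdgpu_dm.h */
    struct dm_atomic_state {
            struct drm_private_state base;  /* offset 0: old freelist pointer location, unused by the stale access */
            struct dc_state *context;       /* middle of the object: overlaps the relocated freelist pointer */
    };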
>>>
>>> Bugzilla: https://bugzilla.kernel.org/show_bug.cgi?id=207383
>>> Fixes: bd200d190f45 ("drm/amd/display: Don't replace the dc_state for fast updates")
>>> Reported-by: Duncan <1i5t5.duncan@cox.net>
>>> Signed-off-by: Mazin Rezk <mnrzk@protonmail.com>
>>> ---
>>>   .../gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c | 36 ++++++++++++++-----
>>>   1 file changed, 27 insertions(+), 9 deletions(-)
>>>
>>> diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
>>> index 86ffa0c2880f..710edc70e37e 100644
>>> --- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
>>> +++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
>>> @@ -8717,20 +8717,38 @@ static int amdgpu_dm_atomic_check(struct drm_device *dev,
>>>            * the same resource. If we have a new DC context as part of
>>>            * the DM atomic state from validation we need to free it and
>>>            * retain the existing one instead.
>>> +         *
>>> +         * Furthermore, since the DM atomic state only contains the DC
>>> +         * context and can safely be annulled, we can free the state
>>> +         * and clear the associated private object now to free
>>> +         * some memory and avoid a possible use-after-free later.
>>>            */
>>> -        struct dm_atomic_state *new_dm_state, *old_dm_state;
>>>
>>> -        new_dm_state = dm_atomic_get_new_state(state);
>>> -        old_dm_state = dm_atomic_get_old_state(state);
>>> +        for (i = 0; i < state->num_private_objs; i++) {
>>> +            struct drm_private_obj *obj = state->private_objs[i].ptr;
>>>
>>> -        if (new_dm_state && old_dm_state) {
>>> -            if (new_dm_state->context)
>>> -                dc_release_state(new_dm_state->context);
>>> +            if (obj->funcs == adev->dm.atomic_obj.funcs) {
>>> +                int j = state->num_private_objs-1;
>>>
>>> -            new_dm_state->context = old_dm_state->context;
>>> +                dm_atomic_destroy_state(obj,
>>> +                        state->private_objs[i].state);
>>> +
>>> +                /* If i is not at the end of the array then the
>>> +                 * last element needs to be moved to where i was
>>> +                 * before the array can safely be truncated.
>>> +                 */
>>> +                if (i != j)
>>> +                    state->private_objs[i] =
>>> +                        state->private_objs[j];
>>>
>>> -            if (old_dm_state->context)
>>> -                dc_retain_state(old_dm_state->context);
>>> +                state->private_objs[j].ptr = NULL;
>>> +                state->private_objs[j].state = NULL;
>>> +                state->private_objs[j].old_state = NULL;
>>> +                state->private_objs[j].new_state = NULL;
>>> +
>>> +                state->num_private_objs = j;
>>> +                break;
>>> +            }
>>>           }
>>>       }
>>>
>>> -- 
>>> 2.27.0
>>>
>>
>


