[PATCH] drm/amd/display: avoid sleeping in atomic context while creating new context or state
Christian König
ckoenig.leichtzumerken at gmail.com
Fri Jun 1 10:10:46 UTC 2018
Ok, that works for me as well.
Please always check whether it is really necessary before adding any
GFP_ATOMIC allocation, because that is rather invasive and should be avoided.
Christian.
Am 01.06.2018 um 11:56 schrieb S, Shirish:
>
> The V2 of this patch has already been reviewed by Harry.
> The change I made in dc_create() is no longer applicable.
>
> Regards,
> Shirish S
> On 5/31/2018 11:35 PM, Christian König wrote:
>> Am 30.05.2018 um 18:03 schrieb Harry Wentland:
>>> On 2018-05-30 06:17 AM, Shirish S wrote:
>>>> This patch fixes the warning messages caused by calling a sleeping
>>>> function in atomic context, as below:
>>>>
>>>> BUG: sleeping function called from invalid context at mm/slab.h:419
>>>> in_atomic(): 1, irqs_disabled(): 1, pid: 5, name: kworker/u4:0
>>>> CPU: 1 PID: 5 Comm: kworker/u4:0 Tainted: G W 4.14.35 #941
>>>> Workqueue: events_unbound commit_work
>>>> Call Trace:
>>>> dump_stack+0x4d/0x63
>>>> ___might_sleep+0x11f/0x12e
>>>> kmem_cache_alloc_trace+0x41/0xea
>>>> dc_create_state+0x1f/0x30
>>>> dc_commit_updates_for_stream+0x73/0x4cf
>>>> ? amdgpu_get_crtc_scanoutpos+0x82/0x16b
>>>> amdgpu_dm_do_flip+0x239/0x298
>>>> amdgpu_dm_commit_planes.isra.23+0x379/0x54b
>>>> ? dc_commit_state+0x3da/0x404
>>>> amdgpu_dm_atomic_commit_tail+0x4fc/0x5d2
>>>> ? wait_for_common+0x5b/0x69
>>>> commit_tail+0x42/0x64
>>>> process_one_work+0x1b0/0x314
>>>> worker_thread+0x1cb/0x2c1
>>>> ? create_worker+0x1da/0x1da
>>>> kthread+0x156/0x15e
>>>> ? kthread_flush_work+0xea/0xea
>>>> ret_from_fork+0x22/0x40
>>>>
>>>> Signed-off-by: Shirish S <shirish.s at amd.com>
>>>> ---
>>>> drivers/gpu/drm/amd/display/dc/core/dc.c | 4 ++--
>>>> 1 file changed, 2 insertions(+), 2 deletions(-)
>>>>
>>>> diff --git a/drivers/gpu/drm/amd/display/dc/core/dc.c b/drivers/gpu/drm/amd/display/dc/core/dc.c
>>>> index 33149ed..d62206f 100644
>>>> --- a/drivers/gpu/drm/amd/display/dc/core/dc.c
>>>> +++ b/drivers/gpu/drm/amd/display/dc/core/dc.c
>>>> @@ -588,7 +588,7 @@ static void disable_dangling_plane(struct dc *dc, struct dc_state *context)
>>>> struct dc *dc_create(const struct dc_init_data *init_params)
>>>> {
>>>> - struct dc *dc = kzalloc(sizeof(*dc), GFP_KERNEL);
>>>> + struct dc *dc = kzalloc(sizeof(*dc), GFP_ATOMIC);
>>> Are you sure this one can be called in atomic context?
>>>
>>> If so then everything in construct() would also need GFP_ATOMIC.
>>
>> Well, the backtrace is quite obvious, but I agree that the change
>> still looks fishy to me as well.
>>
>> Using GFP_ATOMIC should only be a last resort when nothing else
>> helps, but here it looks more like we are misusing a spinlock where
>> a mutex or semaphore would be more appropriate.
>>
>> Where exactly does the context become atomic in the call trace?
>>
>> Christian.
>>
>>>
>>> Harry
>>>
>>>> unsigned int full_pipe_count;
>>>> if (NULL == dc)
>>>> @@ -937,7 +937,7 @@ bool dc_post_update_surfaces_to_stream(struct dc *dc)
>>>> struct dc_state *dc_create_state(void)
>>>> {
>>>> struct dc_state *context = kzalloc(sizeof(struct dc_state),
>>>> - GFP_KERNEL);
>>>> + GFP_ATOMIC);
>>>> if (!context)
>>>> return NULL;
>>>>
>>> _______________________________________________
>>> amd-gfx mailing list
>>> amd-gfx at lists.freedesktop.org
>>> https://lists.freedesktop.org/mailman/listinfo/amd-gfx
>>
>