[PATCH AUTOSEL 6.0 27/30] drm/amd/display: Remove wrong pipe control lock
Sasha Levin
sashal at kernel.org
Sun Nov 6 17:03:39 UTC 2022
From: Rodrigo Siqueira <Rodrigo.Siqueira at amd.com>
[ Upstream commit ca08a1725d0d78efca8d2dbdbce5ea70355da0f2 ]
When using a device based on DCN32/321, we have an issue where a second
4k@60Hz display does not light up, and the system becomes unresponsive
for a few minutes. In the debug process, it was possible to see a hang
in the function dcn20_post_unlock_program_front_end in this part:

	for (j = 0; j < TIMEOUT_FOR_PIPE_ENABLE_MS*1000
			&& hubp->funcs->hubp_is_flip_pending(hubp); j++)
		mdelay(1);
	}
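For context on the timing: if the flip never clears, this loop runs for its
full bound, and with mdelay(1) that bound is roughly a thousand times longer
than the name TIMEOUT_FOR_PIPE_ENABLE_MS suggests. Below is a minimal
userspace sketch of the arithmetic, assuming TIMEOUT_FOR_PIPE_ENABLE_MS is
100 (an assumed value for illustration, not taken from this patch):

	/* Worst-case wait-time arithmetic for the loop above; plain userspace
	 * C, not kernel code.  TIMEOUT_FOR_PIPE_ENABLE_MS = 100 is assumed. */
	#include <stdio.h>

	#define TIMEOUT_FOR_PIPE_ENABLE_MS 100UL	/* assumed value */

	int main(void)
	{
		unsigned long iterations = TIMEOUT_FOR_PIPE_ENABLE_MS * 1000UL;

		/* mdelay(1) burns ~1 ms per iteration: 100,000 ms = 100 s
		 * of busy-waiting if the flip never completes. */
		printf("mdelay(1) worst case: %lu ms\n", iterations);

		/* udelay(1) burns ~1 us per iteration: 100,000 us = 100 ms,
		 * the budget the constant's name implies (see the second
		 * hunk of the diff below). */
		printf("udelay(1) worst case: %lu us\n", iterations);

		return 0;
	}

A per-pipe busy-wait on that scale would line up with the "unresponsive for
a few minutes" symptom and the soft lockup reported next.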
Here hubp_is_flip_pending keeps reporting a pending flip, which is a
symptom of a pipe hang. Additionally, after a few minutes the dmesg log
shows this message:
BUG: soft lockup - CPU#4 stuck for 26s!
...
[ +0.000003] dcn20_post_unlock_program_front_end+0x112/0x340 [amdgpu]
[ +0.000171] dc_commit_state_no_check+0x63d/0xbf0 [amdgpu]
[ +0.000155] ? dc_validate_global_state+0x358/0x3d0 [amdgpu]
[ +0.000154] dc_commit_state+0xe2/0xf0 [amdgpu]
This confirmed the hypothesis that we had a pipe hanging somewhere. Next,
checking the ftrace entries revealed the weird sequence below:
[..]
2) | dcn10_lock_all_pipes [amdgpu]() {
2) 0.120 us | optc1_is_tg_enabled [amdgpu]();
2) | dcn20_pipe_control_lock [amdgpu]() {
2) | dc_dmub_srv_clear_inbox0_ack [amdgpu]() {
2) 0.121 us | amdgpu_dm_dmub_reg_write [amdgpu]();
2) 0.551 us | }
2) | dc_dmub_srv_send_inbox0_cmd [amdgpu]() {
2) 0.110 us | amdgpu_dm_dmub_reg_write [amdgpu]();
2) 0.511 us | }
2) | dc_dmub_srv_wait_for_inbox0_ack [amdgpu]() {
2) 0.110 us | amdgpu_dm_dmub_reg_read [amdgpu]();
2) 0.110 us | amdgpu_dm_dmub_reg_read [amdgpu]();
2) 0.110 us | amdgpu_dm_dmub_reg_read [amdgpu]();
2) 0.110 us | amdgpu_dm_dmub_reg_read [amdgpu]();
2) 0.110 us | amdgpu_dm_dmub_reg_read [amdgpu]();
2) 0.110 us | amdgpu_dm_dmub_reg_read [amdgpu]();
2) 0.110 us | amdgpu_dm_dmub_reg_read [amdgpu]();
[..]
We are not expected to read from the dmub registers so many times or for
so long. From the trace log, it was possible to identify that
dcn20_pipe_control_lock was triggering the dmub operation when it was
unnecessary, causing the hang. This commit drops the unnecessary dmub
code and, consequently, fixes the issue of the second display not
lighting up.
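For illustration only, the handshake visible in the trace (clear the ack,
write the inbox0 command, poll for the ack) can be modeled with the toy
snippet below. This is not the amdgpu/DMUB code: the registers and poll
bound are made up, and the comments merely map each step to the traced
functions.

	#include <stdbool.h>
	#include <stdint.h>
	#include <stdio.h>

	/* Simulated mailbox registers, standing in for the real MMIO. */
	static uint32_t inbox0_msg;
	static uint32_t inbox0_ack;

	static bool inbox0_send_and_wait_ack(uint32_t cmd, unsigned int max_polls)
	{
		unsigned int i;

		inbox0_ack = 0;		/* ~ dc_dmub_srv_clear_inbox0_ack() */
		inbox0_msg = cmd;	/* ~ dc_dmub_srv_send_inbox0_cmd()  */

		/* ~ dc_dmub_srv_wait_for_inbox0_ack(): poll until the firmware
		 * acks.  When nothing ever acks, this is where the long run of
		 * amdgpu_dm_dmub_reg_read() calls in the trace comes from. */
		for (i = 0; i < max_polls; i++)
			if (inbox0_ack)
				return true;

		return false;
	}

	int main(void)
	{
		/* Nothing acks the command in this toy model, so it times out. */
		printf("acked: %d\n", inbox0_send_and_wait_ack(0x1, 1000));
		return 0;
	}

In the real driver this wait is reached, as the trace suggests, through the
dmub_hw_lock_mgr_inbox0_cmd() call that the hunk below removes, which is
why dropping the SUBVP_MAIN branch also removes the stall.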
Tested-by: Daniel Wheeler <daniel.wheeler at amd.com>
Acked-by: Qingqing Zhuo <qingqing.zhuo at amd.com>
Signed-off-by: Rodrigo Siqueira <Rodrigo.Siqueira at amd.com>
Signed-off-by: Alex Deucher <alexander.deucher at amd.com>
Signed-off-by: Sasha Levin <sashal at kernel.org>
---
 drivers/gpu/drm/amd/display/dc/dcn20/dcn20_hwseq.c | 12 +-----------
 1 file changed, 1 insertion(+), 11 deletions(-)

diff --git a/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_hwseq.c b/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_hwseq.c
index 598ce872a8d7..0f30df523fdf 100644
--- a/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_hwseq.c
+++ b/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_hwseq.c
@@ -1262,16 +1262,6 @@ void dcn20_pipe_control_lock(
 				lock,
 				&hw_locks,
 				&inst_flags);
-	} else if (pipe->stream && pipe->stream->mall_stream_config.type == SUBVP_MAIN) {
-		union dmub_inbox0_cmd_lock_hw hw_lock_cmd = { 0 };
-		hw_lock_cmd.bits.command_code = DMUB_INBOX0_CMD__HW_LOCK;
-		hw_lock_cmd.bits.hw_lock_client = HW_LOCK_CLIENT_DRIVER;
-		hw_lock_cmd.bits.lock_pipe = 1;
-		hw_lock_cmd.bits.otg_inst = pipe->stream_res.tg->inst;
-		hw_lock_cmd.bits.lock = lock;
-		if (!lock)
-			hw_lock_cmd.bits.should_release = 1;
-		dmub_hw_lock_mgr_inbox0_cmd(dc->ctx->dmub_srv, hw_lock_cmd);
 	} else if (pipe->plane_state != NULL && pipe->plane_state->triplebuffer_flips) {
 		if (lock)
 			pipe->stream_res.tg->funcs->triplebuffer_lock(pipe->stream_res.tg);
@@ -1848,7 +1838,7 @@ void dcn20_post_unlock_program_front_end(
 		for (j = 0; j < TIMEOUT_FOR_PIPE_ENABLE_MS*1000
 				&& hubp->funcs->hubp_is_flip_pending(hubp); j++)
-			mdelay(1);
+			udelay(1);
 	}
 }
--
2.35.1