[PATCH v4 3/3] drm/amd/display: Use requested power state to avoid HPD WA during s0ix

Mario Limonciello mario.limonciello at amd.com
Thu Jan 6 16:30:54 UTC 2022

The WA from commit 5965280abd30 ("drm/amd/display: Apply w/a for
hard hang on HPD") causes a regression in s0ix where the system will
fail to resume properly.  This may be because an HPD was active the last
time clocks were updated but clocks didn't get updated again during s0ix.

So add an extra call to update clocks as part of the suspend routine.
In case HPD is set during this time, also check whether the call happened
during suspend so that the WA can be overridden.

Cc: Qingqing Zhuo <qingqing.zhuo at amd.com>
Reported-by: Scott Bruce <smbruce at gmail.com>
Reported-by: Chris Hixon <linux-kernel-bugs at hixontech.com>
Reported-by: spasswolf at web.de
Link: https://bugzilla.kernel.org/show_bug.cgi?id=215436
Link: https://gitlab.freedesktop.org/drm/amd/-/issues/1821
Link: https://gitlab.freedesktop.org/drm/amd/-/issues/1852
Fixes: 5965280abd30 ("drm/amd/display: Apply w/a for hard hang on HPD")
Fixes: 1bd3bc745e7f ("drm/amd/display: Extend w/a for hard hang on HPD to dcn20")
Signed-off-by: Mario Limonciello <mario.limonciello at amd.com>
---
changes from v3->v4:
 * Move into new function
 * Explicitly check that current_state is active for safety
 * Change metadata from BugLink to Link
changes from v2->v3:
 * stop depending on adev, get value of power state from display core
changes from v1->v2:
 * Add fallthrough statement
 * Extend the case to check whether the call was explicitly in s0ix, since #1852
   showed hpd_state can be set at this time too
 * Adjust commit message and title
 * Add extra commit and bug fixed to metadata
 drivers/gpu/drm/amd/display/dc/clk_mgr/dcn21/rn_clk_mgr.c | 4 ++++
 drivers/gpu/drm/amd/display/dc/core/dc.c                  | 3 +++
 2 files changed, 7 insertions(+)

diff --git a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn21/rn_clk_mgr.c b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn21/rn_clk_mgr.c
index 20974829f304..d2e1938555dc 100644
--- a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn21/rn_clk_mgr.c
+++ b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn21/rn_clk_mgr.c
@@ -129,6 +129,10 @@ static bool check_really_safe_to_lower(struct dc *dc, struct dc_state *context)
 	if (display_count > 0)
 		return false;
+	if (dc->current_state &&
+	    dc->current_state->power_state == DC_ACPI_CM_POWER_STATE_D3)
+		return true;
 	for (irq_src = DC_IRQ_SOURCE_HPD1; irq_src <= DC_IRQ_SOURCE_HPD5; irq_src++) {
 		hpd_state = dc_get_hpd_state_dcn21(dc->res_pool->irqs, irq_src);
 		if (hpd_state)
diff --git a/drivers/gpu/drm/amd/display/dc/core/dc.c b/drivers/gpu/drm/amd/display/dc/core/dc.c
index 8edbb6c70512..716e055a61c9 100644
--- a/drivers/gpu/drm/amd/display/dc/core/dc.c
+++ b/drivers/gpu/drm/amd/display/dc/core/dc.c
@@ -3302,6 +3302,9 @@ void dc_set_power_state(
+		clk_mgr_optimize_pwr_state(dc, dc->clk_mgr);
+		fallthrough;
 		ASSERT(dc->current_state->stream_count == 0);
 		/* Zero out the current context so that on resume we start with