[Intel-gfx] [PATCH] drm/dp_mst: Clear MSG_RDY flag before sending new message
Wayne Lin
Wayne.Lin at amd.com
Tue Apr 18 06:09:05 UTC 2023
[Why & How]
The sequence for collecting down_reply/up_request from the source's
perspective should be:

Request_n -> repeat (read a partial reply of Request_n -> clear the message
ready flag to ack DPRX that the reply has been received) until all partial
replies for Request_n are received -> send Request_n+1.
While assembling partial reply packets, reading out the DPCD DOWN_REP
sideband MSG buffer and clearing the DOWN_REP_MSG_RDY flag should be
wrapped up as one complete operation for reading out a reply packet.
Kicking off a new request before clearing DOWN_REP_MSG_RDY is risky:
if the reply to the new request overwrites the DPRX DOWN_REP sideband
MSG buffer before the source writes the ack that clears
DOWN_REP_MSG_RDY, the source unintentionally flushes the reply to the
new request. Handle the up request in the same way.
In drm_dp_mst_hpd_irq(), we don't clear the MSG_RDY flags before
calling drm_dp_mst_kick_tx(). Fix that.
Signed-off-by: Wayne Lin <Wayne.Lin at amd.com>
Cc: stable at vger.kernel.org
---
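Note: as an illustration of the ordering described above, here is a minimal
sketch (not part of this patch) of what a driver-side ESI handler looks like
once drm_dp_mst_hpd_irq() acks the MSG_RDY bits itself; the function name
my_handle_esi() and the surrounding plumbing are assumptions made up for the
example only:

#include <drm/display/drm_dp_helper.h>
#include <drm/display/drm_dp_mst_helper.h>

/* Illustrative sketch only; not part of this patch. */
static void my_handle_esi(struct drm_dp_aux *aux,
			  struct drm_dp_mst_topology_mgr *mgr)
{
	u8 esi[4] = {};
	bool handled = false;

	/* Read DP_SINK_COUNT_ESI..DP_LINK_SERVICE_IRQ_VECTOR_ESI0. */
	if (drm_dp_dpcd_read(aux, DP_SINK_COUNT_ESI, esi, 4) != 4)
		return;

	/*
	 * With this patch, drm_dp_mst_hpd_irq() reads out the pending
	 * sideband reply and clears DOWN_REP_MSG_RDY/UP_REQ_MSG_RDY
	 * before kicking off the next sideband request.
	 */
	drm_dp_mst_hpd_irq(mgr, esi, &handled);

	/* MSG_RDY ack is done in drm; only ack the remaining bits. */
	esi[1] &= ~(DP_DOWN_REP_MSG_RDY | DP_UP_REQ_MSG_RDY);
	if (esi[1])
		drm_dp_dpcd_write(aux, DP_DEVICE_SERVICE_IRQ_VECTOR_ESI0,
				  &esi[1], 1);
}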
.../gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c | 2 ++
drivers/gpu/drm/display/drm_dp_mst_topology.c | 22 +++++++++++++++++++
drivers/gpu/drm/i915/display/intel_dp.c | 3 +++
drivers/gpu/drm/nouveau/dispnv50/disp.c | 2 ++
4 files changed, 29 insertions(+)
diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
index 77277d90b6e2..5313a5656598 100644
--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
@@ -3166,6 +3166,8 @@ static void dm_handle_mst_sideband_msg(struct amdgpu_dm_connector *aconnector)
for (retry = 0; retry < 3; retry++) {
uint8_t wret;
+ /* MSG_RDY ack is done in drm */
+ esi[1] &= ~(DP_DOWN_REP_MSG_RDY | DP_UP_REQ_MSG_RDY);
wret = drm_dp_dpcd_write(
&aconnector->dm_dp_aux.aux,
dpcd_addr + 1,
diff --git a/drivers/gpu/drm/display/drm_dp_mst_topology.c b/drivers/gpu/drm/display/drm_dp_mst_topology.c
index 51a46689cda7..02aad713c67c 100644
--- a/drivers/gpu/drm/display/drm_dp_mst_topology.c
+++ b/drivers/gpu/drm/display/drm_dp_mst_topology.c
@@ -4054,6 +4054,9 @@ int drm_dp_mst_hpd_irq(struct drm_dp_mst_topology_mgr *mgr, u8 *esi, bool *handl
{
int ret = 0;
int sc;
+ const int tosend = 1;
+ int retries = 0;
+ u8 buf = 0;
*handled = false;
sc = DP_GET_SINK_COUNT(esi[0]);
@@ -4072,6 +4075,25 @@ int drm_dp_mst_hpd_irq(struct drm_dp_mst_topology_mgr *mgr, u8 *esi, bool *handl
*handled = true;
}
+ if (*handled) {
+ buf = esi[1] & (DP_DOWN_REP_MSG_RDY | DP_UP_REQ_MSG_RDY);
+ do {
+ ret = drm_dp_dpcd_write(mgr->aux,
+ DP_DEVICE_SERVICE_IRQ_VECTOR_ESI0,
+ &buf,
+ tosend);
+
+ if (ret == tosend)
+ break;
+
+ retries++;
+ } while (retries < 5);
+
+ if (ret != tosend)
+ drm_dbg_kms(mgr->dev, "failed to write dpcd 0x%x\n",
+ DP_DEVICE_SERVICE_IRQ_VECTOR_ESI0);
+ }
+
drm_dp_mst_kick_tx(mgr);
return ret;
}
diff --git a/drivers/gpu/drm/i915/display/intel_dp.c b/drivers/gpu/drm/i915/display/intel_dp.c
index bf80f296a8fd..abec3de38b66 100644
--- a/drivers/gpu/drm/i915/display/intel_dp.c
+++ b/drivers/gpu/drm/i915/display/intel_dp.c
@@ -3939,6 +3939,9 @@ intel_dp_check_mst_status(struct intel_dp *intel_dp)
if (!memchr_inv(ack, 0, sizeof(ack)))
break;
+ /* MSG_RDY ack is done in drm */
+ ack[1] &= ~(DP_DOWN_REP_MSG_RDY | DP_UP_REQ_MSG_RDY);
+
if (!intel_dp_ack_sink_irq_esi(intel_dp, ack))
drm_dbg_kms(&i915->drm, "Failed to ack ESI\n");
}
diff --git a/drivers/gpu/drm/nouveau/dispnv50/disp.c b/drivers/gpu/drm/nouveau/dispnv50/disp.c
index edcb2529b402..e905987104ed 100644
--- a/drivers/gpu/drm/nouveau/dispnv50/disp.c
+++ b/drivers/gpu/drm/nouveau/dispnv50/disp.c
@@ -1336,6 +1336,8 @@ nv50_mstm_service(struct nouveau_drm *drm,
if (!handled)
break;
+ /* MSG_RDY ack is done in drm */
+ esi[1] &= ~(DP_DOWN_REP_MSG_RDY | DP_UP_REQ_MSG_RDY);
rc = drm_dp_dpcd_write(aux, DP_SINK_COUNT_ESI + 1, &esi[1],
3);
if (rc != 3) {
--
2.37.3