[PATCH] drm/dp_mst: Have DP_Tx send one msg at a time
Lin, Wayne
Wayne.Lin at amd.com
Thu Jan 16 02:09:41 UTC 2020
[AMD Public Use]
I appreciate your time.
Thanks!
> -----Original Message-----
> From: Lyude Paul <lyude at redhat.com>
> Sent: Thursday, January 16, 2020 5:58 AM
> To: Lin, Wayne <Wayne.Lin at amd.com>; dri-devel at lists.freedesktop.org;
> amd-gfx at lists.freedesktop.org
> Cc: Kazlauskas, Nicholas <Nicholas.Kazlauskas at amd.com>; Wentland, Harry
> <Harry.Wentland at amd.com>; Zuo, Jerry <Jerry.Zuo at amd.com>
> Subject: Re: [PATCH] drm/dp_mst: Have DP_Tx send one msg at a time
>
> Reviewed-by: Lyude Paul <lyude at redhat.com>
>
> I will push this to drm-misc-fixes in just a moment, thanks!
>
> On Mon, 2020-01-13 at 17:36 +0800, Wayne Lin wrote:
> > [Why]
> > Noticed this while testing MST with a 4-port MST hub from
> > StarTech.com. Sometimes the monitors fail to light up and we get the
> > error message 'sideband msg build failed'.
> >
> > Looking into the AUX transactions, I found that the source sometimes
> > sends out another down request before receiving the down reply to the
> > previous one. However, the current code in drm_dp_get_one_sb_msg()
> > doesn't handle the case of interleaved replies. Hence, the source
> > can't build up the reply message completely and can't light up the
> > monitors.
> >
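> > To illustrate (a simplified, stand-alone sketch; the names below are
> > made up and this is not the actual drm_dp_get_one_sb_msg() code), a
> > down reply arrives as several sideband chunks that are appended into
> > one receive buffer, so with two requests in flight the chunks of two
> > different replies end up mixed in that single buffer:
> >
> > #include <stdbool.h>
> > #include <string.h>
> >
> > struct sb_msg_rx {
> > 	unsigned char buf[256];	/* one assembly buffer for the reply */
> > 	int len;
> > };
> >
> > /* Append one chunk of a down reply; returns true once the chunk marked
> >  * end-of-message-transaction (EOMT) has been seen, i.e. the reply is
> >  * complete.  Hypothetical helper, for illustration only. */
> > static bool build_one_reply(struct sb_msg_rx *rx,
> > 			    const unsigned char *chunk, int len, bool eomt)
> > {
> > 	if (rx->len + len > (int)sizeof(rx->buf))
> > 		return false;	/* assembly went wrong, e.g. mixed replies */
> > 	memcpy(rx->buf + rx->len, chunk, len);
> > 	rx->len += len;
> > 	/* If another down request were already outstanding, chunks of its
> > 	 * reply would be appended here too, corrupting the message. */
> > 	return eomt;
> > }
> >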
> > [How]
> > For better compatibility, restrict the source to sending out one down
> > request at a time. Add a flag, is_waiting_for_dwn_reply, to determine
> > whether the source can send out a down request immediately or not (a
> > stand-alone sketch of this gating follows the list below):
> >
> > - Check the flag before calling process_single_down_tx_qlock() to send
> >   out a msg
> > - Set the flag when a down request is successfully sent out
> > - Clear the flag when a down reply is successfully built up
> > - Clear the flag when errors are found while sending out a down request
> > - Clear the flag when errors are found while building up a down reply
> > - Clear the flag when a timeout occurs while waiting for a down reply
> > - Call drm_dp_mst_kick_tx() at the end of drm_dp_mst_wait_tx_reply() to
> >   try to send the next down request in the queue (so that queued
> >   messages still get sent out when errors occur)
> >
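> > As a stand-alone illustration of this gating (the helper names and the
> > printf-based "I/O" below are made up; the driver does the same thing
> > under mgr->qlock with its tx_msg_downq list), only one down request is
> > ever in flight, and the next queued request is sent only once the
> > previous reply has been handled:
> >
> > #include <stdbool.h>
> > #include <stdio.h>
> >
> > #define QLEN 8
> >
> > static const char *downq[QLEN];	/* pending down requests (FIFO) */
> > static int q_head, q_tail;		/* no overflow handling; demo only */
> > static bool is_waiting_for_dwn_reply;
> >
> > static void process_single_down_tx(void)
> > {
> > 	if (q_head == q_tail || is_waiting_for_dwn_reply)
> > 		return;				/* nothing queued, or still waiting */
> > 	printf("send down request: %s\n", downq[q_head++]);
> > 	is_waiting_for_dwn_reply = true;	/* set on a successful send */
> > }
> >
> > static void queue_down_tx(const char *msg)
> > {
> > 	downq[q_tail++] = msg;
> > 	process_single_down_tx();		/* only sends if not waiting */
> > }
> >
> > static void handle_down_rep(const char *msg)
> > {
> > 	printf("got down reply:    %s\n", msg);
> > 	is_waiting_for_dwn_reply = false;	/* also cleared on error/timeout */
> > 	process_single_down_tx();		/* kick the next queued request */
> > }
> >
> > int main(void)
> > {
> > 	queue_down_tx("LINK_ADDRESS");		/* sent right away */
> > 	queue_down_tx("ENUM_PATH_RESOURCES");	/* held back until the reply */
> > 	handle_down_rep("LINK_ADDRESS reply");	/* unblocks the second request */
> > 	handle_down_rep("ENUM_PATH_RESOURCES reply");
> > 	return 0;
> > }
> >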
> > Cc: Lyude Paul <lyude at redhat.com>
> > Signed-off-by: Wayne Lin <Wayne.Lin at amd.com>
> > ---
> > drivers/gpu/drm/drm_dp_mst_topology.c | 14 ++++++++++++--
> > include/drm/drm_dp_mst_helper.h | 6 ++++++
> > 2 files changed, 18 insertions(+), 2 deletions(-)
> >
> > diff --git a/drivers/gpu/drm/drm_dp_mst_topology.c b/drivers/gpu/drm/drm_dp_mst_topology.c
> > index 4395d5cc0645..3542af15387a 100644
> > --- a/drivers/gpu/drm/drm_dp_mst_topology.c
> > +++ b/drivers/gpu/drm/drm_dp_mst_topology.c
> > @@ -1195,6 +1195,8 @@ static int drm_dp_mst_wait_tx_reply(struct drm_dp_mst_branch *mstb,
> > txmsg->state == DRM_DP_SIDEBAND_TX_SENT) {
> > mstb->tx_slots[txmsg->seqno] = NULL;
> > }
> > + mgr->is_waiting_for_dwn_reply = false;
> > +
> > }
> > out:
> > if (unlikely(ret == -EIO) && drm_debug_enabled(DRM_UT_DP)) {
> > @@ -1204,6 +1206,7 @@ static int drm_dp_mst_wait_tx_reply(struct drm_dp_mst_branch *mstb,
> > }
> > mutex_unlock(&mgr->qlock);
> >
> > + drm_dp_mst_kick_tx(mgr);
> > return ret;
> > }
> >
> > @@ -2770,9 +2773,11 @@ static void process_single_down_tx_qlock(struct drm_dp_mst_topology_mgr *mgr)
> > ret = process_single_tx_qlock(mgr, txmsg, false);
> > if (ret == 1) {
> > /* txmsg is sent it should be in the slots now */
> > + mgr->is_waiting_for_dwn_reply = true;
> > list_del(&txmsg->next);
> > } else if (ret) {
> > DRM_DEBUG_KMS("failed to send msg in q %d\n", ret);
> > + mgr->is_waiting_for_dwn_reply = false;
> > list_del(&txmsg->next);
> > if (txmsg->seqno != -1)
> > txmsg->dst->tx_slots[txmsg->seqno] = NULL;
> > @@ -2812,7 +2817,8 @@ static void drm_dp_queue_down_tx(struct drm_dp_mst_topology_mgr *mgr,
> > drm_dp_mst_dump_sideband_msg_tx(&p, txmsg);
> > }
> >
> > - if (list_is_singular(&mgr->tx_msg_downq))
> > + if (list_is_singular(&mgr->tx_msg_downq) &&
> > + !mgr->is_waiting_for_dwn_reply)
> > process_single_down_tx_qlock(mgr);
> > mutex_unlock(&mgr->qlock);
> > }
> > @@ -3743,6 +3749,7 @@ static int drm_dp_mst_handle_down_rep(struct drm_dp_mst_topology_mgr *mgr)
> > mutex_lock(&mgr->qlock);
> > txmsg->state = DRM_DP_SIDEBAND_TX_RX;
> > mstb->tx_slots[slot] = NULL;
> > + mgr->is_waiting_for_dwn_reply = false;
> > mutex_unlock(&mgr->qlock);
> >
> > wake_up_all(&mgr->tx_waitq);
> > @@ -3752,6 +3759,7 @@ static int drm_dp_mst_handle_down_rep(struct drm_dp_mst_topology_mgr *mgr)
> > no_msg:
> > drm_dp_mst_topology_put_mstb(mstb);
> > clear_down_rep_recv:
> > + mutex_lock(&mgr->qlock);
> > + mgr->is_waiting_for_dwn_reply = false;
> > + mutex_unlock(&mgr->qlock);
> > memset(&mgr->down_rep_recv, 0, sizeof(struct drm_dp_sideband_msg_rx));
> >
> > return 0;
> > @@ -4591,7 +4601,7 @@ static void drm_dp_tx_work(struct work_struct *work)
> > struct drm_dp_mst_topology_mgr *mgr = container_of(work, struct drm_dp_mst_topology_mgr, tx_work);
> >
> > mutex_lock(&mgr->qlock);
> > - if (!list_empty(&mgr->tx_msg_downq))
> > + if (!list_empty(&mgr->tx_msg_downq) && !mgr->is_waiting_for_dwn_reply)
> > process_single_down_tx_qlock(mgr);
> > mutex_unlock(&mgr->qlock);
> > }
> > diff --git a/include/drm/drm_dp_mst_helper.h b/include/drm/drm_dp_mst_helper.h
> > index 942575de86a0..d0b45468135a 100644
> > --- a/include/drm/drm_dp_mst_helper.h
> > +++ b/include/drm/drm_dp_mst_helper.h
> > @@ -610,6 +610,12 @@ struct drm_dp_mst_topology_mgr {
> > * &drm_dp_sideband_msg_tx.state once they are queued
> > */
> > struct mutex qlock;
> > +
> > + /**
> > + * @is_waiting_for_dwn_reply: indicate whether is waiting for down reply
> > + */
> > + bool is_waiting_for_dwn_reply;
> > +
> > /**
> > * @tx_msg_downq: List of pending down replies.
> > */
> --
> Cheers,
> Lyude Paul
--
Best regards,
Wayne Lin