[PATCH 01/26] drm/dp_mst: Move link address dumping into a function

Lyude Paul <lyude@redhat.com>
Mon Aug 26 21:51:26 UTC 2019


*sigh* finally have some time to go through these reviews

jfyi: I realized after looking over this patch that it's not actually needed -
I had been planning on using drm_dp_dump_link_address() for other things, but
ended up deciding to go with something that dumps replies in a format identical
to the one we use for dumping DOWN requests. IMHO this patch still makes things
look nicer, though, so I'll probably keep it.
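
Rough sketch of the direction I mean, in case it helps - this is hypothetical,
not the final code, and the helper name and format string are illustrative only:

/* Hypothetical sketch: dump the reply one line per port, in the same
 * layout we'd use when dumping DOWN requests, so both halves of a
 * sideband transaction read the same in the logs.
 */
static void
drm_dp_dump_link_address_reply(const struct drm_dp_link_address_ack_reply *reply)
{
	int i;

	for (i = 0; i < reply->nports; i++) {
		const struct drm_dp_link_addr_reply_port *p = &reply->ports[i];

		DRM_DEBUG_KMS("port=%d input=%d pdt=%d pn=%d dpcd_rev=%02x mcs=%d ddps=%d ldps=%d sdp=%d/%d\n",
			      i, p->input_port, p->peer_device_type,
			      p->port_number, p->dpcd_revision, p->mcs,
			      p->ddps, p->legacy_device_plug_status,
			      p->num_sdp_streams, p->num_sdp_stream_sinks);
	}
}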

I'm assuming I can still count your r-b as valid with a change to the commit
description?

On Thu, 2019-08-08 at 21:53 +0200, Daniel Vetter wrote:
> On Wed, Jul 17, 2019 at 09:42:24PM -0400, Lyude Paul wrote:
> > Since we're about to be calling this from multiple places. Also it makes
> > things easier to read!
> > 
> > Cc: Juston Li <juston.li@intel.com>
> > Cc: Imre Deak <imre.deak@intel.com>
> > Cc: Ville Syrjälä <ville.syrjala@linux.intel.com>
> > Cc: Harry Wentland <hwentlan@amd.com>
> > Signed-off-by: Lyude Paul <lyude@redhat.com>
> 
> Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
> 
> > ---
> >  drivers/gpu/drm/drm_dp_mst_topology.c | 35 ++++++++++++++++++---------
> >  1 file changed, 23 insertions(+), 12 deletions(-)
> > 
> > diff --git a/drivers/gpu/drm/drm_dp_mst_topology.c b/drivers/gpu/drm/drm_dp_mst_topology.c
> > index 0984b9a34d55..998081b9b205 100644
> > --- a/drivers/gpu/drm/drm_dp_mst_topology.c
> > +++ b/drivers/gpu/drm/drm_dp_mst_topology.c
> > @@ -2013,6 +2013,28 @@ static void drm_dp_queue_down_tx(struct drm_dp_mst_topology_mgr *mgr,
> >  	mutex_unlock(&mgr->qlock);
> >  }
> >  
> > +static void
> > +drm_dp_dump_link_address(struct drm_dp_link_address_ack_reply *reply)
> > +{
> > +	struct drm_dp_link_addr_reply_port *port_reply;
> > +	int i;
> > +
> > +	for (i = 0; i < reply->nports; i++) {
> > +		port_reply = &reply->ports[i];
> > +		DRM_DEBUG_KMS("port %d: input %d, pdt: %d, pn: %d, dpcd_rev: %02x, mcs: %d, ddps: %d, ldps %d, sdp %d/%d\n",
> > +			      i,
> > +			      port_reply->input_port,
> > +			      port_reply->peer_device_type,
> > +			      port_reply->port_number,
> > +			      port_reply->dpcd_revision,
> > +			      port_reply->mcs,
> > +			      port_reply->ddps,
> > +			      port_reply->legacy_device_plug_status,
> > +			      port_reply->num_sdp_streams,
> > +			      port_reply->num_sdp_stream_sinks);
> > +	}
> > +}
> > +
> >  static void drm_dp_send_link_address(struct drm_dp_mst_topology_mgr *mgr,
> >  				     struct drm_dp_mst_branch *mstb)
> >  {
> > @@ -2038,18 +2060,7 @@ static void drm_dp_send_link_address(struct drm_dp_mst_topology_mgr *mgr,
> >  			DRM_DEBUG_KMS("link address nak received\n");
> >  		} else {
> >  			DRM_DEBUG_KMS("link address reply: %d\n", txmsg->reply.u.link_addr.nports);
> > -			for (i = 0; i < txmsg->reply.u.link_addr.nports; i++) {
> > -				DRM_DEBUG_KMS("port %d: input %d, pdt: %d, pn: %d, dpcd_rev: %02x, mcs: %d, ddps: %d, ldps %d, sdp %d/%d\n", i,
> > -				       txmsg->reply.u.link_addr.ports[i].input_port,
> > -				       txmsg->reply.u.link_addr.ports[i].peer_device_type,
> > -				       txmsg->reply.u.link_addr.ports[i].port_number,
> > -				       txmsg->reply.u.link_addr.ports[i].dpcd_revision,
> > -				       txmsg->reply.u.link_addr.ports[i].mcs,
> > -				       txmsg->reply.u.link_addr.ports[i].ddps,
> > -				       txmsg->reply.u.link_addr.ports[i].legacy_device_plug_status,
> > -				       txmsg->reply.u.link_addr.ports[i].num_sdp_streams,
> > -				       txmsg->reply.u.link_addr.ports[i].num_sdp_stream_sinks);
> > -			}
> > +			drm_dp_dump_link_address(&txmsg->reply.u.link_addr);
> >  
> >  			drm_dp_check_mstb_guid(mstb, txmsg->reply.u.link_addr.guid);
> >  
> > -- 
> > 2.21.0
> > 
-- 
Cheers,
	Lyude Paul


