[Intel-gfx] [PATCH] drm/i915: Calculate common rates and max lane count in Long pulse handler

Manasi Navare manasi.d.navare at intel.com
Wed Dec 7 23:10:47 UTC 2016


On Thu, Dec 08, 2016 at 12:04:21AM +0200, Jani Nikula wrote:
> On Wed, 07 Dec 2016, Manasi Navare <manasi.d.navare at intel.com> wrote:
> > Hi Jani,
> >
> > Actually this patch is no longer valid. I have included a different interface
> > change in the link training patch series, which calculates max_sink_link_rate
> > and max_sink_lane_count in the long pulse handler instead of recomputing
> > them every time a caller needs common_rates and lane_count. These max values
> > are then used to compute common_rates when a caller requests it, and
> > likewise for the lane count.
> >
> > Could you please review the latest patch:
> 
> I'm really not amused by the firehose rate of permutations of the
> patches. I've literally lost count of how many versions and variations I've
> reviewed. *sigh*.
> 
> If you repeatedly ping me on the mailing list and IRC to review, and
> specifically say you'll rebase the other stuff on this one, then please
> *also* specifically say when you are no longer pursuing a patch.
> 
> Jani.
>

Sorry about that, Jani. I moved it into the link training patch series
instead because of feedback I received from Daniel on IRC, and I just
assumed you must have seen it. I will specifically call it out next time.
Could you please review the patch as mentioned below?
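
For reference, the core of that interface change looks roughly like the
sketch below. This is simplified and untested; the exact names are in the
patch linked further down, and intel_dp_max_sink_rate() here is only a
stand-in for whatever helper the series actually uses:

	/* In the long pulse handler, cache the sink's maximums once: */
	intel_dp->max_sink_link_rate = intel_dp_max_sink_rate(intel_dp);
	intel_dp->max_sink_lane_count = drm_dp_max_lane_count(intel_dp->dpcd);

	/*
	 * Callers then derive common_rates and the lane count on demand from
	 * these cached maximums, and on link training failure the fallback
	 * code only has to lower max_sink_link_rate/max_sink_lane_count to
	 * advertise the reduced capabilities everywhere.
	 */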

Regards
Manasi
 
> >
> > https://patchwork.freedesktop.org/patch/125744/
> >
> > Also, this simplifies the fallback logic on link training
> > failure, since I can update the max sink link rate and max sink lane count
> > to match the fallback values, thus lowering the max capabilities advertised
> > by the sink.
> >
> > Manasi
> >
> > On Wed, Dec 07, 2016 at 11:25:29PM +0200, Jani Nikula wrote:
> >> On Fri, 02 Dec 2016, Manasi Navare <manasi.d.navare at intel.com> wrote:
> >> > The link rates common to source and sink, as well as the maximum
> >> > supported lane count based on source and sink capabilities, should be
> >> > computed only once on hotplug and then reused whenever they are requested.
> >> > This patch adds an array of common rates and a max lane count as
> >> > intel_dp members. They are calculated only once in the long pulse handler
> >> > and then used on request in compute_config and other functions.
> >> >
> >> > Cc: Jani Nikula <jani.nikula at linux.intel.com>
> >> > Cc: Daniel Vetter <daniel.vetter at intel.com>
> >> > Cc: Ville Syrjala <ville.syrjala at linux.intel.com>
> >> > Signed-off-by: Manasi Navare <manasi.d.navare at intel.com>
> >> > ---
> >> >  drivers/gpu/drm/i915/intel_dp.c  | 38 ++++++++++++++++----------------------
> >> >  drivers/gpu/drm/i915/intel_drv.h |  5 +++++
> >> >  2 files changed, 21 insertions(+), 22 deletions(-)
> >> >
> >> > diff --git a/drivers/gpu/drm/i915/intel_dp.c b/drivers/gpu/drm/i915/intel_dp.c
> >> > index 9dfbde4..de37807 100644
> >> > --- a/drivers/gpu/drm/i915/intel_dp.c
> >> > +++ b/drivers/gpu/drm/i915/intel_dp.c
> >> > @@ -274,8 +274,7 @@ static int intersect_rates(const int *source_rates, int source_len,
> >> >  	return k;
> >> >  }
> >> >  
> >> > -static int intel_dp_common_rates(struct intel_dp *intel_dp,
> >> > -				 int *common_rates)
> >> > +static int intel_dp_common_rates(struct intel_dp *intel_dp)
> >> 
> >> This here is a bad interface change. After this, you can only use the
> >> function to update intel_dp->common_rates. However, this doesn't finish
> >> the job, and leaves it up to the caller to set intel_dp->common_len. The
> >> sole remaining call site of intel_dp_common_rates() should set the alarm
> >> bells ringing.
> >> 
> >> Please keep it as a helper, passing in common_rates. There's follow-up
> >> work to be done on top, but we can leave that for later.
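
That is, keeping the out-parameter, roughly this minimal sketch (untested,
rate gathering elided):

	static int intel_dp_common_rates(struct intel_dp *intel_dp,
					 int *common_rates)
	{
		const int *source_rates, *sink_rates;
		int source_len, sink_len;

		/* ... gather source/sink rates as the existing code does ... */

		return intersect_rates(source_rates, source_len,
				       sink_rates, sink_len,
				       common_rates);
	}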
> >> 
> >> >  {
> >> >  	const int *source_rates, *sink_rates;
> >> >  	int source_len, sink_len;
> >> > @@ -285,7 +284,7 @@ static int intel_dp_common_rates(struct intel_dp *intel_dp,
> >> >  
> >> >  	return intersect_rates(source_rates, source_len,
> >> >  			       sink_rates, sink_len,
> >> > -			       common_rates);
> >> > +			       intel_dp->common_rates);
> >> >  }
> >> >  
> >> >  static enum drm_mode_status
> >> > @@ -312,7 +311,7 @@ static int intel_dp_common_rates(struct intel_dp *intel_dp,
> >> >  	}
> >> >  
> >> >  	max_link_clock = intel_dp_max_link_rate(intel_dp);
> >> > -	max_lanes = intel_dp_max_lane_count(intel_dp);
> >> > +	max_lanes = intel_dp->max_lane_count;
> >> >  
> >> >  	max_rate = intel_dp_max_data_rate(max_link_clock, max_lanes);
> >> >  	mode_rate = intel_dp_link_required(target_clock, 18);
> >> > @@ -1430,8 +1429,7 @@ static void snprintf_int_array(char *str, size_t len,
> >> >  static void intel_dp_print_rates(struct intel_dp *intel_dp)
> >> >  {
> >> >  	const int *source_rates, *sink_rates;
> >> > -	int source_len, sink_len, common_len;
> >> > -	int common_rates[DP_MAX_SUPPORTED_RATES];
> >> > +	int source_len, sink_len;
> >> >  	char str[128]; /* FIXME: too big for stack? */
> >> >  
> >> >  	if ((drm_debug & DRM_UT_KMS) == 0)
> >> > @@ -1445,8 +1443,8 @@ static void intel_dp_print_rates(struct intel_dp *intel_dp)
> >> >  	snprintf_int_array(str, sizeof(str), sink_rates, sink_len);
> >> >  	DRM_DEBUG_KMS("sink rates: %s\n", str);
> >> >  
> >> > -	common_len = intel_dp_common_rates(intel_dp, common_rates);
> >> > -	snprintf_int_array(str, sizeof(str), common_rates, common_len);
> >> > +	snprintf_int_array(str, sizeof(str), intel_dp->common_rates,
> >> > +			   intel_dp->common_len);
> >> >  	DRM_DEBUG_KMS("common rates: %s\n", str);
> >> >  }
> >> >  
> >> > @@ -1495,14 +1493,12 @@ static int rate_to_index(int find, const int *rates)
> >> >  int
> >> >  intel_dp_max_link_rate(struct intel_dp *intel_dp)
> >> >  {
> >> > -	int rates[DP_MAX_SUPPORTED_RATES] = {};
> >> > -	int len;
> >> > +	int len = intel_dp->common_len;
> >> >  
> >> > -	len = intel_dp_common_rates(intel_dp, rates);
> >> >  	if (WARN_ON(len <= 0))
> >> >  		return 162000;
> >> >  
> >> > -	return rates[len - 1];
> >> > +	return intel_dp->common_rates[len - 1];
> >> >  }
> >> >  
> >> >  int intel_dp_rate_select(struct intel_dp *intel_dp, int rate)
> >> > @@ -1550,22 +1546,18 @@ static int intel_dp_compute_bpp(struct intel_dp *intel_dp,
> >> >  	struct intel_connector *intel_connector = intel_dp->attached_connector;
> >> >  	int lane_count, clock;
> >> >  	int min_lane_count = 1;
> >> > -	int max_lane_count = intel_dp_max_lane_count(intel_dp);
> >> > +	int max_lane_count = intel_dp->max_lane_count;
> >> >  	/* Conveniently, the link BW constants become indices with a shift...*/
> >> >  	int min_clock = 0;
> >> >  	int max_clock;
> >> >  	int bpp, mode_rate;
> >> >  	int link_avail, link_clock;
> >> > -	int common_rates[DP_MAX_SUPPORTED_RATES] = {};
> >> > -	int common_len;
> >> >  	uint8_t link_bw, rate_select;
> >> >  
> >> > -	common_len = intel_dp_common_rates(intel_dp, common_rates);
> >> > -
> >> >  	/* No common link rates between source and sink */
> >> > -	WARN_ON(common_len <= 0);
> >> > +	WARN_ON(intel_dp->common_len <= 0);
> >> 
> >> This would be better right after setting intel_dp->common_len.
> >> 
> >> >  
> >> > -	max_clock = common_len - 1;
> >> > +	max_clock = intel_dp->common_len - 1;
> >> >  
> >> >  	if (HAS_PCH_SPLIT(dev_priv) && !HAS_DDI(dev_priv) && port != PORT_A)
> >> >  		pipe_config->has_pch_encoder = true;
> >> > @@ -1597,7 +1589,7 @@ static int intel_dp_compute_bpp(struct intel_dp *intel_dp,
> >> >  
> >> >  	DRM_DEBUG_KMS("DP link computation with max lane count %i "
> >> >  		      "max bw %d pixel clock %iKHz\n",
> >> > -		      max_lane_count, common_rates[max_clock],
> >> > +		      max_lane_count, intel_dp->common_rates[max_clock],
> >> >  		      adjusted_mode->crtc_clock);
> >> >  
> >> >  	/* Walk through all bpp values. Luckily they're all nicely spaced with 2
> >> > @@ -1633,7 +1625,7 @@ static int intel_dp_compute_bpp(struct intel_dp *intel_dp,
> >> >  				lane_count <= max_lane_count;
> >> >  				lane_count <<= 1) {
> >> >  
> >> > -				link_clock = common_rates[clock];
> >> > +				link_clock = intel_dp->common_rates[clock];
> >> >  				link_avail = intel_dp_max_data_rate(link_clock,
> >> >  								    lane_count);
> >> >  
> >> > @@ -1663,7 +1655,7 @@ static int intel_dp_compute_bpp(struct intel_dp *intel_dp,
> >> >  	pipe_config->lane_count = lane_count;
> >> >  
> >> >  	pipe_config->pipe_bpp = bpp;
> >> > -	pipe_config->port_clock = common_rates[clock];
> >> > +	pipe_config->port_clock = intel_dp->common_rates[clock];
> >> >  
> >> >  	intel_dp_compute_rate(intel_dp, pipe_config->port_clock,
> >> >  			      &link_bw, &rate_select);
> >> > @@ -4400,7 +4392,9 @@ static bool intel_digital_port_connected(struct drm_i915_private *dev_priv,
> >> >  		      yesno(intel_dp_source_supports_hbr2(intel_dp)),
> >> >  		      yesno(drm_dp_tps3_supported(intel_dp->dpcd)));
> >> >  
> >> > +	intel_dp->common_len = intel_dp_common_rates(intel_dp);
> >> 
> >> See how this looks bad, with the updating of the length and the contents
> >> divided between caller and callee?
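
Together with the earlier comment, both fields would then be updated at a
single point in the long pulse handler, with the WARN_ON right after the
assignment (untested sketch):

	intel_dp->common_len = intel_dp_common_rates(intel_dp,
						     intel_dp->common_rates);
	/* no common link rates between source and sink */
	WARN_ON(intel_dp->common_len <= 0);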
> >> 
> >> >  	intel_dp_print_rates(intel_dp);
> >> > +	intel_dp->max_lane_count = intel_dp_max_lane_count(intel_dp);
> >> >  
> >> >  	intel_dp_read_desc(intel_dp);
> >> >  
> >> > diff --git a/drivers/gpu/drm/i915/intel_drv.h b/drivers/gpu/drm/i915/intel_drv.h
> >> > index 1d126c2..7c8885e 100644
> >> > --- a/drivers/gpu/drm/i915/intel_drv.h
> >> > +++ b/drivers/gpu/drm/i915/intel_drv.h
> >> > @@ -906,6 +906,11 @@ struct intel_dp {
> >> >  	/* sink rates as reported by DP_SUPPORTED_LINK_RATES */
> >> >  	uint8_t num_sink_rates;
> >> >  	int sink_rates[DP_MAX_SUPPORTED_RATES];
> >> > +	/* supported link rates common between source and sink */
> >> > +	int common_rates[DP_MAX_SUPPORTED_RATES];
> >> > +	int common_len;
> >> > +	/* Maximum supported lane count common between source and sink */
> >> > +	uint8_t max_lane_count;
> >> >  	/* sink or branch descriptor */
> >> >  	struct intel_dp_desc desc;
> >> >  	struct drm_dp_aux aux;
> >> 
> >> -- 
> >> Jani Nikula, Intel Open Source Technology Center
> 
> -- 
> Jani Nikula, Intel Open Source Technology Center

