[Intel-gfx] [PATCH v6 13/28] drm/i915/dp: Compute DSC pipe config in atomic check

Ville Syrjälä ville.syrjala at linux.intel.com
Tue Oct 30 11:46:55 UTC 2018


On Mon, Oct 29, 2018 at 04:08:43PM -0700, Manasi Navare wrote:
> On Mon, Oct 29, 2018 at 10:34:58PM +0200, Ville Syrjälä wrote:
> > On Mon, Oct 29, 2018 at 10:30:39PM +0200, Ville Syrjälä wrote:
> > > On Wed, Oct 24, 2018 at 03:28:25PM -0700, Manasi Navare wrote:
> > > > DSC params like the enable, compressed bpp, slice count and
> > > > dsc_split are added to the intel_crtc_state. These parameters
> > > > are set based on the requested mode and available link parameters
> > > > during the pipe configuration in atomic check phase.
> > > > These values are then later used to populate the remaining DSC
> > > > and RC parameters before enabling DSC in atomic commit.
> > > > 
> > > > v9:
> > > > * Rebase on top of drm-tip that now uses fast_narrow config
> > > > for edp (Manasi)
> > > > v8:
> > > > * Check for DSC bpc not 0 (Manasi)
> > > > 
> > > > v7:
> > > > * Fix indentation in compute_m_n (Manasi)
> > > > 
> > > > v6 (From Gaurav):
> > > > * Remove function call of intel_dp_compute_dsc_params() and
> > > > invoke intel_dp_compute_dsc_params() in the patch where
> > > > it is defined to fix compilation warning (Gaurav)
> > > > 
> > > > v5:
> > > > Add drm_dsc_cfg in intel_crtc_state (Manasi)
> > > > 
> > > > v4:
> > > > * Rebase on refactoring of intel_dp_compute_config on tip (Manasi)
> > > > * Add a comment why we need to check PSR while enabling DSC (Gaurav)
> > > > 
> > > > v3:
> > > > * Check PPR > max_cdclock to use 2 VDSC instances (Ville)
> > > > 
> > > > v2:
> > > > * Add if-else for eDP/DP (Gaurav)
> > > > 
> > > > Cc: Jani Nikula <jani.nikula at linux.intel.com>
> > > > Cc: Ville Syrjala <ville.syrjala at linux.intel.com>
> > > > Cc: Anusha Srivatsa <anusha.srivatsa at intel.com>
> > > > Cc: Gaurav K Singh <gaurav.k.singh at intel.com>
> > > > Signed-off-by: Manasi Navare <manasi.d.navare at intel.com>
> > > > Reviewed-by: Anusha Srivatsa <anusha.srivatsa at intel.com>
> > > > Acked-by: Jani Nikula <jani.nikula at linux.intel.com>
> > > > ---
> > > >  drivers/gpu/drm/i915/intel_display.c |  20 +++-
> > > >  drivers/gpu/drm/i915/intel_display.h |   3 +-
> > > >  drivers/gpu/drm/i915/intel_dp.c      | 170 ++++++++++++++++++++++-----
> > > >  drivers/gpu/drm/i915/intel_dp_mst.c  |   2 +-
> > > >  4 files changed, 155 insertions(+), 40 deletions(-)
> > > > 
> > > > diff --git a/drivers/gpu/drm/i915/intel_display.c b/drivers/gpu/drm/i915/intel_display.c
> > > > index fe045abb6472..18737bd82b68 100644
> > > > --- a/drivers/gpu/drm/i915/intel_display.c
> > > > +++ b/drivers/gpu/drm/i915/intel_display.c
> > > > @@ -6434,7 +6434,7 @@ static int ironlake_fdi_compute_config(struct intel_crtc *intel_crtc,
> > > >  
> > > >  	pipe_config->fdi_lanes = lane;
> > > >  
> > > > -	intel_link_compute_m_n(pipe_config->pipe_bpp, lane, fdi_dotclock,
> > > > +	intel_link_compute_m_n(pipe_config->pipe_bpp, 0, lane, fdi_dotclock,
> > > >  			       link_bw, &pipe_config->fdi_m_n, false);
> > > >  
> > > >  	ret = ironlake_check_fdi_lanes(dev, intel_crtc->pipe, pipe_config);
> > > > @@ -6671,17 +6671,25 @@ static void compute_m_n(unsigned int m, unsigned int n,
> > > >  }
> > > >  
> > > >  void
> > > > -intel_link_compute_m_n(int bits_per_pixel, int nlanes,
> > > > +intel_link_compute_m_n(int bits_per_pixel, uint16_t compressed_bpp,
> > > > +		       int nlanes,
> > > >  		       int pixel_clock, int link_clock,
> > > >  		       struct intel_link_m_n *m_n,
> > > >  		       bool constant_n)
> > > >  {
> > > >  	m_n->tu = 64;
> > > >  
> > > > -	compute_m_n(bits_per_pixel * pixel_clock,
> > > > -		    link_clock * nlanes * 8,
> > > > -		    &m_n->gmch_m, &m_n->gmch_n,
> > > > -		    constant_n);
> > > > +	/* For DSC, Data M/N calculation uses compressed BPP */
> > > > +	if (compressed_bpp)
> > > > +		compute_m_n(compressed_bpp * pixel_clock,
> > > > +			    link_clock * nlanes * 8,
> > > > +			    &m_n->gmch_m, &m_n->gmch_n,
> > > > +			    constant_n);
> > > > +	else
> > > > +		compute_m_n(bits_per_pixel * pixel_clock,
> > > > +			    link_clock * nlanes * 8,
> > > > +			    &m_n->gmch_m, &m_n->gmch_n,
> > > > +			    constant_n);
> > > >  
> > > >  	compute_m_n(pixel_clock, link_clock,
> > > >  		    &m_n->link_m, &m_n->link_n,
> > > > diff --git a/drivers/gpu/drm/i915/intel_display.h b/drivers/gpu/drm/i915/intel_display.h
> > > > index 5d50decbcbb5..b0b23e1e9392 100644
> > > > --- a/drivers/gpu/drm/i915/intel_display.h
> > > > +++ b/drivers/gpu/drm/i915/intel_display.h
> > > > @@ -407,7 +407,8 @@ struct intel_link_m_n {
> > > >  	     (__i)++) \
> > > >  		for_each_if(plane)
> > > >  
> > > > -void intel_link_compute_m_n(int bpp, int nlanes,
> > > > +void intel_link_compute_m_n(int bpp, uint16_t compressed_bpp,
> > > > +			    int nlanes,
> > > >  			    int pixel_clock, int link_clock,
> > > >  			    struct intel_link_m_n *m_n,
> > > >  			    bool constant_n);
> > > > diff --git a/drivers/gpu/drm/i915/intel_dp.c b/drivers/gpu/drm/i915/intel_dp.c
> > > > index 6f66a38ba0b2..a88f9371dd32 100644
> > > > --- a/drivers/gpu/drm/i915/intel_dp.c
> > > > +++ b/drivers/gpu/drm/i915/intel_dp.c
> > > > @@ -47,6 +47,8 @@
> > > >  
> > > >  /* DP DSC small joiner has 2 FIFOs each of 640 x 6 bytes */
> > > >  #define DP_DSC_MAX_SMALL_JOINER_RAM_BUFFER	61440
> > > > +#define DP_DSC_MIN_SUPPORTED_BPC		8
> > > > +#define DP_DSC_MAX_SUPPORTED_BPC		10
> > > >  
> > > >  /* DP DSC throughput values used for slice count calculations KPixels/s */
> > > >  #define DP_DSC_PEAK_PIXEL_RATE			2720000
> > > > @@ -1924,6 +1926,16 @@ static int intel_dp_compute_bpp(struct intel_dp *intel_dp,
> > > >  		}
> > > >  	}
> > > >  
> > > > +	/* If DSC is supported, use the max value reported by panel */
> > > > +	if (INTEL_GEN(dev_priv) >= 10 &&
> > > > +	    drm_dp_sink_supports_dsc(intel_dp->dsc_dpcd)) {
> > > > +		bpc = min_t(u8,
> > > > +			    drm_dp_dsc_sink_max_color_depth(intel_dp->dsc_dpcd),
> > > > +			    DP_DSC_MAX_SUPPORTED_BPC);
> > > > +		if (bpc)
> > > > +			bpp = 3 * bpc;
> > > 
> > > This seems like it should be 'bpp = min(bpp, 3*bpc)'. 
> > > Otherwise we may bump the bpp above a limit we already established earlier.
> > > 
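Ie. something like this (completely untested, just to illustrate the
clamp I mean, reusing the code from the hunk above):

	/*
	 * If DSC is supported, use the max bpc reported by the panel,
	 * but don't let it raise bpp above the limit established above.
	 */
	if (INTEL_GEN(dev_priv) >= 10 &&
	    drm_dp_sink_supports_dsc(intel_dp->dsc_dpcd)) {
		bpc = min_t(u8,
			    drm_dp_dsc_sink_max_color_depth(intel_dp->dsc_dpcd),
			    DP_DSC_MAX_SUPPORTED_BPC);
		if (bpc)
			bpp = min(bpp, 3 * bpc);
	}
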
> > > > +	}
> > > > +
> > > >  	return bpp;
> > > >  }
> > > >  
> > > > @@ -1984,14 +1996,11 @@ intel_dp_compute_link_config_wide(struct intel_dp *intel_dp,
> > > >  				link_clock = intel_dp->common_rates[clock];
> > > >  				link_avail = intel_dp_max_data_rate(link_clock,
> > > >  								    lane_count);
> > > > -
> > > > -				if (mode_rate <= link_avail) {
> > > > -					pipe_config->lane_count = lane_count;
> > > > -					pipe_config->pipe_bpp = bpp;
> > > > -					pipe_config->port_clock = link_clock;
> > > > -
> > > > +				pipe_config->lane_count = lane_count;
> > > > +				pipe_config->pipe_bpp = bpp;
> > > > +				pipe_config->port_clock = link_clock;
> > > > +				if (mode_rate <= link_avail)
> > > >  					return true;
> > > 
> > > Why do we need to assign these if we don't accept the configuration?
> > > 
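Ie. I'd have expected the assignments to stay where they were, done only
once we actually accept the configuration:

	if (mode_rate <= link_avail) {
		pipe_config->lane_count = lane_count;
		pipe_config->pipe_bpp = bpp;
		pipe_config->port_clock = link_clock;

		return true;
	}
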
> > > > -				}
> > > >  			}
> > > >  		}
> > > >  	}
> > > > @@ -2020,14 +2029,11 @@ intel_dp_compute_link_config_fast(struct intel_dp *intel_dp,
> > > >  				link_clock = intel_dp->common_rates[clock];
> > > >  				link_avail = intel_dp_max_data_rate(link_clock,
> > > >  								    lane_count);
> > > > -
> > > > -				if (mode_rate <= link_avail) {
> > > > -					pipe_config->lane_count = lane_count;
> > > > -					pipe_config->pipe_bpp = bpp;
> > > > -					pipe_config->port_clock = link_clock;
> > > > -
> > > > +				pipe_config->lane_count = lane_count;
> > > > +				pipe_config->pipe_bpp = bpp;
> > > > +				pipe_config->port_clock = link_clock;
> > > > +				if (mode_rate <= link_avail)
> > > >  					return true;
> > > > -				}
> > > >  			}
> > > >  		}
> > > >  	}
> > > > @@ -2035,14 +2041,88 @@ intel_dp_compute_link_config_fast(struct intel_dp *intel_dp,
> > > >  	return false;
> > > >  }
> > > >  
> > > > +static bool intel_dp_dsc_compute_config(struct intel_dp *intel_dp,
> > > > +					struct intel_crtc_state *pipe_config,
> > > > +					struct link_config_limits *limits)
> > > > +{
> > > > +	struct intel_digital_port *dig_port = dp_to_dig_port(intel_dp);
> > > > +	struct drm_i915_private *dev_priv = to_i915(dig_port->base.base.dev);
> > > > +	struct drm_display_mode *adjusted_mode = &pipe_config->base.adjusted_mode;
> > > > +	enum pipe pipe = to_intel_crtc(pipe_config->base.crtc)->pipe;
> > > > +	u16 dsc_max_output_bpp = 0;
> > > > +	u8 dsc_dp_slice_count = 0;
> > > > +
> > > > +	if (INTEL_GEN(dev_priv) < 10 ||
> > > > +	    !drm_dp_sink_supports_dsc(intel_dp->dsc_dpcd))
> > > > +		return false;
> > > > +
> > > > +	/* DP DSC only supported on Pipe B and C */
> > > > +	if (pipe == PIPE_A && !intel_dp_is_edp(intel_dp))
> > > > +		return false;
> > > 
> > > Do we mean 'transcoder == EDP || transcoder == B || transcoder == C' ?
> > > Or maybe 'transcoder != A' for short. I guess this will need to be
> > > adjusted for the next platform anyway, so assuming that transcoder A
> > > is the only one that can't do DSC is fine.
> > > 
> > > This whole thing seems like a good candidate for a helper function.
> > > intel_dp_source_supports_dsc() or something like that. And then we
> > > could have intel_dp_supports_dsc() which just calls that +
> > > drm_dp_sink_supports_dsc().
> >
> 
> So intel_dp_source_supports_dsc will have platform checks, but for it to do
> the transcoder check it would also need crtc_state as input.

Indeed. Do we have that in all the places we need this?
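
Something along these lines is what I had in mind (completely untested
sketch, exact signatures up for debate):

static bool intel_dp_source_supports_dsc(struct intel_dp *intel_dp,
					 const struct intel_crtc_state *crtc_state)
{
	struct intel_digital_port *dig_port = dp_to_dig_port(intel_dp);
	struct drm_i915_private *dev_priv = to_i915(dig_port->base.base.dev);

	/* DSC not supported on transcoder A */
	return INTEL_GEN(dev_priv) >= 10 &&
		crtc_state->cpu_transcoder != TRANSCODER_A;
}

static bool intel_dp_supports_dsc(struct intel_dp *intel_dp,
				  const struct intel_crtc_state *crtc_state)
{
	return intel_dp_source_supports_dsc(intel_dp, crtc_state) &&
		drm_dp_sink_supports_dsc(intel_dp->dsc_dpcd);
}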

> 
> And yes intel_dp_supports_dsc() can call this and drm_dp_sink_supports_dsc()
>  
> > Another thing that confuses me about these checks is glk. Some other
> > places seem to indicate that glk has DSC, but then code like this here
> > doesn't accept glk. What's up with that?
> 
> This is because on GLK, VDSC is only supported on eDP and not on external DP

But we don't consider DSC when computing any of the link params and
whatnot. If that's intentional then there is something rather
non-obvious about how all this is meant to work. I don't think there
really should be that much of a difference between DP and eDP when
it comes to DSC.
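
If glk really does have DSC on eDP only, I suppose the source check from
the sketch above would just grow a glk special case rather than eDP taking
a completely different path (again purely hypothetical):

	/* glk: DSC only on the eDP transcoder? */
	if (IS_GEMINILAKE(dev_priv))
		return intel_dp_is_edp(intel_dp);

	return INTEL_GEN(dev_priv) >= 10 &&
		crtc_state->cpu_transcoder != TRANSCODER_A;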

-- 
Ville Syrjälä
Intel

