[PATCH 07/14] drm/i915/dp: Add a separate function to reduce the link parameters

Imre Deak imre.deak at intel.com
Thu Jul 25 12:14:38 UTC 2024


On Thu, Jul 25, 2024 at 06:20:23AM +0300, Murthy, Arun R wrote:
> > -----Original Message-----
> > From: Deak, Imre <imre.deak at intel.com>
> > Sent: Wednesday, July 24, 2024 4:50 PM
> > To: Murthy, Arun R <arun.r.murthy at intel.com>
> > Cc: intel-gfx at lists.freedesktop.org
> > Subject: Re: [PATCH 07/14] drm/i915/dp: Add a separate function to reduce the
> > link parameters
> >
> > On Wed, Jul 24, 2024 at 07:55:03AM +0300, Murthy, Arun R wrote:
> > > > -----Original Message-----
> > > > From: Intel-gfx <intel-gfx-bounces at lists.freedesktop.org> On Behalf
> > > > Of Imre Deak
> > > > Sent: Monday, July 22, 2024 10:25 PM
> > > > To: intel-gfx at lists.freedesktop.org
> > > > Subject: [PATCH 07/14] drm/i915/dp: Add a separate function to
> > > > reduce the link parameters
> > > >
> > > > A follow-up patch will add an alternative way to reduce the link
> > > > parameters in BW order on MST links, prepare for that here.
> > > >
> > > > Signed-off-by: Imre Deak <imre.deak at intel.com>
> > > > ---
> > > >  .../drm/i915/display/intel_dp_link_training.c | 39 +++++++++++++++----
> > > >  1 file changed, 31 insertions(+), 8 deletions(-)
> > > >
> > > > diff --git a/drivers/gpu/drm/i915/display/intel_dp_link_training.c b/drivers/gpu/drm/i915/display/intel_dp_link_training.c
> > > > index 58dea87a9fa28..57536ae200b77 100644
> > > > --- a/drivers/gpu/drm/i915/display/intel_dp_link_training.c
> > > > +++ b/drivers/gpu/drm/i915/display/intel_dp_link_training.c
> > > > @@ -1193,6 +1193,36 @@ static int reduce_lane_count(struct intel_dp *intel_dp, int current_lane_count)
> > > >  	return current_lane_count >> 1;
> > > >  }
> > > >
> > > > +static bool reduce_link_params_in_rate_lane_order(struct intel_dp *intel_dp,
> > > > +						  const struct intel_crtc_state *crtc_state,
> > > > +						  int *new_link_rate, int *new_lane_count)
> > > > +{
> > > > +	int link_rate;
> > > > +	int lane_count;
> > > > +
> > > > +	lane_count = crtc_state->lane_count;
> > > > +	link_rate = reduce_link_rate(intel_dp, crtc_state->port_clock);
> > > > +	if (link_rate < 0) {
> > > > +		lane_count = reduce_lane_count(intel_dp, crtc_state->lane_count);
> > > > +		link_rate = intel_dp_max_common_rate(intel_dp);
> > > > +	}
> > > > +
> > >
> > > On link training failure, the reduction of link rate or lane count is
> > > not linear. Sometimes we fall from UHBR to HBR and then back to a
> > > UHBR rate with a reduced lane count. So would it be better to have a
> > > table/list for the fallback link rate/lane count?
> >
> > This patch is meant to keep the current way of reducing the rate and
> > lane count, just preparing for a follow-up change that adds the
> > alternative BW-order fallback logic for MST. I think SST should later
> > be switched to this logic as well, but I didn't want to change it for
> > now.
> >
> This series enables fallback for 128b/132b as well, and for those the
> fallback link rate and lane count values don't decrease in a linear
> manner. Can we have a TODO in this function about this?

Not sure what you mean. I can add a TODO comment to change the fallback
logic on SST links to happen the same way as on MST links, decreasing
the link's BW at each step.

> Thanks and Regards,
> Arun R Murthy
> --------------------
> > >
> > > Thanks and Regards,
> > > Arun R Murthy
> > > --------------------
> > > > +	if (lane_count < 0)
> > > > +		return false;
> > > > +
> > > > +	*new_link_rate = link_rate;
> > > > +	*new_lane_count = lane_count;
> > > > +
> > > > +	return true;
> > > > +}
> > > > +
> > > > +static bool reduce_link_params(struct intel_dp *intel_dp,
> > > > +			       const struct intel_crtc_state *crtc_state,
> > > > +			       int *new_link_rate, int *new_lane_count)
> > > > +{
> > > > +	return reduce_link_params_in_rate_lane_order(intel_dp, crtc_state,
> > > > +						     new_link_rate, new_lane_count);
> > > > +}
> > > > +
> > > >  static int intel_dp_get_link_train_fallback_values(struct intel_dp *intel_dp,
> > > >  						   const struct intel_crtc_state *crtc_state)
> > > >  {
> > > > @@ -1206,14 +1236,7 @@ static int intel_dp_get_link_train_fallback_values(struct intel_dp *intel_dp,
> > > >  		return 0;
> > > >  	}
> > > >
> > > > -	new_lane_count = crtc_state->lane_count;
> > > > -	new_link_rate = reduce_link_rate(intel_dp, crtc_state->port_clock);
> > > > -	if (new_link_rate < 0) {
> > > > -		new_lane_count = reduce_lane_count(intel_dp, crtc_state->lane_count);
> > > > -		new_link_rate = intel_dp_max_common_rate(intel_dp);
> > > > -	}
> > > > -
> > > > -	if (new_lane_count < 0)
> > > > +	if (!reduce_link_params(intel_dp, crtc_state, &new_link_rate, &new_lane_count))
> > > >  		return -1;
> > > >
> > > >  	if (intel_dp_is_edp(intel_dp) &&
> > > > --
> > > > 2.44.2
> > >