[Intel-gfx] [PATCH v3 2/6] drm/i915: Remove the link rate and lane count loop in compute config
Jani Nikula
jani.nikula at linux.intel.com
Thu Sep 29 14:52:06 UTC 2016
On Wed, 28 Sep 2016, Manasi Navare <manasi.d.navare at intel.com> wrote:
> On Wed, Sep 28, 2016 at 10:38:37AM +0300, Jani Nikula wrote:
>> On Wed, 28 Sep 2016, Manasi Navare <manasi.d.navare at intel.com> wrote:
>> > On Mon, Sep 26, 2016 at 04:41:27PM +0300, Jani Nikula wrote:
>> >> On Fri, 16 Sep 2016, Manasi Navare <manasi.d.navare at intel.com> wrote:
>> >> > While configuring the pipe during modeset, it should use
>> >> > max clock and max lane count and reduce the bpp until
>> >> > the requested mode rate is less than or equal to
>> >> > available link BW.
>> >> > This is required to pass DP Compliance.
>> >>
>> >> As I wrote in reply to patch 1/6, this is not a DP spec requirement. The
>> >> link policy maker can freely choose the link parameters as long as the
>> >> sink supports them.
>> >>
>> >> BR,
>> >> Jani.
>> >>
>> >>
>> >
>> > Thanks for your review feedback.
>> > This change was driven by the Video Pattern generation tests in the CTS spec. E.g., in
>> > test 4.3.3.1, the test requests 640x480 @ max link rate of 2.7Gbps and 4 lanes.
>> > The test will pass if it sets the link rate to 2.7 and lane count = 4.
>> > But in the existing implementation, this video mode request triggers a modeset,
>> > but the compute_config function starts with the lowest link rate and lane count and
>> > trains the link at 1.62 and 4 lanes, which does not match the expected values of link
>> > rate = 2.7 and lane count = 4, and the test fails.
>>
>> Again, the test does not require us to use the maximum parameters by
>> default. It allows us to use optimal parameters by default, and use the
>> sink issued automated test request to change the link parameters to what
>> the test wants.
>>
>> Look at the table in CTS 4.3.3.1. There's a test for 640x480 with 1.62
>> Gbps and 1 lane. And then there's a test for 640x480 with 2.7 Gbps and 4
>> lanes. What you're suggesting is to use excessive bandwidth for the mode
>> by default just because the test has been designed to be lax and allow
>> certain parameters at a minimum, instead of requiring optimal
>> parameters.
>>
>> I do not think this is a change we want to make for DP SST, and it is
>> not a DP spec or compliance requirement.
>>
>> BR,
>> Jani.
>>
>>
>
> So if we let the driver choose the optimal link rate and lane count,
> then for 640x480 it will choose 1.62 and 4 lanes. The automated test
> request will then ask for the maximum link rate, let's say 5.4, and 4
> lanes. At this point we will have to reset the PLLs and the clocks to
> train the link at a 5.4 link rate and 4 lane count before proceeding
> to handle the video pattern request. Are you recommending doing the
> entire PLL setup and retraining of the link here to the target link
> rate, which will be the max link rate?
If we go by the idea in [1], I think this will mean storing the
parameters from the test request, and having the userspace do another
modeset (via sending a hotplug uevent), where we'll use the requested
parameters. I'll still need to double check this complies with the CTS,
but my first impression was yes. If the lane/rate do not match what's
expected, the sink will play along until it can do the test request, and
after that it will wait for another write of the lane/rate. Of course,
this will need a userspace that listens to uevents and does modesets,
but this should be the case with your usual desktop environment.
[1] http://mid.mail-archive.com/8737kjlzfr.fsf@intel.com
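To make the flow above concrete, here is a minimal standalone sketch of the idea: remember the sink-requested link parameters when the automated test request arrives, and let the next modeset's compute_config consume them. This is not the actual i915 implementation; all names here are hypothetical, and in the driver the "return true" would correspond to sending the hotplug uevent.

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical per-connector state; a real driver would keep
 * equivalent fields in its connector/DP structure. */
struct dp_link_params {
	int link_rate_khz;	/* e.g. 162000, 270000, 540000 */
	int lane_count;		/* 1, 2 or 4 */
	bool test_active;	/* sink issued an automated test request */
};

/* Called from the short-pulse/IRQ handler when the sink requests link
 * training with specific parameters: store them and tell the caller to
 * notify userspace (hotplug uevent) so it performs another modeset. */
static bool dp_store_test_request(struct dp_link_params *p,
				  int rate_khz, int lanes)
{
	p->link_rate_khz = rate_khz;
	p->lane_count = lanes;
	p->test_active = true;
	return true;	/* caller sends the hotplug uevent */
}

/* Called from compute_config on the following modeset: if a test
 * request is pending, override the optimal parameters with the
 * requested ones, and consume the request. */
static void dp_pick_link_params(const struct dp_link_params *defaults,
				struct dp_link_params *p,
				int *rate_khz, int *lanes)
{
	*rate_khz = defaults->link_rate_khz;
	*lanes = defaults->lane_count;
	if (p->test_active) {
		*rate_khz = p->link_rate_khz;
		*lanes = p->lane_count;
		p->test_active = false;
	}
}
```

The point of the split is that the training sequence itself never changes: the test request only alters the parameters the next modeset computes with.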
> What about the tests 4.3.1.4 that expect the link rate to fall back
> to a lower link rate due to the forced failures in the CR/Channel EQ
> phases? For these cases we do need upfront link training, starting
> the link training at the upfront values and falling back to the lower
> values. What do you think?
Same here, we'll store the failing parameters, prune the modes that need
those parameters, and have the userspace try again.
The really big upside of this approach is that we'll get error
propagation from modeset, and the modeset/training sequence is always
the same.
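For reference, the bandwidth comparison at the heart of compute_config can be modeled standalone. The two helpers below mirror the formulas of intel_dp_link_required() and intel_dp_max_data_rate() from the i915 code of this era (the factor 8/10 accounts for 8b/10b channel coding; both sides of the comparison come out in the same units). The numbers illustrate Jani's point: 640x480 at 24 bpp fits comfortably even on 1.62 Gbps x1, so defaulting to 2.7 Gbps x4 is excessive.

```c
#include <assert.h>

/* Data rate the mode needs, given the pixel clock in kHz and bpp;
 * mirrors intel_dp_link_required(). */
static int dp_link_required(int pixel_clock_khz, int bpp)
{
	return (pixel_clock_khz * bpp + 7) / 8;
}

/* Usable payload bandwidth of the link, given the link clock in kHz
 * and the lane count; 8b/10b coding leaves 8 data bits per 10 line
 * bits. Mirrors intel_dp_max_data_rate(). */
static int dp_max_data_rate(int link_clock_khz, int lane_count)
{
	return (link_clock_khz * lane_count * 8) / 10;
}
```

With these, 640x480@60 (25.175 MHz pixel clock, 24 bpp) needs 75525 units, while 1.62 Gbps on a single lane provides 129600, so the mode_rate <= link_avail check passes at the lowest parameters.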
BR,
Jani.
>
> Regards
> Manasi
>> >
>> > Regards
>> > Manasi
>> >
>> >> >
>> >> > v3:
>> >> > * Add Debug print if requested mode cannot be supported
>> >> > during modeset (Dhinakaran Pandiyan)
>> >> > v2:
>> >> > * Removed the loop since we use max values of clock
>> >> > and lane count (Dhinakaran Pandiyan)
>> >> >
>> >> > Signed-off-by: Manasi Navare <manasi.d.navare at intel.com>
>> >> > ---
>> >> > drivers/gpu/drm/i915/intel_dp.c | 22 ++++++++--------------
>> >> > 1 file changed, 8 insertions(+), 14 deletions(-)
>> >> >
>> >> > diff --git a/drivers/gpu/drm/i915/intel_dp.c b/drivers/gpu/drm/i915/intel_dp.c
>> >> > index d81c67cb..65b4559 100644
>> >> > --- a/drivers/gpu/drm/i915/intel_dp.c
>> >> > +++ b/drivers/gpu/drm/i915/intel_dp.c
>> >> > @@ -1644,23 +1644,17 @@ intel_dp_compute_config(struct intel_encoder *encoder,
>> >> > for (; bpp >= 6*3; bpp -= 2*3) {
>> >> > mode_rate = intel_dp_link_required(adjusted_mode->crtc_clock,
>> >> > bpp);
>> >> > + clock = max_clock;
>> >> > + lane_count = max_lane_count;
>> >> > + link_clock = common_rates[clock];
>> >> > + link_avail = intel_dp_max_data_rate(link_clock,
>> >> > + lane_count);
>> >> >
>> >> > - for (clock = min_clock; clock <= max_clock; clock++) {
>> >> > - for (lane_count = min_lane_count;
>> >> > - lane_count <= max_lane_count;
>> >> > - lane_count <<= 1) {
>> >> > -
>> >> > - link_clock = common_rates[clock];
>> >> > - link_avail = intel_dp_max_data_rate(link_clock,
>> >> > - lane_count);
>> >> > -
>> >> > - if (mode_rate <= link_avail) {
>> >> > - goto found;
>> >> > - }
>> >> > - }
>> >> > - }
>> >> > + if (mode_rate <= link_avail)
>> >> > + goto found;
>> >> > }
>> >> >
>> >> > + DRM_DEBUG_KMS("Requested Mode Rate not supported\n");
>> >> > return false;
>> >> >
>> >> > found:
>> >>
>> >> --
>> >> Jani Nikula, Intel Open Source Technology Center
>>
>> --
>> Jani Nikula, Intel Open Source Technology Center
--
Jani Nikula, Intel Open Source Technology Center