[Intel-gfx] [PATCH 0/5] Handle link training failure during modeset

Manasi Navare manasi.d.navare at intel.com
Thu Nov 17 19:48:33 UTC 2016


On Thu, Nov 17, 2016 at 02:29:30PM +0200, Jani Nikula wrote:
> On Tue, 15 Nov 2016, Manasi Navare <manasi.d.navare at intel.com> wrote:
> > Submitting new series that adds proper commit messages/cover letter
> > and kernel documentation. It also moves the set_link_status function
> > to drm core so other kernel drivers can make use of it.
> >
> > The idea presented in these patches is to address link training failure
> > in a way that:
> > a) changes the current happy day scenario as little as possible, to avoid
> > regressions, b) can be implemented the same way by all drm drivers, c)
> > is still opt-in for the drivers and userspace, and opting out doesn't
> > regress the user experience, d) doesn't prevent drivers from
> > implementing better or alternate approaches, possibly without userspace
> > involvement. And, of course, handles all the issues presented.
> >
> > The solution is to add a "link status" connector property. In the usual
> > happy day scenario, this is always "good". If something fails during or
> > after a mode set, the kernel driver can set the link status to "bad",
> > prune the mode list based on new information as necessary, and send a
> > hotplug uevent for userspace to have it re-check the valid modes through
> > getconnector, and try again. If the theoretical capabilities of the link
> > can't be reached, the mode list is trimmed based on that.
> >
> > If userspace is not aware of the property, the user experience is
> > the same as it currently is. If userspace is aware of the property,
> > it has a chance to improve the user experience. If a drm driver does
> > not modify the property (it stays "good"), the user experience is the
> > same as it currently is. A drm driver can also choose to handle more
> > of the failures in the kernel, hardware permitting, or it can choose
> > to involve userspace more. Up to the drivers.
> >
> > The reason for adding the property is to handle link training failures,
> > but it is not limited to DP or link training. For example, if we
> > implement asynchronous setcrtc, we can use this to report any failures
> > in that.
> >
> > Finally, while DP CTS compliance is advertised (which is great, and
> > could be made to work similarly for all drm drivers), this can be used
> > for the more important goal of improving user experience on link
> > training failures, by avoiding black screens.
> 
> Since I went through the trouble of writing this, you might as well add
> it to patch 1/5 commit message so it benefits the posterity.
>

Yes, I have also added most of this to the kernel documentation
for the link status property, but I will add it to the Patch 1/5 commit
message as well. Should we explain the alternative approaches there too?

Manasi

> > Acked-by: Tony Cheng <tony.cheng at amd.com>
> > Acked-by: Harry Wentland <Harry.wentland at amd.com>
> 
> These must go to patch 1/5 commit message.
> 
> BR,
> Jani.
> 
> 
> 
> >
> > Manasi Navare (5):
> >   drm: Add a new connector property for link status
> >   drm: Set DRM connector link status property
> >   drm/i915: Update CRTC state if connector link status property changed
> >   drm/i915: Find fallback link rate/lane count
> >   drm/i915: Implement Link Rate fallback on Link training failure
> >
> >  drivers/gpu/drm/drm_atomic_helper.c           |   7 ++
> >  drivers/gpu/drm/drm_connector.c               |  55 ++++++++++
> >  drivers/gpu/drm/i915/intel_ddi.c              |  21 +++-
> >  drivers/gpu/drm/i915/intel_dp.c               | 144 +++++++++++++++++++++++++-
> >  drivers/gpu/drm/i915/intel_dp_link_training.c |  12 ++-
> >  drivers/gpu/drm/i915/intel_drv.h              |  10 +-
> >  include/drm/drm_connector.h                   |   9 +-
> >  include/drm/drm_crtc.h                        |   5 +
> >  include/uapi/drm/drm_mode.h                   |   4 +
> >  9 files changed, 257 insertions(+), 10 deletions(-)
> 
> -- 
> Jani Nikula, Intel Open Source Technology Center

