[PATCH v2] drm/tidss: dispc: Rewrite naive plane positioning code

Daniel Vetter daniel at ffwll.ch
Wed Feb 12 20:28:02 UTC 2020


On Wed, Feb 12, 2020 at 7:01 PM Jyri Sarha <jsarha at ti.com> wrote:
>
> On 12/02/2020 16:33, Ville Syrjälä wrote:
> > On Wed, Feb 12, 2020 at 04:08:11PM +0200, Jyri Sarha wrote:
> >> On 12/02/2020 15:59, Jyri Sarha wrote:
> >>> The old implementation of placing planes on the CRTC while configuring
> >>> the planes was naive and relied on the order in which the planes were
> >>> configured, enabled, and disabled. The situation where a plane's zpos
> >>> was changed on the fly was completely broken. The usual symptoms of
> >>> this problem were a scrambled display and a flood of sync lost
> >>> errors when a plane was active in two layers at the same time, or a
> >>> missing plane when a layer was accidentally disabled.
> >>>
> >>> The rewrite takes a more straightforward approach where the HW is
> >>> concerned. The plane positioning registers are in the CRTC (actually
> >>> OVR) register space and it is more natural to configure them in one go
> >>> while configuring the CRTC. To do this we need to make sure we have
> >>> all the planes on updated CRTCs in the new atomic state to be
> >>> committed. This is done by calling drm_atomic_add_affected_planes() in
> >>> crtc_atomic_check().
> >>>
> >>> Signed-off-by: Jyri Sarha <jsarha at ti.com>
> >>> ---
> >>>  drivers/gpu/drm/tidss/tidss_crtc.c  | 55 ++++++++++++++++++++++++++++-
> >>>  drivers/gpu/drm/tidss/tidss_dispc.c | 55 +++++++++++------------------
> >>>  drivers/gpu/drm/tidss/tidss_dispc.h |  5 +++
> >>>  3 files changed, 79 insertions(+), 36 deletions(-)
> >>>
> >>> diff --git a/drivers/gpu/drm/tidss/tidss_crtc.c b/drivers/gpu/drm/tidss/tidss_crtc.c
> >>> index 032c31ee2820..f7c5fd1094a8 100644
> >>> --- a/drivers/gpu/drm/tidss/tidss_crtc.c
> >>> +++ b/drivers/gpu/drm/tidss/tidss_crtc.c
> >> ...
> >>> @@ -108,7 +110,54 @@ static int tidss_crtc_atomic_check(struct drm_crtc *crtc,
> >>>             return -EINVAL;
> >>>     }
> >>>
> >>> -   return dispc_vp_bus_check(dispc, hw_videoport, state);
> >>> +   ret = dispc_vp_bus_check(dispc, hw_videoport, state);
> >>> +   if (ret)
> >>> +           return ret;
> >>> +
> >>> +   /* Add unchanged planes on this crtc to state for zpos update. */
> >>> +   return drm_atomic_add_affected_planes(state->state, crtc);
> >>
> >> Is this a correct way to use drm_atomic_add_affected_planes()?
> >>
> >> I saw that some other drivers implement their own mode_config
> >> atomic_check() and make this call there in a
> >> for_each_new_crtc_in_state() loop, but I thought it should be fine
> >> to call it in crtc_atomic_check().
> >
> > You seem to be using drm_atomic_helper_check_planes(), which means
> > crtc.atomic_check() gets called after plane.atomic_check(). So this
> > might be good or bad depending on whether you'd like the planes you
> > add here to go through their .atomic_check() or not.
> >
>
> Should have thought of that myself. Extra plane.atomic_check() calls do
> not do any actual harm, but they are potentially expensive. The planes
> are really only needed in the commit phase (crtc_enable() or
> flush()). Well, I'll do my own mode_config.atomic_check() and call
> drm_atomic_add_affected_planes() in a for_each_new_crtc_in_state() loop
> after all the checks.

Also, if you do use the helpers then this should already have happened
for you, which makes me wonder why all this work is needed. Maybe there
is some dependency between the various check functions you have going
on? It might be time to roll your own top-level check function that
calls things in the order your hw needs them to be checked, so that you
don't add new state late and have to run one check phase twice (which
is totally fine with the helpers as long as all your check callbacks
are idempotent, but often just overkill and confusing).
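
For example, something like this rough and untested sketch (again with
made-up names, not taken from the patch), where the planes are pulled in
before the plane checks run, so nothing is added late:

static int tidss_atomic_check(struct drm_device *ddev,
			      struct drm_atomic_state *state)
{
	struct drm_crtc_state *crtc_state;
	struct drm_crtc *crtc;
	int ret, i;

	ret = drm_atomic_helper_check_modeset(ddev, state);
	if (ret)
		return ret;

	/*
	 * Pull the unchanged planes of every updated CRTC into the state
	 * before the plane checks, so every plane in the state goes
	 * through its .atomic_check() exactly once.
	 */
	for_each_new_crtc_in_state(state, crtc, crtc_state, i) {
		ret = drm_atomic_add_affected_planes(state, crtc);
		if (ret)
			return ret;
	}

	return drm_atomic_helper_check_planes(ddev, state);
}
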
-Daniel
-- 
Daniel Vetter
Software Engineer, Intel Corporation
+41 (0) 79 365 57 48 - http://blog.ffwll.ch

