[PATCH 06/10] drm/vkms: flush crc workers earlier in commit flow

Daniel Vetter daniel at ffwll.ch
Thu Jun 13 07:55:40 UTC 2019


On Thu, Jun 13, 2019 at 09:53:55AM +0200, Daniel Vetter wrote:
> On Wed, Jun 12, 2019 at 10:42:42AM -0300, Rodrigo Siqueira wrote:
> > On Thu, Jun 6, 2019 at 7:28 PM Daniel Vetter <daniel.vetter at ffwll.ch> wrote:
> > >
> > > Currently we flush pending crc workers very late in the commit flow,
> > > when we destry all the old crtc states. Unfortunately at that point
> > 
> > destry -> destroy
> > 
> > > the framebuffers are already unpinned (and our vaddr possible gone),
> > > so this isn't good. Also, the plane_states we need might also already
> > > be cleaned up, since cleanup order of state structures isn't well
> > > defined.
> > >
> > > Fix this by waiting for all crc workers of the old state to complete
> > > before we start any of the cleanup work.
> > >
> > > Note that this is not yet race-free, because the hrtimer and crc
> > > worker look at the wrong state pointers, but that will be fixed in
> > > subsequent patches.
> > >
> > > Signed-off-by: Daniel Vetter <daniel.vetter at intel.com>
> > > Cc: Rodrigo Siqueira <rodrigosiqueiramelo at gmail.com>
> > > Cc: Haneen Mohammed <hamohammed.sa at gmail.com>
> > > Cc: Daniel Vetter <daniel at ffwll.ch>
> > > ---
> > >  drivers/gpu/drm/vkms/vkms_crtc.c |  2 +-
> > >  drivers/gpu/drm/vkms/vkms_drv.c  | 10 ++++++++++
> > >  2 files changed, 11 insertions(+), 1 deletion(-)
> > >
> > > diff --git a/drivers/gpu/drm/vkms/vkms_crtc.c b/drivers/gpu/drm/vkms/vkms_crtc.c
> > > index 55b16d545fe7..b6987d90805f 100644
> > > --- a/drivers/gpu/drm/vkms/vkms_crtc.c
> > > +++ b/drivers/gpu/drm/vkms/vkms_crtc.c
> > > @@ -125,7 +125,7 @@ static void vkms_atomic_crtc_destroy_state(struct drm_crtc *crtc,
> > >         __drm_atomic_helper_crtc_destroy_state(state);
> > >
> > >         if (vkms_state) {
> > > -               flush_work(&vkms_state->crc_work);
> > > +               WARN_ON(work_pending(&vkms_state->crc_work));
> > >                 kfree(vkms_state);
> > >         }
> > >  }
> > > diff --git a/drivers/gpu/drm/vkms/vkms_drv.c b/drivers/gpu/drm/vkms/vkms_drv.c
> > > index f677ab1d0094..cc53ef88a331 100644
> > > --- a/drivers/gpu/drm/vkms/vkms_drv.c
> > > +++ b/drivers/gpu/drm/vkms/vkms_drv.c
> > > @@ -62,6 +62,9 @@ static void vkms_release(struct drm_device *dev)
> > >  static void vkms_atomic_commit_tail(struct drm_atomic_state *old_state)
> > >  {
> > >         struct drm_device *dev = old_state->dev;
> > > +       struct drm_crtc *crtc;
> > > +       struct drm_crtc_state *old_crtc_state;
> > > +       int i;
> > >
> > >         drm_atomic_helper_commit_modeset_disables(dev, old_state);
> > >
> > > @@ -75,6 +78,13 @@ static void vkms_atomic_commit_tail(struct drm_atomic_state *old_state)
> > >
> > >         drm_atomic_helper_wait_for_vblanks(dev, old_state);
> > >
> > > +       for_each_old_crtc_in_state(old_state, crtc, old_crtc_state, i) {
> > > +               struct vkms_crtc_state *vkms_state =
> > > +                       to_vkms_crtc_state(old_crtc_state);
> > > +
> > > +               flush_work(&vkms_state->crc_work);
> > > +       }
> > > +
> > >         drm_atomic_helper_cleanup_planes(dev, old_state);
> > >  }
> > 
> > why not use drm_atomic_helper_commit_tail() here? I mean:
> > 
> > for_each_old_crtc_in_state(old_state, crtc, old_crtc_state, i) {
> > …
> > }
> > 
> > drm_atomic_helper_commit_tail(old_state);
> > 
> > After looking at drm_atomic_helper_cleanup_planes(), it sounds safe to
> > me to use the above code; I just tested it with two tests from
> > crc_cursor. Maybe I missed something; could you help me here?
> > 
> > Finally, IMHO, patches 05, 06 and 07 could be squashed into a
> > single patch to make the change easier to understand.

Ah, just realized that patch 07 is entirely unrelated to this work here.
Squashing that in would be a bad idea; patch 7 could be merged
independently of this series, so it should stay a separate patch.
-Daniel

> 
> I wanted to highlight all the bits a bit more, because this is a lot
> trickier than it looks. For correct ordering and to avoid races we can't
> do what you suggested. Only after
> 
> 	drm_atomic_helper_wait_for_vblanks()
> 
> do we know that all subsequent queue_work will be for the _new_ state.
> Only once that's done is flush_work() actually useful; before that we
> might flush the work, and then right afterwards the hrtimer that
> simulates vblanks queues it again. Any time you flush_work() before
> cleaning up the work structure, the following sequence must be obeyed,
> or it can go wrong:
> 
> 1. Make sure no one else can requeue the work anymore (in our case that's
> done by a combination of first updating output->crc_state and then waiting
> for the vblank to pass to make sure the hrtimer has noticed that change).
> 
> 2. flush_work()
> 
> 3. Actually clean up stuff (which isn't done here).
> 
> Doing the flush_work() before we have even completed the output->state
> update, much less waited for the vblank to make sure the hrtimer has
> noticed it, misses the point.
> -Daniel
> 
> > 
> > > --
> > > 2.20.1
> > >
> > 
> > 
> > -- 
> > 
> > Rodrigo Siqueira
> > https://siqueira.tech
> 
> -- 
> Daniel Vetter
> Software Engineer, Intel Corporation
> http://blog.ffwll.ch

-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch

