[PATCH 1/2] drm: Fix dirtyfb stalls
Daniel Vetter
daniel at ffwll.ch
Wed May 12 10:35:34 UTC 2021
On Wed, May 12, 2021 at 11:46 AM Pekka Paalanen <ppaalanen at gmail.com> wrote:
>
> On Wed, 12 May 2021 10:44:29 +0200
> Daniel Vetter <daniel at ffwll.ch> wrote:
>
> > On Wed, May 12, 2021 at 11:23:30AM +0300, Pekka Paalanen wrote:
> > > On Tue, 11 May 2021 18:44:17 +0200
> > > Daniel Vetter <daniel at ffwll.ch> wrote:
> > >
> > > > On Mon, May 10, 2021 at 12:06:05PM -0700, Rob Clark wrote:
> > > > > On Mon, May 10, 2021 at 10:44 AM Daniel Vetter <daniel at ffwll.ch> wrote:
> > > > > >
> > > > > > On Mon, May 10, 2021 at 6:51 PM Rob Clark <robdclark at gmail.com> wrote:
> > > > > > >
> > > > > > > On Mon, May 10, 2021 at 9:14 AM Daniel Vetter <daniel at ffwll.ch> wrote:
> > > > > > > >
> > > > > > > > On Sat, May 08, 2021 at 12:56:38PM -0700, Rob Clark wrote:
> > > > > > > > > From: Rob Clark <robdclark at chromium.org>
> > > > > > > > >
> > > > > > > > > drm_atomic_helper_dirtyfb() will end up stalling for vblank on "video
> > > > > > > > > mode" type displays, which is pointless and unnecessary. Add an
> > > > > > > > > optional helper vfunc to determine if a plane is attached to a CRTC
> > > > > > > > > that actually needs dirtyfb, and skip over them.
> > > > > > > > >
> > > > > > > > > Signed-off-by: Rob Clark <robdclark at chromium.org>
> > > > > > > >
> > > > > > > > So this is a bit annoying because the idea of all these "remap legacy uapi
> > > > > > > > to atomic constructs" helpers is that they shouldn't need/use anything
> > > > > > > > beyond what userspace also has available. So adding hacks for them feels
> > > > > > > > really bad.
> > > > > > >
> > > > > > > I suppose the root problem is that userspace doesn't know if dirtyfb
> > > > > > > (or similar) is actually required or is a no-op.
> > > > > > >
> > > > > > > But it is perhaps less of a problem because this essentially boils
> > > > > > > down to "x11 vs wayland", and it seems like wayland compositors for
> > > > > > > non-vsync'd rendering just pageflips and throws away extra frames from
> > > > > > > the app?
> > > > > >
> > > > > > Yeah it's about not adequately batching up rendering and syncing with
> > > > > > hw. bare metal x11 is just especially stupid about it :-)
> > > > > >
> > > > > > > > Also I feel like it's not entirely the right thing to do here either.
> > > > > > > > We've had this problem already on the fbcon emulation side (which also
> > > > > > > > shouldn't be able to peek behind the atomic kms uapi curtain), and the fix
> > > > > > > > there was to have a worker which batches up all the updates and avoids any
> > > > > > > > stalls in bad places.
> > > > > > >
> > > > > > > I'm not too worried about fbcon not being able to render faster than
> > > > > > > vblank. OTOH it is a pretty big problem for x11
> > > > > >
> > > > > > That's why we'd let the worker get ahead by at most one dirtyfb. We do
> > > > > > the same with fbcon, which trivially can get ahead of vblank otherwise
> > > > > > (it sometimes flushes each character, so you have to pile the updates up
> > > > > > into a single one if a flush is still pending).
> > > > > >
> > > > > > > > Since this is for frontbuffer rendering userspace only we can probably get
> > > > > > > > away with assuming there's only a single fb, so the implementation becomes
> > > > > > > > pretty simple:
> > > > > > > >
> > > > > > > > - 1 worker, and we keep track of a single pending fb
> > > > > > > > - if there's already a dirty fb pending on a different fb, we stall for
> > > > > > > > the worker to start processing that one already (i.e. the fb we track is
> > > > > > > > reset to NULL)
> > > > > > > > - if it's pending on the same fb we just toss away all the updates and go
> > > > > > > > with a full update, since merging the clip rects is too much work :-) I
> > > > > > > > think there's helpers so you could be slightly more clever and just have
> > > > > > > > an overall bounding box
> > > > > > >
> > > > > > > This doesn't really fix the problem, you still end up delaying sending
> > > > > > > the next back-buffer to mesa
> > > > > >
> > > > > > With this the dirtyfb would never block. Also glorious frontbuffer
> > > > > > tracking corruption is possible, but that's not the kernel's problem.
> > > > > > So how would anything get held up in userspace?
> > > > >
> > > > > the part about stalling if a dirtyfb is pending was what I was worried
> > > > > about.. but I suppose you meant the worker stalling, rather than
> > > > > userspace stalling (where I had interpreted it the other way around).
> > > > > As soon as userspace needs to stall, you're losing again.
> > > >
> > > > Nah, I did mean userspace stalling, so we can't pile up unlimited amounts
> > > > of dirtyfb request in the kernel.
> > > >
> > > > But also I never expect userspace that uses dirtyfb to actually hit this
> > > > stall point (otherwise we'd need to look at this again). It would really
> > > > be only there as defense against abuse.
> > > >
> > > > > > > But we could re-work drm_framebuffer_funcs::dirty to operate on a
> > > > > > > per-crtc basis and hoist the loop and check if dirtyfb is needed out
> > > > > > > of drm_atomic_helper_dirtyfb()
> > > > > >
> > > > > > That's still using information that userspace doesn't have, which is a
> > > > > > bit irky. We might as well go with your thing here then.
> > > > >
> > > > > arguably, this is something we should expose to userspace.. for DSI
> > > > > command-mode panels, you probably want to make a different decision
> > > > > with regard to how many buffers in your flip-chain..
> > > > >
> > > > > Possibly we should add/remove the fb_damage_clips property depending
> > > > > on the display type (ie. video/pull vs cmd/push mode)?
> > > >
> > > > I'm not sure whether atomic actually needs this exposed:
> > > > - clients will do full flips for every frame anyway, I've not heard of
> > > > anyone seriously doing frontbuffer rendering.
> > >
> > > That may or may not be changing, depending on whether the DRM drivers
> > > will actually support tearing flips. There has been a huge amount of
> > > debate for needing tearing for Wayland [1], and while I haven't really
> > > joined that discussion, using front-buffer rendering (blits) to work
> > > around the driver inability to flip-tear might be something some people
> > > will want.
> >
> > Uh, please don't, dirtyfb does a full atomic commit on atomic drivers
> > underneath it.
>
> You keep saying dirtyfb, but I still didn't understand if you mean
> literally *only* the legacy DirtyFB ioctl, or does it include
> FB_DAMAGE_CLIPS in atomic too?
>
> I suppose you mean only the legacy ioctl.
Only the legacy DIRTYFB ioctl. FB_DAMAGE_CLIPS is all solid I think.
> > > Personally, what I do agree with is that "tear if late from intended
> > > vblank" is a feature that will be needed when VRR cannot be used.
> > > However, I would also argue that multiple tearing updates per refresh
> > > cycle is not a good idea, and I know people disagree with this because
> > > practically all relevant games are using a naive main loop that makes
> > > multi-tearing necessary for good input response.
> > >
> > > I'm not quite sure where this leaves the KMS UAPI usage patterns. Maybe
> > > this matters, maybe not?
> > >
> > > Does it make a difference between using legacy DirtyFB vs. atomic
> > > FB_DAMAGE_CLIPS property?
> > >
> > > Also mind that Wayland compositors would be dynamically switching
> > > between "normal flips" and "tearing updates" depending on the
> > > scenegraph. This switch should not be considered a "mode set".
> > >
> > > [1] https://gitlab.freedesktop.org/wayland/wayland-protocols/-/merge_requests/65
> >
> > I think what you want is two things:
> > - some indication that frontbuffer rendering "works", for some value of
> > that (which should probably be "doesn't require dirtyfb")
> >
> > - tearing flips support. This needs driver support
>
> A "tear if late" functionality in the kernel would be really nice too,
> but can probably be worked around with high resolution timers in
> userspace and just-in-time atomic tearing flips. Although those flips
> would need to be tearing always, because timers that close to vblank are
> going to race with vblank.
>
> > If you don't have either, pls don't try to emulate something using
> > frontbuffer rendering and dirtyfb, because that will make it really,
> > really awkward for the kernel to know what exactly userspace wants to do.
> > Overloading existing interfaces with new meaning just because we can
> > and it happens to work on the one platform we tested is really not a
> > good idea.
>
> Alright, I'll spread the word if I catch people trying that.
>
> I didn't even understand that using DirtyFB at all would put "new
> meaning" to it. I mean, if you do front-buffer rendering, you must use
> DirtyFB or FB_DAMAGE_CLIPS on atomic to make sure it actually goes
> anywhere, right?
TBH I'd do FB_DAMAGE_CLIPS with the atomic ioctl and the same fb. Userspace
probably also wants to better understand what exactly happens with
frontbuffer tracking in this case.
The issue with the DIRTYFB ioctl, like with all the legacy ioctls, is that
it's very undefined how nonblocking and how async/tearing they are,
and there's no completion event userspace could use to properly stall
when it gets too far ahead. Any additional use we pile on top of them
just makes this even more awkward for the kernel to do in a way that
doesn't upset some userspace somewhere, while still trying to be as
consistent across drivers as possible (ideally using one code path to
remap to an atomic op in the same way for all drivers).
Properly defined atomic properties with exact semantics userspace can
rely on are imo much better than "hey calling this ioctl gets the job
done on my driver, let's just use that". If there's something missing
in the atomic kms uapi, we need to add it properly.
-Daniel
--
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch