[PATCH] present: Queue flips for later execution. Begging for review.

Pekka Paalanen ppaalanen at gmail.com
Wed Jun 4 00:03:14 PDT 2014


On Tue, 03 Jun 2014 20:08:21 -0700
Keith Packard <keithp at keithp.com> wrote:

> Michel Dänzer <michel at daenzer.net> writes:
> 
> > At least the waiting for the pixmap to become idle part should be
> > perfectly possible in the X server?
> 
> One of three possible ways:
> 
>  1) Blocking kernel call waiting for buffer idle.
>     This doesn't seem like what we want.
> 
>  2) Receive a DRM event when a buffer is idle
>     Does this event even exist today?
>    
>  3) Polling for idle when receiving a vblank event.
>     This will work fine for vblank-synchronized flips; we
>     simply check whether the next queued buffer is idle and delay the
>     flip by a frame if it isn't.
> 
> > For flip elision with non-async flips, something like
> > DRM_MODE_PAGE_FLIP_REPLACE (and possibly a corresponding DRM event
> > signaling the previous flip was canceled, if DRM_EVENT_FLIP_COMPLETE is
> > inappropriate for that) might work, which would replace any pending flip
> > with the new one.
> 
> Do we just want to send the MSC count to the kernel so it "knows" which
> frame we want the contents presented at?

Hi,

That is starting to sound a lot like queueing display updates in the
kernel. Let me jump a couple of years into the future and speculate
wildly, likely going off-topic. ;-)

I think the whole concept of MSC will break down as we get
dynamic/variable refresh displays (G-SYNC, FreeSync). MSC will no
longer correspond to real time at all (bye bye A/V sync), not even for
the very next vblank, which might happen anywhere from 5 to 500 ms
after the previous one. (That is one reason why I designed the Wayland
Presentation extension (still an RFC [1]) around timestamps instead of
frame counters.) We'd need a way to tell DRM when we would like the
next vblank to happen.
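
To spell out the assumption: targeting a frame by MSC only works
because everyone can convert between a frame count and a time using a
fixed refresh period, roughly like this (just an illustration, the
function is made up, it is not code from any driver):

  #include <stdint.h>

  /* The assumption baked into MSC-based targeting: a constant
   * refresh period turns a frame count into a point in time. */
  static uint64_t predict_vblank_ns(uint64_t last_vblank_ns,
                                    uint64_t last_msc,
                                    uint64_t target_msc,
                                    uint64_t refresh_period_ns)
  {
      return last_vblank_ns +
             (target_msc - last_msc) * refresh_period_ns;
  }

With a variable refresh display there is no refresh_period_ns to plug
in, so the prediction becomes meaningless; a timestamp ("show this no
earlier than T") carries the information the display actually needs.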

DRM universal planes plus atomic modesetting / nuclear pageflip are
already aiming to gather at least a per-head update into a single
atomic set of state changes. Once we have that, we might extend it to
allow queueing more than just the very next update in the kernel. How
exactly is totally open, and the benefits are not yet clear, at least
to me. I suspect you would also need a way to cancel some or all
queued updates - I suppose you would want that already - and to get
per-submission feedback on whether it was actually presented and when.
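
To make the hand-waving slightly more concrete, here is a purely
hypothetical strawman of what such a queueing interface might look
like; none of these names or structures exist in DRM, it is only
meant to show the cancel-and-feedback idea:

  #include <stdint.h>

  /* Purely hypothetical, nothing like this exists in DRM today. */
  struct queued_update {
      uint32_t crtc_id;        /* which head the update targets */
      uint64_t target_ns;      /* earliest time to show it */
      /* ... the atomic plane/fb state would go here ... */
      uint64_t cookie;         /* filled in by the kernel; identifies
                                * this submission in feedback events
                                * and for cancellation */
  };

  struct update_feedback {
      uint64_t cookie;         /* which submission this refers to */
      uint64_t presented_ns;   /* when it actually hit the screen,
                                * or 0 if it was cancelled/replaced */
  };

  /* hypothetical ioctl wrappers */
  int queue_update(int drm_fd, struct queued_update *u);
  int cancel_update(int drm_fd, uint64_t cookie);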

This is just some food for thought, nothing more.

> > The client may still need to use a fourth buffer if it wants to start
> > rendering the next frame before the flip is complete and before the last
> > submitted pixmap becomes idle. I can't think of any way around that
> > offhand.
> 
> We can try it both ways and see what it looks like for real applications.

We've had similar discussions in Wayland land, and people were
generally horrified at needing 4 buffers for a busy-loop EGL app
(games). I'm not sure we ever came to a real conclusion. I think if
someone really wants to waste CPU/GPU cycles drawing more than can be
shown, they also have the memory to juggle more buffers around.
*shrug*

If the aim is to reduce latency, I think it should be tackled in a
more... sophisticated manner than just throwing raw power at it. But
that'd mean the games would need to get smarter, which won't help
existing games.

> > So it might make sense to try the approach using async flips for now,
> > and see how well that works in practice.
> 
> Right, having "real" async flips is a prerequisite for doing this, along
> with sufficient mechanism to not request a flip until the buffer is
> idle, without forcing the X server to stall in the flip kernel call
> waiting.

I think dmabuf will solve that problem. If I understood correctly,
you will be able to use the dmabuf fd to check whether the (DRM)
fences on the buffer have been signalled. I'm not sure of the
details, though, or when that feature will be available. Maarten
Lankhorst would know.
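
If it ends up looking anything like the usual poll() interface, the
non-blocking check could be as simple as the sketch below. This is
speculation on my part about how the dmabuf fd would expose its
fences; the "POLLOUT means idle" semantics are an assumption, not a
documented interface:

  #include <poll.h>

  /* Speculative: ask whether all fences attached to the buffer
   * have signalled, without ever blocking. */
  static int dmabuf_is_idle(int dmabuf_fd)
  {
      struct pollfd pfd = { .fd = dmabuf_fd, .events = POLLOUT };
      int ret = poll(&pfd, 1, 0);   /* timeout 0: never block */

      if (ret < 0)
          return -1;                /* error, let the caller decide */
      return ret > 0;               /* 1 = idle, 0 = still busy */
  }

With something like that, the X server could do exactly what Keith
described above: check the buffer when the vblank event arrives and
postpone the flip by a frame if it is still busy, never stalling in a
kernel call.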


Thanks,
pq

[1] http://lists.freedesktop.org/archives/wayland-devel/2014-March/013580.html

