[RFC v2] Wayland presentation extension (video protocol)

Pekka Paalanen ppaalanen at gmail.com
Tue Feb 11 00:06:43 PST 2014


On Mon, 10 Feb 2014 09:23:12 -0600
Jason Ekstrand <jason at jlekstrand.net> wrote:

> On Mon, Feb 10, 2014 at 3:53 AM, Pekka Paalanen <ppaalanen at gmail.com> wrote:
> 
> > On Sat, 8 Feb 2014 15:23:29 -0600
> > Jason Ekstrand <jason at jlekstrand.net> wrote:
> >
> > > Pekka,
> > > First off, I think you've done a great job over-all.  I think it will
> > > both cover most cases and work well.  I've got a few comments below.
> >
> > Thank you for the review. :-)
> > Replies below.
> >
> > > On Thu, Jan 30, 2014 at 9:35 AM, Pekka Paalanen <ppaalanen at gmail.com> wrote:
> > >
> > > > Hi,
> > > >
> > > > it's time for a take two on the Wayland presentation extension.
> > > >
> > > >
> > > >                 1. Introduction
> > > >
> > > > The v1 proposal is here:
> > > >
> > > >
> > > > http://lists.freedesktop.org/archives/wayland-devel/2013-October/011496.html
> > > >
> > > > In v2 the basic idea is the same: you can queue frames with a
> > > > target presentation time, and you can get accurate presentation
> > > > feedback. All the details are new, though. The re-design started
> > > > from the wish to handle resizing better, preferably without
> > > > clearing the buffer queue.

...
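
(Just to illustrate the queueing model above in client terms: I imagine a
video player computing target times along these lines, where submit_frame()
is only a placeholder for whatever the final queueing request ends up
being. A sketch, not part of the proposal.)

#include <stdint.h>
#include <time.h>

/* Placeholder for the real queueing request; in reality this would
 * attach the buffer to the surface and hand the target timestamp to
 * the compositor. */
extern void submit_frame(void *buffer, struct timespec target);

/* Queue a few frames ahead of time, each aimed one refresh period
 * after the previous one. 'base' would come from presentation
 * feedback of an already shown frame, 'refresh_ns' from the output. */
static void
queue_ahead(void *buffers[], int nbuffers,
            struct timespec base, uint64_t refresh_ns)
{
        int i;

        for (i = 0; i < nbuffers; i++) {
                uint64_t ns = (uint64_t)base.tv_nsec +
                              (uint64_t)(i + 1) * refresh_ns;
                struct timespec target;

                target.tv_sec = base.tv_sec + ns / 1000000000;
                target.tv_nsec = ns % 1000000000;
                submit_frame(buffers[i], target);
        }
}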

> > > My one latent concern is that I still don't think we're entirely
> > > handling the case that QtQuick wants.  What they want is to do their
> > > rendering a few frames in advance in case of CPU/GPU jitter.
> > > Technically, this extension handles this by the client simply doing a
> > > good job of guessing presentation times on a one-per-frame basis.
> > > However, it doesn't allow for any damage tracking.  In the case of
> > > QtQuick they want a linear queue of buffers where no buffer ever gets
> > > skipped.  In this case, you could do damage tracking by allowing it to
> > > accumulate from one frame to another and you get all of the
> > > damage-tracking advantages that you had before.  I'm not sure how much
> > > this matters, but it might be worth thinking about it.
> >
> > Does it really want to display *every* frame regardless of time? Does it
> > not matter that if a deadline is missed, the animation slows down rather
> > than jumping to keep up with the intended velocity?
> >
> 
> That is my understanding of how it works now.  I *think* they figure the
> compositor isn't the bottleneck and that it will get its 60 FPS.  That
> said, I don't actually work on QtQuick.  I'm just trying to make sure they
> don't get completely left out in the cold.
> 
> 
> >
> > Axel has a good point, cannot this just be done client side, with
> > immediate updates based on frame callbacks?
> >
> 
> Probably not.  They're using GLES and EGL, so they can't draw early and
> just stash the buffer.

Oh yeah, I just realized that. I hope Axel's suggestion works out, or
that they actually want the timestamp queue semantics rather than
present-every-frame-really-no-skipping. But if they want the
every-frame semantics, then I think that needs to be another extension.
And then the interactions with immediate commits and timestamp-queued
updates get fun.
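
For reference, the client-side approach Axel suggested would basically be
the usual frame-callback-driven loop, something like this minimal sketch
(EGL setup, error handling and the initial kick-off commit omitted):

#include <wayland-client.h>

static void frame_done(void *data, struct wl_callback *callback,
                       uint32_t time);

static const struct wl_callback_listener frame_listener = {
        frame_done
};

/* Redraw whenever the compositor signals it is a good time to start a
 * new frame; the client never runs further ahead than one frame. */
static void
frame_done(void *data, struct wl_callback *callback, uint32_t time)
{
        struct wl_surface *surface = data;

        wl_callback_destroy(callback);

        /* render the next frame with GLES here ... */

        /* request the next callback before committing */
        wl_callback_add_listener(wl_surface_frame(surface),
                                 &frame_listener, surface);
        /* eglSwapBuffers() then attaches the new buffer and commits. */
}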

> > If there is a problem in using frame callbacks for that, that is more
> > likely a problem in the compositor's frame scheduling than the protocol.
> >
> > The problem with damage tracking, and the reason I did not make damage
> > queued state, is that damage is given in surface coordinates. This becomes a
> > problem during resizes, where the surface size changes, and wl_viewport
> > is used to decouple the content from the surface space.
> >
> 
> The separation makes sense.
> 
> 
> >
> > If we queue damage, we basically need to also queue surface resizes.
> > Without wl_viewport this is what happens automatically, as surface size
> > is taken from the buffer size.
> >
> > However, in the proposed design, the purpose of wl_viewport is to
> > decouple the surface size from buffer size, so that they can change
> > independently. The use case is live video: if you resize the window,
> > you don't want to redo the video frames, because that would likely
> > cause a glitch. Also if the video resolution changes on the fly, e.g. due
> > to stream quality control, you don't need to do anything extra to keep the
> > window at the old size. Damage is a property of the content update,
> > yes, but we have it defined in surface coordinates, so when surface and
> > buffer sizes change asynchronously, the damage region would be
> > incorrect.
> >
> > The downside is indeed that we lose damage information for queued
> > buffers. This is a deliberate design choice, since the extension was
> > designed primarily for video where usually the whole surface gets
> > damaged.
> >
> 
> Yeah, I think you made the right call on this one.  Queueing buffers in a
> completely serial fashion really does seem to be a special case.  Trying to
> do damage tracking for an arbitrary queue would very quickly get insane.
> Plus all the other problems you mentioned.
> 
> 
> >
> > But, I guess we could add another request, presentation.damage, to give
> > the damage region in buffer coordinates. Would it be worth it?

Well, I don't think the damage tracking would get particularly nasty.
You queue damage with the buffers, apply the damage (and convert to
surface space) when you apply a buffer, and if you discard a buffer,
you merge the damage it had into the next one. So with a linear queue it
should be fine.
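
In compositor terms the idea would be roughly the following; not actual
Weston code, just a sketch using pixman regions like Weston does:

#include <pixman.h>

struct wl_buffer;

struct queued_frame {
        struct wl_buffer *buffer;        /* queued content */
        pixman_region32_t damage;        /* damage, queued with the buffer */
};

/* When a queued frame is skipped without being shown, its damage must
 * not be lost: fold it into the next frame in the queue, so that the
 * frame which finally gets presented repaints everything that changed
 * since the last presented frame. */
static void
discard_frame(struct queued_frame *discarded, struct queued_frame *next)
{
        pixman_region32_union(&next->damage, &next->damage,
                              &discarded->damage);
        pixman_region32_fini(&discarded->damage);
        /* release discarded->buffer back to the client here */
}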

I recall that for pending/cached/current state we already do proper
damage tracking even when buffers get replaced before being presented,
and there we also take care of the dx,dy from attach.

Thankfully, a presentation.damage request would be a strictly additive
change to the protocol with little to no interaction effects. Or so it
seems to me at the moment. It should be safe to add later if needed.
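
If we do add it, the conversion to surface coordinates would happen only
at apply time, when the then-current surface size is known, so it keeps
working across resizes. Roughly, ignoring buffer transforms and the
wl_viewport source rectangle:

/* Scale a damage rectangle from buffer coordinates into surface
 * coordinates at the time the buffer is applied, using whatever the
 * surface (viewport destination) size happens to be then. */
static void
buffer_damage_to_surface(int bx, int by, int bw, int bh,
                         int buffer_w, int buffer_h,
                         int surface_w, int surface_h,
                         int *sx, int *sy, int *sw, int *sh)
{
        *sx = bx * surface_w / buffer_w;
        *sy = by * surface_h / buffer_h;
        /* round the far edge up so scaling never shrinks the damage */
        *sw = ((bx + bw) * surface_w + buffer_w - 1) / buffer_w - *sx;
        *sh = ((by + bh) * surface_h + buffer_h - 1) / buffer_h - *sy;
}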


Thanks,
pq

