[RFC v.2] Extend wl_surface protocol
Daniel Stone
daniel at fooishbar.org
Mon Nov 11 13:42:01 PST 2013
Hi,
On 11 November 2013 15:41, Pekka Paalanen <ppaalanen at gmail.com> wrote:
>> <request name="destroy" type="destructor">
>>   <description summary="remove buffer_queue interface">
>>     The buffer_queue interface is removed from the buffer_queue-enabled
>>     surface.
>
> This could also mention that the queue is emptied first, and that
> release and presentation feedback events are emitted as usual. It could
> be described as an implicit "clear" request.
Hm, I'd prefer an explicit clear/flush request to suggesting people
repeatedly destroy and recreate queue objects.
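Something like this from the client's point of view (purely a sketch:
every name below is a hypothetical stand-in for whatever bindings such a
buffer_queue interface would generate, none of it is existing API):

        /* Sketch only: all identifiers here are hypothetical; this just
         * contrasts the two call patterns. */
        struct wl_surface;
        struct buffer_queue;

        void buffer_queue_destroy(struct buffer_queue *queue);  /* implicit clear */
        void buffer_queue_clear(struct buffer_queue *queue);    /* proposed explicit clear */
        struct buffer_queue *get_buffer_queue(struct wl_surface *surface);

        /* With only the implicit clear-on-destroy, flushing pending buffers
         * means throwing the object away and making a new one: */
        struct buffer_queue *
        flush_by_recreating(struct wl_surface *surface, struct buffer_queue *queue)
        {
                buffer_queue_destroy(queue);
                return get_buffer_queue(surface);
        }

        /* An explicit request keeps the proxy, and its release/presentation
         * feedback stream, alive across the flush: */
        void
        flush_in_place(struct buffer_queue *queue)
        {
                buffer_queue_clear(queue);
        }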
>> Note that presentation time only tells the client when the compositor
>> presented the buffer to the display hardware, not when the buffer was
>> turned into light (actually displayed on screen) by that hardware. As
>> anything could be displaying those buffers, from very fast, low-latency
>> computer monitors to slow, high-latency HDMI TV screens, it is the
>> client's responsibility to know what display hardware is currently
>> connected and what its latency is.
>
> Do people agree with this definition?
>
> I assume it means when the gfx card starts to emit the pixels to the
> physical wire.
>
> Or should we allow the compositor to factor in any latency information
> the monitor provides?
>
> I'm kind of thinking that we should. If a monitor does not report its
> latency and the user can see the effect, the compositor could allow the
> user to configure an assumed latency. Perhaps even one measured by the user.
>
> Or is that something that should be (or even already is?) configured in
> the drivers, so they already give out the "turns into light" time?
>
> Also, the presentation timestamp can only be given wrt. one output. If
> the surface spans several outputs, the compositor chooses one to sync
> to.
Actually, I think the definition there is good, if perhaps a little
unclear. The compositor should make its best effort to determine
exactly when the buffer hits the screen, but that's it. If we specify
that the time is kernel submission rather than display, that sucks for
systems which do know when it was actually displayed. OTOH, if we
specify it's exactly when it's displayed, systems which can't work that
out will technically be non-conformant.
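To make the client-side bookkeeping concrete, here's a rough sketch (all
values and names are illustrative, nothing below comes from the
protocol), treating the timestamp as "pixels start going out on the
wire" and adding whatever latency figure the client has for the
connected sink:

        /* Rough sketch, illustrative values only: estimate when a frame
         * turned into light by adding the sink's latency to the
         * compositor's presentation timestamp. */
        #include <stdint.h>
        #include <stdio.h>

        #define NSEC_PER_SEC  1000000000ULL
        #define NSEC_PER_MSEC 1000000ULL

        int main(void)
        {
                /* presentation feedback from the compositor (made-up values) */
                uint64_t presented_sec = 1384203721;
                uint32_t presented_nsec = 250000000;

                /* latency of the current sink: EDID, user setting, or measured */
                uint32_t display_latency_ms = 40;

                uint64_t presented_ns =
                        presented_sec * NSEC_PER_SEC + presented_nsec;
                uint64_t light_ns =
                        presented_ns + (uint64_t)display_latency_ms * NSEC_PER_MSEC;

                printf("presented at %llu ns, estimated light at %llu ns\n",
                       (unsigned long long)presented_ns,
                       (unsigned long long)light_ns);
                return 0;
        }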
For cases like HDMI, the CEA chunk of the EDID does actually allow the
sink to specify an optional latency, so that could be good to use. (And
hope to hell whatever's doing your HDMI audio takes note of that too.)
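The decode itself is trivial; a sketch, assuming I'm remembering the
HDMI VSDB encoding right (0 = unknown, 255 = not supported, otherwise
(value - 1) * 2 ms; worth checking against the spec), and leaving out
the work of actually locating the block in the CEA extension:

        /* Sketch of decoding an HDMI VSDB latency byte. Encoding assumed:
         * 0 = unknown/not provided, 255 = not supported, otherwise
         * latency_ms = (value - 1) * 2. */
        #include <stdint.h>
        #include <stdio.h>

        /* Returns latency in ms, or -1 if the sink gave nothing usable. */
        static int decode_hdmi_latency(uint8_t raw)
        {
                if (raw == 0 || raw == 255)
                        return -1;
                return ((int)raw - 1) * 2;
        }

        int main(void)
        {
                uint8_t video_latency_raw = 21;  /* example: (21 - 1) * 2 = 40 ms */
                int ms = decode_hdmi_latency(video_latency_raw);

                if (ms < 0)
                        printf("sink does not report a usable video latency\n");
                else
                        printf("sink video latency: %d ms\n", ms);
                return 0;
        }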
Either way, if the user's observing additional latency, they'll have
to tweak things, but I'd rather that offset be applied in the media
player (they all already have knobs for this) than shoved into the
protocol. (Having it global would ruin things for everyone, so it'd
have to be per-object, in which case, eh, just do it yourself ...)
Of course, there's nothing precluding the compositor itself from
offering a configurable constant latency, for people shipping devices
with a known, predictable offset.
Cheers,
Daniel