[PATCH 2/2] protocol: Support scaled outputs and surfaces

Pekka Paalanen ppaalanen at gmail.com
Fri May 24 03:15:37 PDT 2013


On Thu, 23 May 2013 14:51:16 -0400 (EDT)
Alexander Larsson <alexl at redhat.com> wrote:

> > What if a client sets scale=0?
> 
> I guess we should forbid that, as it risks things dividing by zero.
> 
> > Maybe the scale should also be signed here? I think all sizes are
> > signed, too, even though a negative size does not make sense. We seem
> > to have a convention, that numbers you compute with are signed, and
> > enums and flags and bitfields and handles and such are unsigned. And
> > timestamps, since there we need the overflow behaviour. I
> > believe it's due to the C promotion or implicit cast rules more than
> > anything else.
> 
> Yeah, we should change it to signed.
> 
> > > @@ -1548,6 +1596,8 @@
> > >  	     summary="indicates this is the current mode"/>
> > >        <entry name="preferred" value="0x2"
> > >  	     summary="indicates this is the preferred mode"/>
> > > +      <entry name="scaled" value="0x4"
> > > +	     summary="indicates that this is a scaled mode"/>
> > 
> > What do we need the "scaled" flag for? And what does this flag mean?
> > How is it used? I mean, can we get duplicate native modes that differ
> > only by the scaled flag?
> > 
> > Unfortunately I didn't get to answer that thread before, but I had some
> > disagreement or not understanding there.
> 
> Yeah, this is the area of the scaling stuff that is least baked. 
> 
> Right now what happens is that the modes get listed at the scaled resolution
> (i.e. divided by two, etc.), and such scaled modes get reported with a bit
> set so clients can tell they are not native size. However, this doesn't seem
> quite right for a few reasons:
> 
> * We don't report rotated/flipped modes, nor do we swap the width/height for
>   these so this is inconsistent
> * The clients can tell what the scale is anyway, so what use is it?
> 
> However, listing the unscaled resolution for the modes is also somewhat
> problematic. For instance, if we listed the raw modes and an app wanted
> to go fullscreen in a mode, it would need to create a surface of the scaled
> width/height (with the right scale), as otherwise the buffer size would not
> match the scanout size.
> 
> For instance, if the output scale is 2 and there is an 800x600 native mode,
> then the app should use a 400x300 surface with an 800x600 buffer and a
> buffer_scale of 2.
> 
> Hmmm, I guess if the app used an 800x600 surface with buffer_scale 1 we could
> still scan out from it, although we'd have to be very careful about how we
> treat input and pointer position then, as it's not quite the same.
> 
> I'll have a look at changing this.

I agree with all that. There are some more considerations. One is
the wl_shell_surface.configure event. If you look at the specification
of wl_shell_surface.set_fullscreen, it requires the compositor to reply
with a configure event carrying the dimensions that make the surface
fullscreen in the current native video mode. Since that is in pels,
like John pointed out, it would carry 400x300 for an 800x600 mode, if
output_scale=2. I haven't read enough of the patches to see how you
handled that.
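
Just to make the expected client behaviour concrete, here is a rough
sketch (mine, not from the patches) of a buffer_scale-aware client
reacting to that fullscreen configure event on an output with scale 2.
It assumes the proposed wl_surface.set_buffer_scale request; struct app
and create_shm_buffer() are made-up helpers:

#include <wayland-client.h>

/* Hypothetical application state, not from the patches. */
struct app {
        struct wl_surface *surface;
        int32_t output_scale;   /* from the proposed wl_output scale event */
};

/* Hypothetical helper allocating a wl_shm buffer of the given size
 * in buffer pixels. */
struct wl_buffer *create_shm_buffer(struct app *app,
                                    int32_t width, int32_t height);

static void
handle_configure(void *data, struct wl_shell_surface *shell_surface,
                 uint32_t edges, int32_t width, int32_t height)
{
        struct app *app = data;
        int32_t scale = app->output_scale;   /* 2 on a double-density output */

        /* width/height arrive in pels, e.g. 400x300 for an 800x600
         * native mode at output_scale=2; the buffer is allocated at
         * full pixel size and marked with the matching buffer_scale. */
        struct wl_buffer *buffer =
                create_shm_buffer(app, width * scale, height * scale);

        wl_surface_set_buffer_scale(app->surface, scale);
        wl_surface_attach(app->surface, buffer, 0, 0);
        wl_surface_damage(app->surface, 0, 0, width, height);
        wl_surface_commit(app->surface);
}

The point being: the surface stays 400x300 in pels, while the attached
buffer matches the 800x600 scanout size.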

An old application not knowing about buffer_scale would simply use
400x300, and get scaled up. All good. Might even be scanned out
directly, if an overlay allows hardware scaling.

An old application looking at the mode list could pick the 800x600
mode, and use that with the implicit scale 1. Because the fullscreen
state specifies that the compositor makes the surface fullscreen and
allows e.g. scaling, we can just as well scan it out.

The difference between these two cases is the surface size in pels,
400x300 in the former, and 800x600 in the latter. No problem for the
client. In the server we indeed need to make sure the input coordinates
are right.
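
To spell out what "right" means here, both cases reduce to scaling the
pointer position by the ratio of the surface size to the output size.
A sketch only, not Weston code:

/* Map a pointer position given in output pixels to surface-local
 * coordinates for a fullscreened surface.  On an 800x600 output, the
 * 400x300 (buffer_scale=2) surface divides by two, while the 800x600
 * (buffer_scale=1) surface maps 1:1. */
struct position {
        double x, y;
};

static struct position
output_pixels_to_surface(struct position p,
                         int output_width_px, int output_height_px,
                         int surface_width, int surface_height)
{
        struct position s;

        s.x = p.x * (double)surface_width / output_width_px;
        s.y = p.y * (double)surface_height / output_height_px;
        return s;
}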

An issue I see here is that the 800x600 buffer_scale=1 fullscreen setup
will have the very problem the whole output scale is trying to solve:
the application will draw its GUI in single-density, and it ends up 1:1
on a double-density screen, unreadable. However, if the application is
using the mode list to begin with, it probably has a way for the user
to pick a mode. So with a magnifying glass, the user can fix the
situation in the application settings. Cue in Weston desktop zoom...

Now, should we require that applications with a video mode menu also
have an entry called "default" or "native", which would come from the
configure event?

If not, do we need to make sure the listed output modes include one
that satisfies mode * output_scale == default native mode, faking a new
mode as needed? Do we need that for all native modes, in case the
default mode changes?
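
If we do go that way, the faking itself is simple arithmetic; roughly
something like this (types and rounding policy are my assumptions, not
from the patches):

/* Derive an advertised mode from a native mode so that
 * advertised * output_scale == native, e.g. 800x600 at output_scale=2
 * becomes 400x300.  How to round odd sizes is an open question; this
 * just truncates. */
struct mode_size {
        int width, height;
};

static struct mode_size
fake_scaled_mode(struct mode_size native, int output_scale)
{
        struct mode_size advertised;

        advertised.width = native.width / output_scale;
        advertised.height = native.height / output_scale;
        return advertised;
}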

Or maybe we don't need any of that, if we assume the user can configure
the application to use an arbitrary "mode"?

I'm thinking about an application (a game) that renders in pixel units
and only offers the user a choice among the server-reported output
video modes. Is that an important use case?

If the application is output_scale/buffer_scale aware, it knows how to
do the right thing, even if it chooses a mode from the output mode
list. One question is how e.g. SDL2 would expose that to game writers,
but OTOH I think no amount of creative lying from Wayland is going to
magically fix things for SDL2 apps if SDL2 does not explicitly expose
this feature to begin with. SDL2 could modify the mode list internally,
too.


Thanks,
pq

