[PATCH] protocol: Add buffer_scale to wl_surface and wl_output

Alexander Larsson alexl at redhat.com
Wed May 15 06:11:33 PDT 2013


On ons, 2013-05-15 at 11:13 +0300, Pekka Paalanen wrote:
> On Tue, 14 May 2013 12:26:48 +0200
> alexl at redhat.com wrote:

Lots of good stuff snipped. I'll try to fix things up based on that.
Some responses below.

> > +      </description>
> > +      <arg name="scale" type="fixed"/>
> 
> Are you sure you really want fixed as the type?
> Integer scaling factors sounded a lot more straightforward. When we are
> dealing with pixel buffers, integers make sense.
> 
> Also, I do not buy the argument that integer scaling factors are not
> finely grained enough. If an output device (monitor) has such a high
> DPI, and a user wants the default scaling, then we will simply have an
> integer scaling factor >1, for example 2. Clients will correspondingly
> somehow see that the output resolution is "small", so they will adapt,
> and the final window size will not be doubled all the way unless it
> actually fits the output. This happens by the client choosing to draw a
> smaller window to begin with, not by scaling, compared to what it
> would do if the default scaling factor was 1. Fractional scaling factors
> are simply not needed here, in my opinion.

I agree that fixed is a poor choice here. The alternative is to always
use an integer scaling factor, or to allow the client to separately
specify the surface size and the buffer size. Both of these guarantee
that buffer and surface sizes are integers, which I agree they have to
be. Of course, the latter means that with fractional scaling the actual
scaling factor differs slightly from window to window due to rounding.
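
Just to put numbers on the rounding point (throwaway arithmetic on my
part, nothing protocol-related):

  #include <math.h>
  #include <stdio.h>

  int main(void)
  {
      /* Illustration only: with separately specified surface and
       * buffer sizes, rounding makes the effective scale vary a
       * little from window to window. */
      double desired = 1.5;
      for (int w = 640; w <= 642; w++) {
          int buffer_w = (int) lround(w * desired);
          printf("surface %d -> buffer %d, effective scale %.4f\n",
                 w, buffer_w, (double) buffer_w / w);
      }
      return 0;
  }

A 640 pixel wide surface gets exactly 1.5, while a 641 pixel wide one
gets roughly 1.5008.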

Having started a bit on the implementation in gtk+ and weston, it seems
that allowing fractional scales increases the implementation complexity
quite a bit. For instance, having widgets end on non-integer positions
makes clipping and dirty region tracking harder. Another example is that
damage regions on a buffer need not correspond to an integer region in
global coordinates (or vice versa, if we define damage to be in surface
coordinates).
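
To make the damage example concrete, here is a minimal sketch (my own
illustration, not actual weston or gtk+ code) of mapping a
buffer-coordinate damage rectangle into surface coordinates:

  #include <math.h>

  struct rect { int x, y, w, h; };

  /* With an integer scale this mapping is exact; with e.g. 1.5 the
   * rectangle must be expanded (floor/ceil) to cover whole pixels,
   * so the tracked damage over-approximates the real damage. */
  static struct rect
  buffer_damage_to_surface(struct rect b, double scale)
  {
      struct rect s;
      s.x = (int) floor(b.x / scale);
      s.y = (int) floor(b.y / scale);
      s.w = (int) ceil((b.x + b.w) / scale) - s.x;
      s.h = (int) ceil((b.y + b.h) / scale) - s.y;
      return s;
  }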

On the other hand, it seems that a few OSX users do want fractional
scaling (in particular, the 1.5 scaling from 2880x1800 to 1920x1200
seems very popular even if it's not as nice looking as the 2x one), so
there is some demand for it.

I'm liking the way OSX solves this more and more: only allow and expose
integer scaling factors in the APIs, but do fractional downscaling in
the compositor (i.e. say the output is 1920x1200 in global coordinates
with a scaling factor of two, but actually render this by scaling the
client-supplied buffer by 0.75). It keeps the implementation and APIs
very simple, it does the right thing for the "nice" case of 2x scaling,
and it still allows fractional scaling.
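
Spelling out the arithmetic with the numbers above (just the idea, not
an API):

  #include <stdio.h>

  int main(void)
  {
      int panel_w = 2880, panel_h = 1800;     /* native panel pixels */
      int logical_w = 1920, logical_h = 1200; /* global coordinates */
      int scale = 2;                          /* client-visible, integer */

      int buffer_w = logical_w * scale;       /* client renders 3840... */
      int buffer_h = logical_h * scale;       /* ...x 2400 */

      /* The only fractional factor lives inside the compositor: */
      double k = (double) panel_w / buffer_w; /* 0.75 */

      printf("buffer %dx%d scaled by %.2f -> panel %dx%d\n",
             buffer_w, buffer_h, k, panel_w, panel_h);
      return 0;
  }

So the client only ever sees the integer factor 2, and the single 0.75
downscale happens once, inside the compositor.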

> Can we have any use for scales less than one?

I don't think so.

> Also, one issue raised was that if an output has a scaling factor A,
> and a buffer has a scaling factor B, then final scaling factor is
> rational. To me that is a non-issue. It can only occur for a
> misbehaving client, in which case it gets what it deserves, or in a
> multi-output case of one surface spanning several non-identical
> monitors. I think the latter case is not worth caring about.
> Non-identical monitors are not identical, and you get what you happen
> to get when you use a single buffer to composite to both.

Yeah, I don't think this is really a practical problem. It'll look
somewhat fuzzy in some contrived cases.


> The important thing is to make all client-visible coordinate systems
> consistent and logical.

Yeah, I'll try to use these names in the docs and be clearer about
which coordinate spaces the different requests/events work in.

> And now the questions.
> 
> If an output has a scaling factor f, what does the wl_output
> report as the output's current and supported video modes?

I believe it should report the resolution in global coordinates,
although we should maybe extend the mode with scale information. This is
the right thing to do in terms of backwards compatibility, but it is
also useful for e.g. implementing the fractional scaling. So, a
2880x1800 panel would report that there exists a 1920x1200@2x mode,
which wouldn't be possible if we had to report the size in output
coordinates.
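
From the client side that could look roughly like this (a sketch: the
scale field is of course the thing being proposed here and not existing
protocol, while WL_OUTPUT_MODE_CURRENT is):

  #include <wayland-client.h>

  struct output_info {
      int32_t width, height; /* current mode, global coordinates */
      int32_t scale;         /* the proposed output scale factor */
  };

  /* Intended as the .mode member of a wl_output_listener. */
  static void
  handle_mode(void *data, struct wl_output *output, uint32_t flags,
              int32_t width, int32_t height, int32_t refresh)
  {
      struct output_info *info = data;
      (void) output; (void) refresh; /* unused in this sketch */

      /* Per the above, width/height arrive in global coordinates,
       * e.g. 1920x1200 rather than the panel's 2880x1800. */
      if (flags & WL_OUTPUT_MODE_CURRENT) {
          info->width = width;
          info->height = height;
      }
  }

A fullscreen client would then render a buffer of width*scale by
height*scale pixels (3840x2400 here) and let the compositor scale that
to the panel.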

It also seems right from a user perspective. The list of resolutions
would be "2880x1800, 1920x1200, 1440x900", which is, to a first
approximation, what users will experience with these modes. Furthermore,
this would allow us to expose "fake" modes for lower resolutions that
some LCD panels don't support natively (or support only with bad-looking
scaling), which some games may want.

> What about x,y in the wl_output.geometry event (which I think are just a
> global coordinate space leak that should not be there)?

Yeah, this is in global coords, and seems like a leak.

> The video modes are important because of the
> wl_shell_surface.set_fullscreen with method DRIVER. A fullscreen
> surface with method DRIVER implies, that the client wants the
> compositor to change the video mode to match this surface. Of course
> this usually only happens when the fullscreen surface is topmost and
> active.
> 
> A client can use the list of supported video modes from the wl_output
> to choose the size of its fullscreen surface. E.g. if video mode
> 800x600 is supported, the client may choose to make the fullscreened
> surface of size 800x600, and assume that the server ideally switches
> the video mode 800x600 to present the surface.
> 
> How do buffer_scale and output scale interact with this mechanism?

I think the modes reported should say whether the mode is native or
scaled, so the app can choose whether to use it or not. However, I think
it's nice that we can support 800x600 in a scaled version even if the
LCD panel can't drive it natively (or does so poorly). This means you
can run games designed for 800x600 even on that hardware.
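
One hypothetical way to encode native-vs-scaled (a sketch only, not
proposed protocol text) would be an extra bit next to the existing
wl_output mode flags:

  /* The first two are the existing wl_output mode flags; the third
   * is hypothetical and would mark modes the compositor implements
   * by scaling rather than by an actual video mode switch. */
  enum {
      MODE_CURRENT   = 0x1, /* WL_OUTPUT_MODE_CURRENT */
      MODE_PREFERRED = 0x2, /* WL_OUTPUT_MODE_PREFERRED */
      MODE_SCALED    = 0x4, /* hypothetical "compositor scaled" bit */
  };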
