[PATCH 2/2] protocol: Add buffer_scale to wl_surface and wl_output

Bill Spitzak spitzak at gmail.com
Mon May 13 19:02:34 PDT 2013


Alexander Larsson wrote:

> On ons, 2013-05-08 at 12:07 -0700, Bill Spitzak wrote:
>> Output scaling should be merged with the existing "buffer_transform" 
>> (the 8-value enumeration describing 90 degree rotations/reflections).
> 
> In effect they are parts of the same thing, yeah. I don't know if ABI
> compat allows us to just change buffer_transform though.

I'm not proposing changing the buffer_transform api, but rather making 
sure that all transforms stay "together" wherever possible, so that 
they can be described as a single transform.

I think there has to be an attempt to reduce the number of 
transformation steps to as few as possible. The transforms themselves 
can be as complex a combination of different apis set by the client 
and compositor as desired, but every other api that takes an xy 
location should use xy locations in as few different spaces as 
possible. If this is not done, clients are forced to maintain a messy 
square array of matrices and inverse matrices to translate positions 
from one space into another.
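For instance (a sketch in C with hypothetical names, not any real 
wayland api): mapping a point from space A to space B takes A's 
forward matrix composed with B's inverse, so with N spaces a client 
can end up needing any of roughly N*N such pairs:

struct affine {
    double a, b, c, d;  /* linear part: x' = a*x + c*y, y' = b*x + d*y */
    double tx, ty;      /* translation */
};

static void
affine_apply(const struct affine *m, double x, double y,
             double *ox, double *oy)
{
    *ox = m->a * x + m->c * y + m->tx;
    *oy = m->b * x + m->d * y + m->ty;
}

static struct affine
affine_invert(const struct affine *m)
{
    double det = m->a * m->d - m->b * m->c;
    struct affine r = {
        .a =  m->d / det, .b = -m->b / det,
        .c = -m->c / det, .d =  m->a / det,
    };
    r.tx = -(r.a * m->tx + r.c * m->ty);
    r.ty = -(r.b * m->tx + r.d * m->ty);
    return r;
}

/* A point in space A, expressed in space B: forward through A's
 * transform, backward through B's inverse. */
static void
map_a_to_b(const struct affine *a_fwd, const struct affine *b_fwd,
           double x, double y, double *ox, double *oy)
{
    struct affine b_inv = affine_invert(b_fwd);
    affine_apply(a_fwd, x, y, &x, &y);
    affine_apply(&b_inv, x, y, ox, oy);
}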

With the current proposal the only problem I see is that the scaler 
specifies the source rectangle in the space that is the *output* of 
buffer_transform. If more complex transforms are added in the future, 
the meaning of this source rectangle either becomes rather complex or 
pretty useless, or you are forced to think about two transforms rather 
than one. My recommendation is that the source rectangle be specified 
in the space that is the *input* to buffer_transform.
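A tiny example of why the choice of space matters (a sketch; I am 
picking one plausible convention for a 90-degree counter-clockwise 
transform, the exact wayland convention may differ): the same source 
rectangle has different numbers before and after the transform.

/* The image of a rectangle under a 90-degree CCW rotation of a
 * buffer of height buf_h, using (x, y) -> (buf_h - y, x).
 * Illustrative convention only. */
struct rect { double x, y, w, h; };

static struct rect
rect_after_90ccw(struct rect r, double buf_h)
{
    struct rect out = {
        .x = buf_h - r.y - r.h,
        .y = r.x,
        .w = r.h,
        .h = r.w,
    };
    return out;
}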

I'm also somewhat worried about the proposals for hi-res outputs, 
because they look like they may force a third transform: events are in 
the input space, but some compositor apis such as the xy position are 
in the output space.

I think things can be reduced to exactly two transforms. There is one 
transform from "buffer space", which is where pixels are actually 
stored in the buffer, to "surface space", which is where events, 
surface size, and regions are defined. There is a second transform to 
"output space", which is pixels on an output (there is a different one 
of these transforms per output).

The first transform currently consists of the buffer_transform and the 
scaler. The second one consists of the proposed hi-dpi compensation, 
the xy translation of outputs, the second transform of any parents for 
subsurfaces, and any effects, such as rotation or wobbling, that the 
compositor adds.
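With only two transforms the bookkeeping collapses: the compositor 
keeps the two composed matrices and their inverses, and nothing more. 
As a sketch (reusing the affine helpers from the earlier sketch; still 
not any real api):

/* r(p) = outer(inner(p)) */
static struct affine
affine_compose(const struct affine *outer, const struct affine *inner)
{
    struct affine r;
    r.a  = outer->a * inner->a + outer->c * inner->b;
    r.b  = outer->b * inner->a + outer->d * inner->b;
    r.c  = outer->a * inner->c + outer->c * inner->d;
    r.d  = outer->b * inner->c + outer->d * inner->d;
    r.tx = outer->a * inner->tx + outer->c * inner->ty + outer->tx;
    r.ty = outer->b * inner->tx + outer->d * inner->ty + outer->ty;
    return r;
}

/* buffer_to_surface: buffer_transform composed with the scaler.
 * surface_to_output: hi-dpi compensation, output xy translation,
 * parent transforms of subsurfaces, compositor effects.
 * An input event arrives in output space and is mapped back to
 * surface space with a single inverse; no third space is needed. */
static void
event_to_surface(const struct affine *surface_to_output,
                 double ex, double ey, double *sx, double *sy)
{
    struct affine inv = affine_invert(surface_to_output);
    affine_apply(&inv, ex, ey, sx, sy);
}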

I think there is a possible scheme where there is a single transform, 
by making buffer space and surface space identical. This is 
unfortunately incompatible with the long-established buffer_transform 
api, as well as the scaler api, in that events and the surface size 
are specified in the input space of these transforms rather than the 
output space.

>> I think in the end Wayland is going to have to have arbitrary affine 
>> transforms in the api, and it might make sense to decide a standard for 
>> these now, so that they are not split up among several apis like what
>> is happening now. Apparently there is worry about using floating point 
>> in the api, but I think the following proposal that only uses integers 
>> or fixed point works:
>>
>> Translations are given as two 24.8 fixed-point numbers.
>>
>> Scale is 4 unsigned numbers: w,h,W,H. You can think of these as the size 
>> of the source and destination, or w/W and h/H are the scaling factors.
>>
>> Rotation and skew are 4 integers: x0,y0,x1,y1. The X and Y axis are 
>> rotated to point at these positions. To avoid skew use x1==-y0 and 
>> y1==x0. Flipping the signs can be used to make reflections.
>>
>> This setup allows some useful numbers with somewhat intuitive results, 
>> and avoids trying to specify irrational numbers using integers.
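For concreteness, those integers could be decoded into a single affine 
matrix roughly as follows (a sketch reusing struct affine from above; 
the order of composition, and reading the axis vectors' lengths as 
extra scale, are my assumptions):

#include <stdint.h>

static struct affine
decode_transform(int32_t tx, int32_t ty,  /* translation, 24.8 fixed */
                 uint32_t w, uint32_t h,  /* source size */
                 uint32_t W, uint32_t H,  /* destination size */
                 int32_t x0, int32_t y0,  /* image of the X axis */
                 int32_t x1, int32_t y1)  /* image of the Y axis */
{
    double sx = (double)w / W;  /* w/W and h/H are the scale factors */
    double sy = (double)h / H;
    struct affine m = {
        /* x1 == -y0 && y1 == x0 gives a pure rotation;
         * flipped signs give reflections */
        .a = x0 * sx, .b = y0 * sx,
        .c = x1 * sy, .d = y1 * sy,
        .tx = tx / 256.0,  /* 24.8 fixed point */
        .ty = ty / 256.0,
    };
    return m;
}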
> 
> I'm not sure this generality is useful for this? When would you ever
> store the pixel data for your window pre-skewed? I mean, I don't think
> in practice it's ever gonna be useful to have windows displayed rotated and
> skewed on the screen, but I can see the conceptual nicety in allowing
> this (i.e. wobbly windows with input working). But I don't *ever* think
> an app is gonna supply the surface data for the window in such a way.

I think it may be useful for clients to compensate for odd transforms 
that a compositor applies to thumbnails, etc. They could produce a much 
higher-fidelity image by drawing the transformed image directly. The 
compositor would tell the client "I am applying this transform to this 
surface" and the client could re-render the surface using that transform 
and set the inverse as the transform. The compositor would then multiply 
these to get an identity and know that it does not have to apply any 
transform to the surface. In fact this is exactly how the 
buffer_transform works today. (ps: the compositor would only report the 
fractional part of xy translate, thus preserving the "clients don't know 
where the windows are" design feature).
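In sketch form (reusing the helpers above; this function is 
hypothetical, not a proposed protocol addition), the check the 
compositor would make is just "does the product collapse to the 
identity":

#include <math.h>
#include <stdbool.h>

/* Compositor applies T; client renders pre-transformed content and
 * sets inverse(T) as its transform; the composition is identity, so
 * the compositor can skip transforming the surface entirely. */
static bool
composes_to_identity(const struct affine *t)
{
    struct affine t_inv = affine_invert(t);
    struct affine id = affine_compose(t, &t_inv);
    const double eps = 1e-9;
    return fabs(id.a - 1.0) < eps && fabs(id.b) < eps &&
           fabs(id.c) < eps && fabs(id.d - 1.0) < eps &&
           fabs(id.tx) < eps && fabs(id.ty) < eps;
}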

It won't help for wobbly windows unless the "transform" is much more 
complicated than just the 6-number affine I propose. But at least 
being able to describe a transform as a single object means that it 
may be possible to enhance it this way in the future.


