[PATCH weston 4/8] protocol: crop & scale RFC v3

Pekka Paalanen <ppaalanen at gmail.com>
Fri Nov 29 02:09:31 PST 2013


On Fri, 29 Nov 2013 01:20:21 -0800
Bill Spitzak <spitzak at gmail.com> wrote:

> Okay I think perhaps I am completely failing to comprehend what is
> going on.
> 
> The client I am thinking of is not trying to do "partial pixels".
> What I am thinking of is the most simple client you can imagine that
> knows what the output_scale is and decides it wants to render images
> at full resolution.
> 
> If for instance the output_scale is 3, then I think the client will
> set the buffer_scale to 3, and then the pixels in the client's buffer
> will map 1:1 to the pixels on the output device. There are no
> "partial pixels".
> 
> If this client wants a rectangle that is covered by an image produced
> by this scaling extension, it appears to me that it cannot be any
> integer number of buffer pixels. It can only be multiples of 3 pixels
> in both size and position.
> 
> Please correct me if I am misunderstanding this, but I have read it
> over and over and over and it sure looks like this is what the design
> does!

You have read it right.

However, the standard units of measure are the surface coordinates. We
assume that toolkits and applications set up their scenegraph in
surface coordinates, and that integers are sufficient there. The
application drawing engine just happens to be applying a scaling factor
equal to the desired buffer_scale. After all, toolkits should be
prepared to change the buffer_scale at will (e.g. when a window moves
completely from a HiDPI monitor to a low-DPI one), so storing the
scenegraph in buffer coordinates would mean reworking it on every such
change.
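
For illustration, a minimal sketch of that flow in C, assuming a
wl_surface bound at version 3 or higher; struct window,
allocate_shm_buffer() and repaint_scenegraph() are hypothetical
placeholders for toolkit internals:

  /* React to the window moving to an output with a different scale.
   * The scenegraph stays in integer surface coordinates; only the
   * buffer size and the painter's scale factor change. */
  static void
  handle_output_scale_change(struct window *win, int32_t new_scale)
  {
      /* The size in surface coordinates does not change. */
      int32_t buf_w = win->surface_width * new_scale;
      int32_t buf_h = win->surface_height * new_scale;
      struct wl_buffer *buffer = allocate_shm_buffer(win, buf_w, buf_h);

      /* Paint the scenegraph in surface coordinates, with the
       * drawing engine multiplying everything by new_scale. */
      repaint_scenegraph(win, buffer, new_scale);

      wl_surface_set_buffer_scale(win->wl_surface, new_scale);
      wl_surface_attach(win->wl_surface, buffer, 0, 0);
      wl_surface_damage(win->wl_surface, 0, 0,
                        win->surface_width, win->surface_height);
      wl_surface_commit(win->wl_surface);
  }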

In other words, all GUI elements are defined in integer surface
coordinates. Defining GUI elements in smaller units is not useful,
because people will not be able to take advantage of smaller
things anyway: they cannot see them clearly, or they cannot poke
them (input) accurately enough.

You are not supposed to position and size GUI elements at the
output's pixel resolution. That is exactly the purpose of the whole
HiDPI extension. Instead, you design your GUI in "pixels" as always,
and the window system turns it into a usable size on an output. If you
want to draw the GUI elements in more _detail_, you can, by using
buffer_scale.

That is to say, in your example you design your GUI in multiples of
3 output pixels in any case.
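
Concretely, with output_scale = buffer_scale = 3 (the variable names
below are just for illustration):

  /* A GUI element defined in integer surface coordinates. */
  int32_t elem_x = 10, elem_w = 25;   /* surface units */
  int32_t scale  = 3;                 /* buffer_scale == output_scale */

  /* Its position and size in buffer pixels, which map 1:1 to
   * output pixels here, are necessarily multiples of the scale: */
  int32_t buf_x = elem_x * scale;     /* 30 */
  int32_t buf_w = elem_w * scale;     /* 75 */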

- pq

> On 11/27/2013 01:51 PM, Daniel Stone wrote:
> > Hi,
> >
> > On 27 November 2013 20:08, Bill Spitzak <spitzak at gmail.com> wrote:
> >> On 11/27/2013 12:34 AM, Pekka Paalanen wrote:
> >>> I have explained all this before. Nothing here has changed.
> >>
> >> I realize this, but I still have to express my complete
> >> dumbfoundedness that you think this is ok.
> >
> > You're attempting to design for the problem space where clients
> > create configurations which cannot be displayed except by
> > attempting to invent the concept of 'partial pixels', where a
> > buffer size of 79.3333... is not only meaningful but a design
> > goal.  The opposing position is 'don't do that': clients should
> > avoid getting themselves into these situations in the first place.
> >
> > Your proposals really come across as attempting to design for
> > situations which should never occur (and can't meaningfully be dealt
> > with by extant hardware), optimising for hugely misguided clients
> > in a fit of completism.  That's your view, which you've made very
> > clear, but I don't think it's shared by anyone else in these
> > threads.
> >
> > Cheers,
> > Daniel
> >
> 


