[PATCH 2/2] protocol: Support scaled outputs and surfaces
Alexander Larsson
alexl at redhat.com
Wed May 22 01:12:00 PDT 2013
On Tue, 2013-05-21 at 20:57 +0300, Pekka Paalanen wrote:
> On Tue, 21 May 2013 08:35:53 -0700
> Bill Spitzak <spitzak at gmail.com> wrote:
> > This proposal does not actually restrict widget positions or line sizes,
> > since they are drawn by the client at buffer resolution. Although
>
> No, but I expect the toolkits may.
Gtk very much will do this at least.
> > annoying, the outside buffer size is not that limiting. The client can
> > just place a few transparent pixels along the edge to make it look like
> > it is any size.
> >
> > However it does restrict the positions of widgets that use subsurfaces.
> >
> > I see this as a serious problem and I'm not sure why you don't think it
> > is. It is an arbitrary artificial limit in the api that has nothing to
> > do with any hardware limits.
>
> It is a design decision with the least negative impact, and it is
> not serious. Sub-surfaces will not be that common, and they
> certainly will not be used for common widgets like buttons.
Yeah, this is a simple solution to an actual real-life problem that is
easy to implement (I've got weston and gtk+ mostly working). If you want
to do something very complicated then just don't use scaling and draw
however you want. We don't want to overcomplicate the normal case with
fractional complexity and extra coordinate spaces.
> > The reason you want to position widgets at finer positions is so they
> > can be positioned evenly, and so they can be moved smoothly, and so they
> > can be perfectly aligned with hi-resolution graphics.
>
> But why? You have a real, compelling use case? Otherwise it just
> complicates things.
Exactly.
> A what? No way, buffer_scale is private to a surface, and does not
> affect any other surface, not even sub-surfaces. It is not
> inherited, that would be insane.
Yes, buffer_transform and buffer_scale only define how you map the
client supplied buffer pixels into the surface coordinates. It does not
really change the size of the surface or affect subsurfaces. (Well,
technically it does, since we don't specify the surface size separately
but derive it from the buffer and its transform; after that, though, the
surface is what it is in an abstract space.)
> There is no "compositor coordinate space" in the protocol. There
> are only surface coordinates, and now to a small extent we are
> getting buffer coordinates.
Very small extent. I think the only place in the protocol where they are
used is when specifying the size of the surface.
> > >>> The x,y do not
> > >>> describe how the surface moves, they describe how pixel rows and
> > >>> columns are added or removed on the edges.
> > >
> > > No, it is in the surface coordinate system, like written in the patch.
> >
> > Then I would not describe it as "pixel rows and columns added or removed
> > on the edges". If the scaler is set to 70/50 then a delta of -1,0 is
> > adding 1.4 pixels to the left edge of the buffer. I agree that having it
> > in the parent coordinates works otherwise.
>
> We use the units of "pixels" in the surface coordinate system, even
> if they do not correspond exactly to any "real" pixels like
> elements in a buffer or on screen.
Actually this is sort of a problem. Maybe the docs would be clearer if
we just used a different name for these? "points"?