I feel configure events and requests are messed up
krh at bitplanet.net
Thu Sep 8 11:31:45 PDT 2011
2011/9/7 Giovanni Campagna <scampa.giovanni at gmail.com>:
> On Wed, 07/09/2011 at 14.07 -0400, Kristian Høgsberg wrote:
>> 2011/9/7 Giovanni Campagna <scampa.giovanni at gmail.com>:
>> > On Tue, 06/09/2011 at 22.54 -0400, Kristian Høgsberg wrote:
>> >> 2011/9/6 Giovanni Campagna <scampa.giovanni at gmail.com>:
>> >> > On Tue, 06/09/2011 at 17.31 -0400, Kristian Høgsberg wrote:
>> >> ...
>> >> >> Maybe you should introduce yourself and what you're working on before
>> >> >> threatening to break protocol? I don't know what you're trying to do.
>> >> >
>> >> > Right, sorry, I should have done it before. Sorry also if my wording has
>> >> > been strong so far, but my reading of the first reply was "it is better
>> >> > like this, and we will never ever change it", even though X11 has done the
>> >> > opposite for more than 20 years, with fairly good results (at least on
>> >> > the management side).
>> >> X has certainly been around for a long time, but that doesn't mean
>> >> that the way it does it is the only way it can be done. But more
>> >> importantly, Wayland is fundamentally different in a few key areas
>> >> that affect window management: the window manager is built into the
>> >> display server, we're always compositing and rendering happens client
>> >> side. So for a start, there are only two processes involved in window
>> >> management (the compositor and the client) vs X server, WM and client.
>> > Well, when it comes to window sizing, not just X but also Win32 and
>> > Quartz have explicit requests on the window object for changing the size,
>> > and they acknowledge that when you receive a resize event, all buffers
>> > have already changed size. There is no way for application code to
>> > ignore a resize event (except by queuing another resize request), and
>> > in fact most applications ignore resize events altogether, focusing only
>> > on the resulting paint/expose events.
>> >> > Anyway, I'm working on porting Mutter to Wayland,
>> >> Awesome, happy to hear that.
>> >> > and in various parts
>> >> > the current X11 code base assumes that it can resize to whatever size it
>> >> > likes, and that ConfigureRequests from other clients are just that,
>> >> > requests, not orders.
>> >> > There is a fair amount of code (dating back to previous century for a
>> >> > big part) dealing with correctly sizing windows, in presence of
>> >> > constraints, struts, particular user operations, etc. and the window
>> >> > manager simply expects that windows won't get in the way (or if they do,
>> >> > they do very clearly with WM_NORMAL_STATE, so the WM knows it).
>> >> > For example, if the window won't maximize, Mutter will desensitize the
>> >> > maximize item in the window menu and the maximize button in the title
>> >> > bar, as well as not showing the light blue overlay when dragging to the
>> >> > top, and same for edge tiling. All this is not possible if the final
>> >> > decision is in the hands of clients, rather than the compositor.
>> >> Again, I don't see specifically what it is here that you can't do with
>> >> the current protocol. The compositor can resize a surface to whatever
>> >> size it wants, independent of the client. This is something that
>> >> under X would happen between WM and X which is why X has those
>> >> requests, but in Wayland, it all happens internally in the compositor.
>> >> And I think you're overlooking that even on X, resizing a window is
>> >> always a cooperative affair between the WM and client. The client
>> >> always makes the final decision on what it renders into the window, no
>> >> matter what size you insist the window is. If you try to force a
>> >> window to be a different size than what the client thinks it should
>> >> be, on Wayland as well as X, you will have to "make up pixels", i.e. pad
>> >> with a background color or similar hacks. Under X, the X server will
>> >> do that for you with the automatic rendering, under Wayland you have
>> >> to hack it in the compositor. But the point is, you should never have
>> >> to do that, the client should always provide you with a buffer at the
>> >> size you ask for instead.
>> > Yes, but one thing is the toolkit always providing an appropriately
>> > sized buffer, because the protocol mandates so (and therefore the
>> > compositor would clip only if the protocol is violated); another is the
>> > client notifying application-level code, which independently decides if
>> > it likes the new size.
>> > Currently, in X, Win32, Quartz (possibly other), drawing surfaces are
>> > always set up to clip client-side to the actual window size, and there is
>> > no way to remove the clip, because the underlying buffer is already
>> > resized when the application is notified. In wayland, on the other hand,
>> > the buffer is not there until you call wl_egl_window_resize (see the
>> > other email about my position on wl_egl_window) and then make some GL
>> > call to allocate it.
>> > It should be explicitly forbidden to attach a new buffer of the wrong
>> > size. Synchronization on the user resize path should happen by delaying
>> > the attachment of the new buffer until it has been redrawn completely by
>> > app code.
>> You're arguing that Wayland is broken because EGL doesn't clip the
>> user size buffer to what the server thinks it should be, and insist
>> that we add more requests and state to the protocol to enforce this?
>> Why? You're back to telling me how Wayland should work, without giving
>> a concrete example of something that's not possible. It's more
>> important that clients always know exactly that the buffer size of the
>> EGLSurface they're rendering to corresponds to the size of the window
>> they're rendering. That's the main case, that's what all clients want
>> to do, and that's the part we need to get right. We can control
>> "rogue clients", which seems to be your main concern, just fine either way.
> You say it: "it's more important that clients always know exactly that
> the buffer size of the EGLSurface they're rendering to corresponds to
> the size of the window". If you don't resize the buffer as soon as the
> window is resized server-side, you end up with the client drawing a buffer
> of the wrong size, at which point either the compositor bows to the
> client and draws the entire buffer, or the buffer is clipped.
> The only way to control rogue clients is to ensure that the
> implementation at either side of the socket agrees on and enforces
> the same limits - and since policy should not be in the hands of
> low-level libraries, it follows naturally that whatever the compositor
> says, the client must obey.
> A great, positive advantage X11 has, compared to other windowing systems,
> is the window manager, which ensures consistent behavior and policy
> across all applications, irrespective of the toolkit used or the Human
> Interface Guidelines of the originating project. xterm resizes like
> konsole, like gnome-terminal, like xfce4-terminal, despite these being
> completely different code bases, and this is only thanks to the window manager.
> Consistency is what I want to achieve (or rather, to preserve, as this
> is already in X11) - by centralizing all decisions.
I don't see anything in what you say here that argues against the way
Wayland works today.
> You want a concrete example? Consider edge tiling: in that mode, the
> window is not resizable, and attempts to programmatically resize it
> should be cached and reapplied when the window is desnapped. Shall we
> tell the client it is edge tiled? If we go that road, we end up with
> EWMH, trying to specify all possible window states...
Yes, the client needs to know that it can't resize at that time. You
can't force clients to behave a certain way by just clamping their
size; they have to understand that they're being displayed in a
certain way that means they can't currently resize freely. Maybe
we'll need a flag in the configure event that tells the client "be
this exact size", or maybe the client just needs to know that it's edge
tiled and can't try to resize.
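To make the "be this exact size" idea concrete, here is a minimal sketch of how a client could resolve the size it actually renders at. The flag and every name below are invented for illustration; no such flag exists in the protocol today.

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical flag, invented for illustration -- not part of the
 * Wayland protocol as it stands. */
#define CONFIGURE_SIZE_IS_MANDATORY 0x1

static int32_t clamp_i32(int32_t v, int32_t lo, int32_t hi)
{
	return v < lo ? lo : (v > hi ? hi : v);
}

/* The width a client would render at: the compositor's width verbatim
 * when the flag marks it mandatory (e.g. edge tiled), otherwise the
 * compositor's width clamped to the client's own min/max constraints. */
int32_t resolve_width(int32_t configure_width, uint32_t flags,
		      int32_t min_width, int32_t max_width)
{
	if (flags & CONFIGURE_SIZE_IS_MANDATORY)
		return configure_width;
	return clamp_i32(configure_width, min_width, max_width);
}
```

Either way, the point stands: the client has to understand the state it is in, not merely have its size clamped from the outside.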
>> >> As for all the struts and snapping logic during resizing you mention,
>> >> what problem do you see there? It should all work with Wayland as
>> >> well; the compositor grabs the pointer when resizing, looks through
>> >> all its constraints and then figures out what size it thinks the window
>> >> should be, then sends the configure event to the client to ask for a
>> >> buffer at that size. The client will typically provide that, but the
>> >> client has the option to provide a smaller buffer, in case it wants to
>> >> enforce resizing constraints such as character-cell resizing or aspect
>> >> ratio, or even custom constraints.
>> > Without the equivalent of WM_SIZE_HINTS, the compositor does not know
>> > what constraints a window may place on resizing, and therefore does not
>> > know what actions are allowed (for example, maximizing is not just a
>> > matter of a single flag: it may be allowed in certain work-area
>> > configurations and not in others)
>> Sure, as I said, this is one area that hasn't been developed much so
>> far. It doesn't seem like that conflicts with the core resizing
>> protocol though.
> Yes, but when you have told the compositor what the allowed sizes are,
> and the compositor has picked one of those, I would like the protocol to
> say the size is mandatory and will be enforced (again, at either side of
> the socket).
>> > When it comes to character cell resizing, mutter also draws a small
>> > window at the center of the resized one, showing "N x M". I'm not sure
>> > this should be moved to the client-side, but if we keep it in the
>> > compositor, we need to communicate the character cell size.
>> Yeah, we would need some kind of request there to let the client
>> communicate the "logical size" of the window when it attaches a new
>> buffer.
>> >> Finally, wl_shell is far from complete. It's intended to encapsulate
>> >> the interactions desktop clients have with a desktop shell (ie,
>> >> traditional linux desktop scenario) and play the same role for Wayland
>> >> as EWMH does for X. For example, the maximize capability you mention
>> >> needs to be communicated so that when moving the window, mutter will
>> >> know whether or not to show the blue overlay. And many other things
>> >> such as window title and window icon. I've just not tried to tackle
>> >> that, since I don't want to specify that protocol, unless I'm also
>> >> implementing it.
>> > Well, I've prepared a branch locally, that includes everything I think
>> > is needed for window management (including some niceties in the client
>> > side API, like automatic caching of properties). I can send out patches,
>> > but I think a public branch would be easier to review and discuss.
>> Yea, a branch in a repo is definitely easier at this stage, please
>> share the link.
>> As for caching properties, that's something the toolkit should do.
>> The Wayland protocol works by sending out events to describe the state
>> of the objects. When you bind to a global object, the global object
>> will typically send out events to describe its state at bind time and
>> after that, when the object changes state. There are no "getter"
>> requests in the protocol; the assumption is that all state will be
>> cached client side. However, not in libwayland-client.so. That
>> library is just the bare-bones protocol implementation, with one
>> exception: the wl_display object. The wl_display object caches the
>> list of global objects, and that's all you need to bootstrap any state
>> you're interested in.
>> Any toolkit is going to have its own objects that wrap and abstract
>> the libwayland-client.so objects, and the toolkit should cache
>> object state there and use that to provide answers for entrypoints
>> like gdk_window_get_width(). And maybe that's what makes the client
>> side of Wayland confusing, since when you look at it, there is nothing
>> that keeps track of the size of a window. But that part is supposed
>> to be in the toolkit and the point is then that the toolkit can decide
>> the size it wants to actually render at and then make sure that the
>> entire rendering stack below it agrees on that size instead of
>> resizing in response to resize events that may be out of sync with
>> what the toolkit has seen.
> And that's my problem with it: toolkits need to be mixed. I'm not
> talking about mixing Qt and GDK (although Qt wants to load Gtk to render GNOME
> themes...), but mixing GDK, Cairo and Cogl. You don't want Cairo to depend
> on Gdk or Cogl (one of the maintainers already told me that this is not
> acceptable, as Cairo wants the maximum portability),
Did you notice cairo_xlib_surface_set_size()? Cairo doesn't magically
listen for X events and resize the cairo surface behind the toolkit's
back. Cairo works the same way.
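To illustrate that pattern with a toy stand-in (invented names, not cairo's actual types): the surface's idea of its size changes only when the toolkit explicitly pushes a new one down, never behind its back.

```c
#include <assert.h>

/* Toy stand-in for the cairo_xlib_surface_set_size() pattern. The
 * surface never listens for window-system events -- the only way its
 * size changes is an explicit call from the toolkit. */
struct toy_surface {
	int width, height;
	int size_pushes;   /* how many times the toolkit set a size */
};

void toy_surface_set_size(struct toy_surface *s, int w, int h)
{
	s->width = w;
	s->height = h;
	s->size_pushes++;
}
```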
> and neither Cogl
> probably wants to depend on GDK; but on the other hand, the window
> should be created by GDK and handled by GDK (as most non-rendering
> stuff, like events, drags and management, must be handled by GDK, if not
> by the upper layers), so there should be a way for GDK to hand out some
> sort of "window object" down to Cogl and Cairo, with all necessary state
> and change notifications. And then there is the problem of libEGL, as it
> can't of course depend on any toolkit, but must still preserve state and
> receive notifications.
There is no window object other than GdkWindow in GTK. Gdk manages
the events for the underlying window and creates a cairo surface at
the size it thinks the window is. I don't know the details of Clutter
and GDK integration, but I understand you know that area pretty well.
However, in that case, GDK is still going to handle resizing and can
just push the size down to cogl, which then calls
wl_egl_window_resize(). In that way, cogl would work much like cairo.
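A sketch of that push-down flow, with the EGL window stubbed out so it stays self-contained (the real call would be wl_egl_window_resize(); everything else here is an invented name):

```c
#include <assert.h>

/* Stub standing in for a wl_egl_window, so the flow is runnable here. */
struct egl_window_stub {
	int width, height;
};

/* Stand-in for wl_egl_window_resize(window, width, height, dx, dy). */
static void egl_window_resize_stub(struct egl_window_stub *win, int w, int h)
{
	win->width = w;
	win->height = h;
}

/* The cogl-like layer: it never listens for resize events itself, it
 * only acts when the toolkit hands it a size. */
void renderer_set_size(struct egl_window_stub *win, int w, int h)
{
	egl_window_resize_stub(win, w, h);
}

/* The GDK-like layer: on configure it relayouts, then pushes the one
 * agreed-upon size down the stack. */
void toolkit_handle_configure(struct egl_window_stub *win, int w, int h)
{
	/* ... relayout widgets at w x h ... */
	renderer_set_size(win, w, h);
}
```

The design point is that size flows in one direction only, from toolkit to renderer, so no layer below GDK needs its own event stream.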
> In X11, handling a Window XID plus a set of XEvents is enough;
No, in X11 there is a different event stream for resizing the
underlying GLX/EGL surface. In DRI1 it was really bad, since we used
a bit in a shared memory area to notify libGL that the window had
changed and libGL would then do a roundtrip to the server to get the
new size. In DRI2 we're now using a DRI2 invalidate event that
triggers libGL to go out and do a roundtrip to ask for a new set of
buffers in the size that X thinks the window is. X, the wm and the
client are all different processes. At any given time during
interactive or animated resizing, the toolkit's idea of the size, libGL's
idea and the X server's idea are likely to be out of sync.
We don't want the EGL buffers to match the latest size from the
server, we want them to match the size that the client saw when it
scheduled its redraw. Wayland resizing is designed so that there's
one predictable flow or pipeline of resize events instead of hitting
different parts of the stack with different resize events at different
times: The new size comes from the compositor (in case of interactive
resizing) or the clients animation framework (in case of animated
resizing). The toolkit receives that size, re-lays out its widgets and
schedules a redraw. The redraw triggers and tells the rendering
library what size we want the surface to be, renders the frame and
then finally presents the frame to the compositor, which atomically
updates the contents, size and position of the surface.
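That pipeline can be modeled in a few lines -- a toy sketch with invented names, not real API: one size enters at the top and every later stage derives from it, so layout and buffer can never disagree within a frame.

```c
#include <assert.h>
#include <stdint.h>

/* Toy model of the single resize pipeline; all names invented. */
struct frame {
	int32_t layout_w, layout_h;   /* size the toolkit laid out at */
	int32_t buffer_w, buffer_h;   /* size of the buffer it rendered */
};

/* One pass of the pipeline for a configure event of w x h: relayout,
 * then render at exactly the size the toolkit saw -- never at some
 * newer size the server may have picked in the meantime. */
struct frame handle_configure(int32_t w, int32_t h)
{
	struct frame f;
	f.layout_w = w;
	f.layout_h = h;
	f.buffer_w = f.layout_w;
	f.buffer_h = f.layout_h;
	return f;
}
```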
And I suspect this talk about "letting the toolkit have the final say"
is what is making you see red, but realize that I'm not arguing that we
should let clients go crazy. Well-behaved clients will allocate the
size they're asked to, render a new frame and send that back to the
compositor. That's pretty much what all clients will do and
that's the case we have to optimize for. The case where a badly
behaved client tries to use a wrong size is going to look bad when we
clip it, whether we do it in client side EGL or in the compositor.
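For what that clipping fallback amounts to, a minimal sketch (invented names): the visible region is just the intersection of the size the compositor asked for and the buffer the client actually attached.

```c
#include <assert.h>

struct size { int w, h; };

/* Visible region for a badly behaved client: intersect what the
 * compositor asked for with what the client attached. Any shortfall
 * has to be padded with background, any excess is clipped away. */
struct size clipped_region(struct size asked, struct size attached)
{
	struct size out;
	out.w = attached.w < asked.w ? attached.w : asked.w;
	out.h = attached.h < asked.h ? attached.h : asked.h;
	return out;
}
```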