<p dir="ltr">Just a couple quick comments below.</p>
<p dir="ltr">I can't fin where this goes, so I'm putting it here: Why are we having compositors send an initial configure event again? Given that we have a serial, tiling compositors can just send a configure and wait until the client responds to the event. Non-tiling compositors will just send 0x0 for "I don't care" right? I seem to recall something about saving a repaint on application launch in the tiling case but I don't remember that well. I'm not sure that I like the required roundtrip any more than the extra repaint. It's probably a wash.<br>
On Aug 22, 2014 1:48 PM, "Jasper St. Pierre" <<a href="mailto:jstpierre@mecheye.net">jstpierre@mecheye.net</a>> wrote:<br>
><br>
> On Wed, Aug 6, 2014 at 9:39 AM, Pekka Paalanen <<a href="mailto:ppaalanen@gmail.com">ppaalanen@gmail.com</a>> wrote:<br>
>><br>
>> On Thu, 17 Jul 2014 17:57:45 -0400<br>
>> "Jasper St. Pierre" <<a href="mailto:jstpierre@mecheye.net">jstpierre@mecheye.net</a>> wrote:<br>
>><br>
>> Hi Jasper,<br>
>><br>
>> I am not reviewing this patch now. Instead, I took xdg-shell.xml from<br>
>> Weston master as today, and I'm giving comments on it, as you have<br>
>> declared it practically ready for stabilization.<br>
>><br>
>> In irc, I remember you mentioned a few more things you'd like to solve,<br>
>> like size hints, but I won't specifically comment about those, unless<br>
>> it comes up through what's below.<br>
>><br>
>> I am not too familiar with how desktops work, what is the state of the<br>
>> art on X11, or even all of the concepts, so please allow me some room<br>
>> for inaccuracy.<br>
>><br>
>> I am reading this protocol specification only, and I am not looking at<br>
>> the implementation bits at all. I believe the protocol spec should be<br>
>> enough for specifying everything, even though we don't have a good track<br>
>> record with that.<br>
>><br>
>> Almost all questions and comments I give should probably lead to some<br>
>> changes or additions in the documentation answering those questions.<br>
><br>
><br>
> Hey Pekka,<br>
><br>
> I'm going to answer inline here. Anything I've snipped out I agree with and have already fixed. I've started on a rewrite of the documentation in here.<br>
> <br>
>><br>
>> This is a single global, yes? What happens if a client binds to this<br>
>> global multiple times, therefore creating multiple xdg_shell objects?<br>
>> The ping/pong spec needs to define what happens there.<br>
><br>
><br>
> Ping/pongs are on xdg_shell objects, not on surfaces. Semantics of whether a client is "active" meaning whether all of its xdg_shell objects respond to pings or just one of them is not defined here. See below.<br>
> <br>
>><br>
>> Does the compositor have to track, which xdg_* object has been created<br>
>> from which xdg_shell object? For ping/pong purposes?<br>
><br>
><br>
> Compositors might save all the xdg_shell objects they've created, and every minute ping every one. "Pinging a surface" may involve pinging the xdg_shell object that created the surface, or pinging all xdg_shell objects for the client. Weston implements the former, mutter currently implements the latter. I may eventually change mutter to do the former as well.<br>
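</p>
<p dir="ltr">For completeness, the client side of ping/pong is tiny either way; a sketch against the generated bindings:</p>
<pre>
/* Sketch: a client answering xdg_shell.ping so the compositor keeps
 * considering it responsive. Registered with xdg_shell_add_listener(). */
#include <wayland-client.h>
#include "xdg-shell-client-protocol.h"

static void
handle_ping(void *data, struct xdg_shell *shell, uint32_t serial)
{
    xdg_shell_pong(shell, serial);
}

static const struct xdg_shell_listener shell_listener = {
    .ping = handle_ping,
};
</pre>
<p dir="ltr">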
> <br>
>><br>
>> > Only one shell or popup surface can be associated with a given<br>
>> > surface.<br>
>> > </description><br>
>> > <arg name="id" type="new_id" interface="xdg_surface"/><br>
>> > <arg name="surface" type="object" interface="wl_surface"/><br>
>><br>
>> Need to define an error case for 'surface' already having a role.<br>
><br>
><br>
> This has been solved in a generic way by patches on the list.<br>
> <br>
>> Where does a client get the serial it needs to pass here?<br>
>><br>
>> How does the compositor process the serial?<br>
><br>
><br>
> See the xdg_popup documentation for the explanation of these parameters. I stripped out all the local text in the "get_xdg_*" method and just said "see xdg_* documentation", so we can put this in a central place and point people in the right location.<br>
> <br>
>><br>
>> > <arg name="flags" type="uint"/><br>
>><br>
>> What are these flags? Are there no flags defined yet? Would passing an<br>
>> unknown flag cause a protocol error?<br>
><br>
><br>
> Let's just remove the flags. This stemmed from wl_shell_surface, where somebody had a flag for transients, but none for popups. I don't expect us to have flags any time soon, either.<br>
> <br>
>><br>
>> This talks about unresponsive, the ping talks about alive.<br>
><br>
><br>
> Are the two different?<br>
> <br>
>><br>
>> > It provides requests to treat surfaces like windows, allowing to set<br>
>> > properties like maximized, fullscreen, minimized, and to move and resize<br>
>> > them, and associate metadata like title and app id.<br>
>> ><br>
>> > On the server side the object is automatically destroyed when<br>
>> > the related wl_surface is destroyed. On client side,<br>
>> > xdg_surface.destroy() must be called before destroying<br>
>> > the wl_surface object.<br>
>><br>
>> This seems to conflict the description of 'destroy' request below.<br>
>> Instead of this, we could explain what happens if the wl_surface is<br>
>> destroyed before its xdg_surface.<br>
><br>
><br>
> Protocol error. I've made this clearer in both places now.<br>
> <br>
>><br>
>> What about surface/window mapping? To get a window mapped, is it enough<br>
>> to just create a xdg_surface and commit content to the wl_surface? Does<br>
>> the order of these operations matter?<br>
><br>
><br>
> The way I've always viewed a surface, it's a bag of bits that has some existing state, like the current buffer attached, the input/opaque regions, and some others. Creating a role just does something with that state like putting it on the screen. Committing a surface shouldn't depend on the role of the surface, unless it's in another mode (like synchronized subsurfaces).<br>
><br>
> This is obviously not how reality works (cursor surfaces seem to reset their absolute attachment offset when calling set_pointer), but I'd like to consider anything else as a weird special case, and we should strive to not make more of those. You or Kristian can veto me here if you disagree :)<br>
><br>
> So no, it doesn't matter when commit is called in comparison to giving it a role. Mapping the window means that it has a window role (xdg_surface, for example) and that it has a non-NULL buffer attached.<br>
><br>
> That said, mutter implements the semantics right now that if you create an xdg_surface, and then at some point commit the surface with no content currently attached to the surface, it's a protocol error. I don't like the idea of attaching a NULL buffer to unmap a window surface, but it's weird that the behavior of commit changes based on the existence of a role object.<br>
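</p>
<p dir="ltr">A sketch of the two orders under the semantics described here (role plus a non-NULL committed buffer means mapped), assuming the wl_compositor and xdg_shell globals are already bound:</p>
<pre>
/* Sketch: role first or content first -- either way the window maps once
 * it has both the xdg_surface role and a non-NULL committed buffer. */
#include <wayland-client.h>
#include "xdg-shell-client-protocol.h"

static void
map_role_first(struct wl_compositor *compositor, struct xdg_shell *shell,
               struct wl_buffer *buffer)
{
    struct wl_surface *surface = wl_compositor_create_surface(compositor);
    struct xdg_surface *xdg = xdg_shell_get_xdg_surface(shell, surface);

    wl_surface_attach(surface, buffer, 0, 0);
    wl_surface_commit(surface);               /* mapped here */
    (void)xdg;
}

static void
map_content_first(struct wl_compositor *compositor, struct xdg_shell *shell,
                  struct wl_buffer *buffer)
{
    struct wl_surface *surface = wl_compositor_create_surface(compositor);
    struct xdg_surface *xdg;

    wl_surface_attach(surface, buffer, 0, 0);
    wl_surface_commit(surface);               /* just a surface so far */
    xdg = xdg_shell_get_xdg_surface(shell, surface);  /* mapped here */
    (void)xdg;
}
</pre>
<p dir="ltr">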
><br>
>> If the order matters, should xdg_shell.get_xdg_surface raise a protocol<br>
>> error if the wl_surface already has content (i.e. a non-NULL wl_buffer<br>
>> committed)?<br>
><br>
><br>
> No.<br>
> <br>
>><br>
>> What about the initial size of the window? Does this interface<br>
>> automatically deliver any events the moment the protocol object has<br>
>> been created?<br>
><br>
><br>
> Thanks for reminding me about this. This is something we need to stick in here at the last minute: an initial configure to the client after get_xdg_surface should be mandated.<br>
> <br>
>><br>
>> > <request name="destroy" type="destructor"><br>
>> > <description summary="remove xdg_surface interface"><br>
>> > The xdg_surface interface is removed from the wl_surface object<br>
>> > that was turned into a xdg_surface with<br>
>> > xdg_shell.get_xdg_surface request. The xdg_surface properties,<br>
>> > like maximized and fullscreen, are lost. The wl_surface loses<br>
>> > its role as a xdg_surface. The wl_surface is unmapped.<br>
>> > </description><br>
>><br>
>> Very good.<br>
>><br>
>> What about, say, xdg_popup surfaces associated to this one?<br>
><br>
><br>
> Yeah, we should unmap those too.<br>
> <br>
>><br>
>> I think 'set_parent' already describes what happens with xdg_surfaces<br>
>> who had 'set_parent' called with this surface as argument.<br>
><br>
><br>
> This is actually something I changed to match mutter's semantics: it's illegal to unmap a parent window before unmapping a child window.<br>
> <br>
>><br>
>> > </request><br>
>> ><br>
>> > <request name="set_parent"><br>
>> > <description summary="surface is a child of another surface"><br>
>> > Child surfaces are stacked above their parents, and will be<br>
>> > unmapped if the parent is unmapped too. They should not appear<br>
>> > on task bars and alt+tab.<br>
>><br>
>> Any other behavioral changes compared to an xdg_surface without this<br>
>> request called?<br>
><br>
><br>
> The "taskbar / alt-tab" thing is actually wrong, I think, and stemmed from transient windows. mutter might hide certain kind of "child windows" (ugh, I hate this term in the context of subsurfaces) from appearing on the taskbar, but Wayland can't get those types yet. Perhaps this is an idea for a future addti<br>
> <br>
>><br>
>> > </description><br>
>> > <arg name="parent" type="object" interface="wl_surface" allow-null="true"/><br>
>><br>
>> What kind of wl_surfaces are allowed here, what raises a protocol error?<br>
>><br>
>> > </request><br>
>> ><br>
>> > <request name="set_app_id"><br>
>> > <description summary="set surface class"><br>
>><br>
>> Id or class? Pick one. I would kind of prefer not id, because it is<br>
>> quite overloaded term, but if it is the term that DEs nowadays use, then<br>
>> so be it.<br>
><br>
><br>
> Old documentation, came from wl_shell_surface, which was reusing terminology from X11: WM_CLASS. Fixed.<br>
> <br>
>><br>
>> > Set an id for the surface.<br>
>> ><br>
>> > The app id identifies the general class of applications to which<br>
>> > the surface belongs.<br>
>> ><br>
>> > It should be the ID that appears in the new desktop entry<br>
>> > specification, the interface name.<br>
>><br>
>> Could we have a reference to the specification here?<br>
>><br>
>> From some very old memory, I thought this would be the .desktop file<br>
>> path, but apparently this is the... the... dbus interface name like<br>
>> thing, right? Like "org.gnome.shotwell" or whatever.<br>
>><br>
>> And the DE will use this app_id to find the .desktop file to get an<br>
>> icon for this window, etc.?<br>
><br>
><br>
> Yeah. I've updated the documentation to make this clearer.<br>
> <br>
>><br>
>> > </description><br>
>> > <arg name="app_id" type="string"/><br>
>><br>
>> Are there any cases where a string could raise a protocol error?<br>
><br>
><br>
> No.<br>
> <br>
>><br>
>> > <request name="show_window_menu"><br>
>> > <description summary="show the window menu"><br>
>> > Clients implementing client-side decorations might want to show<br>
>> > a context menu when right-clicking on the decorations, giving the<br>
>> > user a menu that they can use to maximize or minimize the window.<br>
>> ><br>
>> > This request asks the compositor to pop up such a window menu at<br>
>> > the given position, relative to the parent surface. There are<br>
>> > no guarantees as to what the window menu contains.<br>
>> ><br>
>> > Your surface must have focus on the seat passed in to pop up the<br>
>> > window menu.<br>
>><br>
>> Is any focus, e.g. pointer, really enough? I.e. just moving the cursor<br>
>> over a window would allow it to pop this up, without needing a<br>
>> button-down? I see for instance wl_pointer.enter carries a serial, will<br>
>> that do?<br>
><br>
><br>
> I've reworded this (and all the other places we talk about grabbing and use wl_seat/serial pairs) to mention that it must be in response to a user action. The implementation in mutter checks that it's in response to a pointer or touch event. The fact that it doesn't allow keyboard is probably a bug.<br>
> <br>
>><br>
>> How will be client know, when the window menu is gone?<br>
><br>
><br>
> Why does it need to know? This is just like some other compositor popup came up and stole focus from it. Or screen lock, or...<br>
><br>
>> Oh btw. we still don't have the popup placement negotiation protocol<br>
>> (to avoid popup going off-screen), but the draft I read a long time ago<br>
>> assumed, that the client knows in advance the size of the popup<br>
>> surface. We don't know the size here. I'm not sure if solving that<br>
>> would change something here.<br>
><br>
><br>
> The compositor can pivot the menu around a point, which is very likely going to be the current cursor position. Specifying a box would be overall more correct if the popup instead stemmed from a button (so it could appear on all sides of the box) but I don't imagine that clients will ever use this on a button. We could add it for completeness if you're really concerned.</p>
<p dir="ltr">Just thinking out loud here but why not just have the client send a list of locations in order of preference. That way the client can make its popups more interesting without breaking the world. Also, the client probably wants feedback of where ilthe popup is going to be before it renders. I'm thinking about GTK's little popup boxes with the little pointer thing that points to whatever you gicked to get the menu. (I know they have a name, I just can't remember it right now. They're all over the place in GNOME shell.)</p>
<p dir="ltr">> <br>
>><br>
>> > </request><br>
>> ><br>
>> > <request name="move"><br>
>> > <description summary="start an interactive move"><br>
>> > Start a pointer-driven move of the surface.<br>
>><br>
>> or touch, or maybe even keyboard?<br>
>> Could we implement keyboard-based window movement with this?<br>
><br>
><br>
> Currently, both mutter and weston check to see that either the pointer or touch are pressed. This is designed for the "I pressed the titlebar" case. It could be used for a keyboard shortcut for initiating window move, but we implement that as a compositor keybinding. Use cases for client-initiated keyboard move are welcome.<br>
><br>
>> > This request must be used in response to a button press event.<br>
>> > The server may ignore move requests depending on the state of<br>
>> > the surface (e.g. fullscreen or maximized).<br>
>><br>
>> Again, how does the client know, when the move has ended?<br>
><br>
><br>
> Should it care? The compositor can move the window at any time it wants, without the client knowing. I don't see the value in telling the client when it has moved, but only sometimes.<br>
> <br>
>><br>
>> > </request><br>
>> ><br>
>> > <enum name="resize_edge"><br>
>> > <description summary="edge values for resizing"><br>
>> > These values are used to indicate which edge of a surface<br>
>> > is being dragged in a resize operation. The server may<br>
>> > use this information to adapt its behavior, e.g. choose<br>
>> > an appropriate cursor image.<br>
>><br>
>> And automatically reposition the surface accordingly? (Resize from<br>
>> top/left.)<br>
><br>
><br>
> Surfaces don't care about positioning, so it doesn't need to know.<br>
> <br>
>><br>
>> > </description><br>
>> > <entry name="none" value="0"/><br>
>> > <entry name="top" value="1"/><br>
>> > <entry name="bottom" value="2"/><br>
>> > <entry name="left" value="4"/><br>
>> > <entry name="top_left" value="5"/><br>
>> > <entry name="bottom_left" value="6"/><br>
>> > <entry name="right" value="8"/><br>
>> > <entry name="top_right" value="9"/><br>
>> > <entry name="bottom_right" value="10"/><br>
>><br>
>> No use for centered resize, e.g. left_right?<br>
><br>
><br>
> You might recognize that the enum is actually a bitfield. Clever users might notice this and try to resize with a value like LEFT | RIGHT. Clever users get protocol errors.<br>
><br>
> If you know of a WM that uses centered resize, let me know and I'll check it out. I can't imagine a good UI for this.</p>
<p dir="ltr">I'm pretty sure I've seen this somewhere actually. I think it was a compiz plugin. In any case, it's not in any of the usual ones and probably won't be baked into toolkits.</p>
<p dir="ltr">> <br>
>><br>
>> What about touch and keyboard, not just pointer? Or any other input<br>
>> style. Resize with keyboard might be a nice proof of concept, but with<br>
>> this method rather than what weston-resizor does, no?<br>
><br>
><br>
> That's implemented by a WM keybinding. Again, this is more for initiating resize dragging from decoration edges.<br>
> <br>
>><br>
>> So with this, the compositor is handling the repositioning of the<br>
>> surface automatically, is it not? No need for the client to fiddle with<br>
>> wl.surface.attach arguments x,y?<br>
><br>
><br>
> Yeah, we should probably mention that the surface is anchored. In fact, attach x/y are completely ignored when in the "resizing" state. I keep discovering more reasons to hate attach x/y.</p>
<p dir="ltr">The attach.x/y are supposed to tell the compositor how the surface resized. There are a lot of places we ignore it including whenever the surface is full-screen. If you specify in the spec that the resize is pinned by the compositor, that's fine. I would agree that set_cursor is the wonky one hear; at least given the way Tue spec is worded.</p>
<p dir="ltr">> <br>
>><br>
>> Could also mention, that the client will be receiving configure events<br>
>> while interactive resizing happens, and when the operation ends.<br>
><br>
><br>
> Sure.<br>
><br>
>> > </description><br>
>> > <entry name="maximized" value="1" summary="the surface is maximized"><br>
>> > The surface is maximized. The window geometry specified in the configure<br>
>> > event must be obeyed by the client.<br>
>><br>
>> I suppose "the configure event" really means "the latest configure<br>
>> event the client has received"? A client may receive many configure<br>
>> events before it gets to repainting, right?<br>
>><br>
>> > </entry><br>
>> > <entry name="fullscreen" value="2" summary="the surface is fullscreen"><br>
>> > The surface is fullscreen. The window geometry specified in the configure<br>
>> > event must be obeyed by the client.<br>
>><br>
>> Really? So, will we rely on wl_viewport for scaling low-res apps to<br>
>> fullscreen? No provision for automatic black borders in aspect ratio or<br>
>> size mismatch, even if the display hardware would be able to generate<br>
>> those for free while scanning out the client buffer, bypassing<br>
>> compositing?<br>
><br>
><br>
>> Since we have a big space for these states, I suppose we could do those<br>
>> mismatch cases in separate and explicit state variants of fullscreen,<br>
>> could we not?<br>
><br>
><br>
> I explicitly removed this feature from the first draft of the patch simply to make my life easier as a compositor writer. We could add additional states for this, or break fullscreen into multiple states: "fullscreen + size_strict" or "fullscreen + size_loose" or something.<br>
><br>
> I am not familiar enough with the sixteen different configurations of the old fullscreen system to make an informed decision about what most clients want. Your help and experience is very appreciated. I'm not keen to add back the matrix of configuration options.</p>
<p dir="ltr">Yeah, not sure what to do here. I like the idea of the compositor doing it for the client. Sure, you could use wl_viewport, subsurfaces, and a couple black surfaces for letterboxing. However that is going to be far more difficult for the compositor to translate into overlays/planes than just the one surface and some scaling instructions.</p>
<p dir="ltr">><br>
>> > </entry><br>
>> > <entry name="resizing" value="3"><br>
>> > The surface is being resized. The window geometry specified in the<br>
>> > configure event is a maximum; the client cannot resize beyond it.<br>
>> > Clients that have aspect ratio or cell sizing configuration can use<br>
>> > a smaller size, however.<br>
>><br>
>> Oh, there is an explicit state for this, ok.<br>
><br>
><br>
> This is about the interactive resizing. The comment makes me think you might be confused and think it's related to fullscreen.<br>
> <br>
>><br>
>> > The width and height arguments specify a hint to the window<br>
>> > about how its surface should be resized in window geometry<br>
>> > coordinates. The states listed in the event specify how the<br>
>> > width/height arguments should be interpreted.<br>
>><br>
>> The window geometry coordinates are not clearly specified. Are they<br>
>> defined by the set_window_geometry such that 0,0 here means x,y there?<br>
><br>
><br>
> What this means is that if you get a configure event with a width/height of let's say 640x480, then the window geometry of the window should be 640x480. If you have 10px shadows around the edges of your window, then the buffer will be 660x500, and you will call set_window_geometry(10, 10, 640, 480).<br>
><br>
> It doesn't mean anything about the current geometry set. It means something about the new window geometry you should be setting.<br>
><br>
> See the documentation of set_window_geometry for the definition of "window geometry coordinate space" and the rationale behind it.<br>
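</p>
<p dir="ltr">The 10px-shadow example above, spelled out (a sketch against the generated bindings; buffer management and ack_configure are elided to keep it short):</p>
<pre>
/* Sketch: a client with 10px client-side shadows answering a configure
 * that asked for a 640x480 window geometry with a 660x500 buffer. */
#include <wayland-client.h>
#include "xdg-shell-client-protocol.h"

#define SHADOW_SIZE 10

static void
commit_with_shadows(struct wl_surface *surface,
                    struct xdg_surface *xdg_surface,
                    struct wl_buffer *buffer,   /* 660x500 */
                    int32_t geom_w,             /* 640 */
                    int32_t geom_h)             /* 480 */
{
    /* Tell the compositor where the "real" window sits inside the buffer,
     * so 640x480 in the configure event means 640x480 of visible window,
     * shadows excluded. */
    xdg_surface_set_window_geometry(xdg_surface, SHADOW_SIZE, SHADOW_SIZE,
                                    geom_w, geom_h);

    wl_surface_attach(surface, buffer, 0, 0);
    /* (ack_configure with the event's serial goes before this commit.) */
    wl_surface_commit(surface);
}
</pre>
<p dir="ltr">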
> <br>
>><br>
>> > A client should arrange a new surface, and then send a<br>
>> > ack_configure request with the serial sent in this configure<br>
>> > event before attaching a new surface.<br>
>><br>
>> "Before attaching a new surface" is vague. Does this mean "immediately<br>
>> before issuing wl_surface.commit on the associated wl_surface"?<br>
>> One might also argue about "immediately", I think your point is that<br>
>> the very next commit after a ack_configure applies all the changes<br>
>> suggested by that configure. Right?<br>
><br>
><br>
> It allows compositors to know where in the event stream this client is. For instance, the compositor might want to maximize a constantly animating surface. To do that, it will need to move the surface's view to the top-left of the screen, but in order to know when it can move the view, it needs to know when the proper buffer is attached so we don't see a two-step change of the surface moving to the top-left, then becoming maximized. The surface is constantly animating, so there will be extra attaches between the round-trips of "set_maximized", "configure" and then "attach/commit".<br>
><br>
> The compositor could use heuristics like "when I see window geometry equivalent to what I sent out, then assume it's the surface we're waiting on", but better design this into the protocol.<br>
><br>
> (Note that this isn't specific to constantly animating surfaces. When releasing the maximize button in weston-terminal, it first attaches a new surface of the maximize button going from the pressed state back to hover state at the small size, before it receives the configure. The compositor sees two attaches: small window with hover, and big maximized window.)<br>
> <br>
>><br>
>> What should a client do, if it receives states it does not understand?<br>
><br>
><br>
> Ignore it.<br>
> <br>
>><br>
>> What if the compositor sends a state that demands something, and the<br>
>> client does not obey the demand, because a) the client is broken, or b)<br>
>> the client is not broken, but does not know the state?<br>
><br>
><br>
> There was a plan for a complex handshaking protocol, but I'm tempted to say that clients must understand all core states (up to their surface version), and clients need to ignore states in extensions they don't understand. Compositors that send core states that are below a surface's version are broken.<br>
><br>
> Handshaking for extension states is done by some other unspecified mechanism. In mutter/GTK+, we use the existence of a bound global from our private protocol (gtk_shell) to know the client knows about the state.<br>
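</p>
<p dir="ltr">Client-side, "ignore what you don't understand" falls out of how the states array is walked anyway; a sketch with the generated bindings:</p>
<pre>
/* Sketch: parse the states array from a configure event, ignoring any
 * value this client doesn't know about. */
#include <stdbool.h>
#include <stdint.h>
#include <wayland-client.h>
#include "xdg-shell-client-protocol.h"

struct window_state {
    bool maximized, fullscreen, resizing, activated;
};

static struct window_state
parse_states(struct wl_array *states)
{
    struct window_state out = { false, false, false, false };
    uint32_t *state;

    wl_array_for_each(state, states) {
        switch (*state) {
        case XDG_SURFACE_STATE_MAXIMIZED:  out.maximized = true;  break;
        case XDG_SURFACE_STATE_FULLSCREEN: out.fullscreen = true; break;
        case XDG_SURFACE_STATE_RESIZING:   out.resizing = true;   break;
        case XDG_SURFACE_STATE_ACTIVATED:  out.activated = true;  break;
        default:
            /* A newer core state or an extension state we never opted
             * into: ignore it. */
            break;
        }
    }
    return out;
}
</pre>
<p dir="ltr">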
><br>
>> > <arg name="serial" type="uint"/><br>
>><br>
>> This serial is in a completely new "serial name space", isn't it?<br>
>> Not to be confused with e.g. input serials.<br>
><br>
><br>
> It's a serial. wl_display_next_serial serial. I thought a serial was a serial was a serial.<br>
><br>
>> > </event><br>
>> ><br>
>> > <request name="ack_configure"><br>
>> > <description summary="ack a configure event"><br>
>> > When a configure event is received, a client should then ack it<br>
>> > using the ack_configure request to ensure that the compositor<br>
>> > knows the client has seen the event.<br>
>> ><br>
>> > By this point, the state is confirmed, and the next attach should<br>
>> > contain the buffer drawn for the configure event you are acking.<br>
>><br>
>> Again, "attach" is vague, you probably mean wl_surface.commit.<br>
>><br>
>> Is it allowed to skip acks? If a client receives configure events with<br>
>> serials 3,4,5, can it just ack 5 directly?<br>
><br>
><br>
> Yes. I thought we had this written down somewhere.<br>
> <br>
>><br>
>> What if you ack 4, ack 5, without a commit in between?<br>
><br>
><br>
> Right now, we just update the request for the new serial.<br>
> <br>
>><br>
>> What if you ack 5, ack 3 (going backwards, i.e. client bug), would<br>
>> there be a protocol error?<br>
><br>
><br>
> No. Again, right now we just update the serial that we'll use on the client. Keep in mind that it could be correct if the serial has wrapped around.<br>
> <br>
>><br>
>> Or if you ack a serial the compositor has<br>
>> not sent yet? Or ack the same serial twice?<br>
><br>
><br>
> Right now, we only use this information for pending moves, so surfaces might move too early or too late. I don't really care what happens to illegal clients, but we can't really validate this properly. Is "undefined behavior", which perhaps includes a protocol error, OK to you?<br>
> <br>
>><br>
>> > </description><br>
>> > <arg name="x" type="int"/><br>
>> > <arg name="y" type="int"/><br>
>> > <arg name="width" type="int"/><br>
>> > <arg name="height" type="int"/><br>
><br>
> <br>
>><br>
>> Is there a protocol error, if width or height <= 0?<br>
><br>
><br>
> Let's say there is, even if there isn't one right now.<br>
> <br>
>><br>
>> > </request><br>
>> ><br>
>> > <request name="set_maximized" /><br>
>> > <request name="unset_maximized" /><br>
>><br>
>> These two requests are undocumented.<br>
><br>
><br>
> ... and? Do we need documentation here? It would just be <description summary="set the window as maximized">Maximize the surface</description>. Documentation by reshuffling words isn't any more useful than no documentation.<br>
> <br>
>><br>
>> ><br>
>> > <request name="set_fullscreen"><br>
>> > <description summary="set the window as fullscreen on a monitor"><br>
>> > Make the surface fullscreen.<br>
>> ><br>
>> > You can specify an output that you would prefer to be fullscreen.<br>
>> > If this value is NULL, it's up to the compositor to choose which<br>
>> > display will be used to map this surface.<br>
>> > </description><br>
>> > <arg name="output" type="object" interface="wl_output" allow-null="true"/><br>
>><br>
>> The issue I raised at the fullscreen state enum... otherwise ok.<br>
>><br>
>> > </request><br>
>> > <request name="unset_fullscreen" /><br>
>> ><br>
>> > <request name="set_minimized" /><br>
>><br>
>> And a couple more undocumented requests. Maybe answer the FAQ "where is<br>
>> my unset_minimized?" I.e. how do you get out of minimized, since there<br>
>> is no obvious request.<br>
>><br>
>> I would like to see an overview of how this state system works. Can a<br>
>> client issue set_maximized at any time? What happens then? At what<br>
>> point does the window get drawn in maximized state?<br>
><br>
><br>
> Calling set_maximized requests for the compositor to make the surface maximized. The compositor will then send the surface a configure event containing the correct set of states ("maximized", "activated"), and the correct width/height (640x480). The client then takes this information, draws its new state, attaches it, calls ack_configure, and then commits.<br>
><br>
> I'm not sure where we should put this overview. Perhaps in xdg_surface?<br>
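</p>
<p dir="ltr">As client code, that round trip is roughly this (a sketch; draw_at_size() is a hypothetical helper that returns a wl_buffer drawn at the given window-geometry size):</p>
<pre>
/* Sketch: request maximize, then honour the configure that comes back. */
#include <wayland-client.h>
#include "xdg-shell-client-protocol.h"

/* Hypothetical redraw helper. */
struct wl_buffer *draw_at_size(struct wl_surface *surface,
                               int32_t width, int32_t height);

static void
on_maximize_clicked(struct xdg_surface *xdg_surface)
{
    /* Only a request; nothing changes until the configure arrives. */
    xdg_surface_set_maximized(xdg_surface);
}

static void
handle_configure(void *data, struct xdg_surface *xdg_surface,
                 int32_t width, int32_t height,
                 struct wl_array *states, uint32_t serial)
{
    struct wl_surface *surface = data;
    struct wl_buffer *buffer = draw_at_size(surface, width, height);

    /* Ack the serial so the compositor knows the next commit carries the
     * buffer drawn for this configure, then attach and commit. */
    xdg_surface_ack_configure(xdg_surface, serial);
    wl_surface_attach(surface, buffer, 0, 0);
    wl_surface_commit(surface);
}
</pre>
<p dir="ltr">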
><br>
>> Can the compositor choose to set arbitrary states on its own will?<br>
><br>
> <br>
> Yes. If the user hits the maximize keybinding, the client will get a configure event out of the blue telling the surface to maximize itself.<br>
><br>
>> > The popup grab continues until the window is destroyed or a mouse<br>
>> > button is pressed in any other clients window. A click in any of<br>
>> > the clients surfaces is reported as normal, however, clicks in<br>
>> > other clients surfaces will be discarded and trigger the callback.<br>
>><br>
>> So the client will be getting input 'enter' and 'leave' events as<br>
>> usual, but only this client?<br>
><br>
><br>
> Yep. An "owner-events grab" (X11 parlance) means that the client receives events for all its surfaces, not just the popup. This is so that when navigating through traditional submenus, the user can navigate back over to the submenu to the left, and that will receive motion events, allowing the client to start the timer and dismiss the top menu popup.<br>
><br>
> I rewrote the documentation to make this and the other concerns clearer.<br>
><br>
>> What if the user alt+tabs during a popup?<br>
>><br>
>> When a popup is open, are any xdg_surface operations on the client's<br>
>> other surfaces invalid? Can a client create a second popup? What<br>
>> happens then?<br>
><br>
><br>
> Submenus work by nesting xdg_popups. So, a client creates an xdg_popup B with parent xdg_surface A, then xdg_popup C with parent xdg_popup B, etc.<br>
><br>
> I don't know what it means to have two popups for an xdg_surface. We can make up some strict semantics for now. Let's say it's a protocol error to make more than one popup, if it's not parented to the existing topmost popup.<br>
><br>
>> ><br>
>> > The x and y arguments specify the locations of the upper left<br>
>> > corner of the surface relative to the upper left corner of the<br>
>> > parent surface, in surface local coordinates.<br>
>><br>
>> This should go with the get_xdg_popup, no?<br>
><br>
><br>
> I just put "see xdg_popup" in the docs for "get_xdg_popup".<br>
> <br>
>><br>
>> > </request><br>
>> ><br>
>> > <event name="popup_done"><br>
>> > <description summary="popup interaction is done"><br>
>> > The popup_done event is sent out when a popup grab is broken,<br>
>> > that is, when the users clicks a surface that doesn't belong<br>
>> > to the client owning the popup surface.<br>
>><br>
>> "The client should destroy the xdg_popup object." I presume?<br>
>> Will re-use of the wl_surface be allowed?<br>
><br>
><br>
> You can call get_xdg_popup on the same wl_surface again, but you can't transform it into any other role.<br>
> <br>
>><br>
>> > </description><br>
>> > <arg name="serial" type="uint" summary="serial of the implicit grab on the pointer"/><br>
>><br>
>> What is this serial used for?<br>
>><br>
>> Oh, is it because the client might not get a input 'enter' event if the<br>
>> input focus is already on some other surface of the client?<br>
><br>
><br>
> No. It's the serial of the *implicit grab on the pointer* -- that is, the serial we passed to get_xdg_popup. I have no idea why it's here, actually. Nothing uses it. Removed it in my local branch.<br>
> <br>
>><br>
>> > </event><br>
>> ><br>
>> > </interface><br>
>> > </protocol><br>
>><br>
>><br>
>> Okay, phew. Nothing drastically broken that I can see, if not maybe that<br>
>> missing destructor. :-)<br>
><br>
><br>
> Thank you for your really good review. You really took the time and thought it through. I'll have a few more patches going out soon that rewrite the docs for xdg-shell and correct all of the issues mentioned.<br>
><br>
>><br>
>> Lots and lots of details to specify, though.<br>
><br>
><br>
> That's the dangerous thing: making a spec so dense and thorough it becomes unreadable.<br>
> <br>
>><br>
>><br>
>> Thanks,<br>
>> pq<br>
><br>
><br>
><br>
><br>
> -- <br>
> Jasper<br>
><br>
> _______________________________________________<br>
> wayland-devel mailing list<br>
> <a href="mailto:wayland-devel@lists.freedesktop.org">wayland-devel@lists.freedesktop.org</a><br>
> <a href="http://lists.freedesktop.org/mailman/listinfo/wayland-devel">http://lists.freedesktop.org/mailman/listinfo/wayland-devel</a><br>
><br>
</p>