[PATCH wayland v3] protocol: Add minimize/maximize protocol

Jason Ekstrand jason at jlekstrand.net
Thu Mar 21 08:39:20 PDT 2013


Hi Scott,

> One important thing to note here is that client != surface. In fact,
> clients can have multiple surfaces. We might need to keep this in mind
> for things like closing single surfaces demonstrated here
> https://github.com/soreau/wayland/commit/65f8a3f5f683c3a91913a26496cc373633f01896

Yes, particularly for the closing case.

> This allows us to tell a client that 'the user has indicated that they
> would like one of your surfaces to be closed (this one)'. By way of
> contrast, the current code kills the entire client and all of its
> surfaces. Unless I am not understanding correctly, we don't have a way
> to tell a client to kill only one (or more) of its surfaces with the
> current protocol. It might be a good idea to write a test client that
> simply has two surfaces from the same client to exercise these cases
> (unless there's one already). We've been testing with
> google-chrome mostly.

This should be taken care of by adding a "request_close" surface
event (see previous email).
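
As a rough sketch (the event name and which interface it hangs off of
are both still up in the air), I'm imagining something along these
lines:

  <event name="request_close">
    <description summary="the user wants this surface closed">
      Sent when the user asks for this particular surface to be
      closed.  The client decides whether and how to comply, so it
      can prompt about unsaved state or close just the one window.
    </description>
  </event>

Since the event is delivered on a specific surface object, the
multi-surface-client case falls out naturally.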

>> The point here is that none of this should get
>> implemented by the client removing the buffer from the surface in
>> order to unmap it.  That's unacceptable.  Instead, it should happen in
>> terms of modes.
>
> I don't know what you mean by this really. Two questions at least:
> 1) What do you mean by modes exactly?
> 2) What would you do instead of placing it from the drawn list to a
> dedicated minimized surface list?

This comment merely meant that the client shouldn't minimize a window
by attaching a null buffer.  Instead, it should still be a
fully-functioning window (possibly receiving frame events) even though
it's minimized.

>> There is one more question that I think needs to be answered.  And
>> that is: do we handle things in terms of set/unset or in terms of
>> set_maximized, set_fullscreen, set_minimized, and set_normal (probably
>> want a better name for that one).  Really, which of those two we do
>> doesn't matter that much because the toolkit can force it either way.
>> It's mostly a matter of who tracks the state and handles it.  I think
>> I like simply setting the state instead of keeping track of set/unset
>> better, but I'm open to discussion on that.
>
> I have this all working in gh next. The only thing left to consider
> that I can think of is: do we want unmaximizing, unminimizing, or
> unfullscreening a surface to retain its stacking state? So basically,
> if there is a bottom-level surface and you change its state and then
> toggle it back, do we always want it on top no matter what? Or
> do we want to optionally support retaining stacking order on state
> restore (setting back to 'normal'). If we want to support this
> feature, then a new un* request is required for each state set
> request. I move that we do support this feature and I'm working on
> this in gh next.

I think stacking should be an orthogonal concern.  If we want client
control over it, it needs its own interface.  Otherwise, it should be
left as an implementation detail.  I don't think we want to dirty the
min/max protocol with stacking details.

> One other question, do we want to support fullscreen
> from a source other than the client? For instance, we could have
> fullscreen as a selection in a drop-down menu. I guess maximize and
> minimize are expected features, fullscreen is optional. For this, we'd
> need (un)fullscreen events.  Hm, I wonder if there's a way to have the
> client tell us what states it actually supports so we can correctly
> represent this in the panel taskbar menus..

There's no precedent for the compositor full-screening things. Also,
unless the client is specifically designed to full-screen, you won't
be able to get out of it.  For these two reasons, I think we should
leave that entirely up to the client.

> After looking at the code in a working state, it's far clearer to have
> explicit $state and un$state events/requests because there are a lot of
> paths in the code we have to run through to make this all sync up and
> work out properly. Using a state variable to simplify the protocol
> will likely complicate the description in the protocol and complicate
> the code.

Ok, I think you completely misunderstood what I was trying to say
here.  My point is that min/max is a state machine.  A window, from
the user's perspective, is minimized, maximized, fullscreen, or
normal.  It is never two of those at the same time.  My question was
about whether, from a protocol perspective, it should be handled in
terms of setting/unsetting one flag per potential state or whether the
client should simply tell the compositor what state to go to next.
For example, if the window is maximized and you click the unmaximize
button, the client will send the set_normal request instead of
unset_maximized.
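
Concretely, that would mean one request per target state, something
like this (argument lists elided; set_maximized already exists on
wl_shell_surface, while set_minimized and set_normal are just
placeholder names):

  <request name="set_maximized"/>  <!-- already exists -->
  <request name="set_minimized"/>  <!-- new -->
  <request name="set_normal"/>     <!-- new; needs a better name -->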

This approach has a number of benefits.  First, it simplifies the
protocol: fewer requests.  Second, it removes most of the state from
the compositor and lets the toolkits deal with it.  This is especially
useful when the toolkit may be messing with windows other than the
current one; in that case, the toolkit would have to keep track of all
that state itself anyway.  Also, it removes any ambiguity as to what
is going on inside the compositor and keeps the client in control; the
client always knows the window's state.

A note on set_toplevel: we need to do some thinking here.  It came
up on IRC some time back that we may want to have toplevel be a role
(like subsurface) rather than a flag.  Adding this max/min jazz might
be the right time to do that.  In that case, I think a
toplevel would be one role and a toplevel surface would have
maximized, minimized, and normal states while full-screen would be its
own role.

If we're re-arranging things like this we could even do it all with a
single set_state request and an enum. (Just a thought, not sure if I
like it.)
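
Purely as a sketch of that thought (every name here is made up):

  <interface name="wl_toplevel" version="1">
    <enum name="state">
      <entry name="normal" value="0"/>
      <entry name="maximized" value="1"/>
      <entry name="minimized" value="2"/>
    </enum>
    <request name="set_state">
      <arg name="state" type="uint"/>
    </request>
  </interface>

Fullscreen is deliberately missing from the enum because, in this
scheme, it would be its own role.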

> 1) If we always add protocol to the end, it will likely be incoherent,
> unmatched, and not very easy to read. I know the wl_shell interface
> is disposable, but for the sake of clarity about how the wayland
> protocol versioning system works, I'd like to know what the expected
> convention is.
> 2) I noticed that changing the version requires no changes client
> side. How is this supposed to work?
> 3) The protocol semantics were recently changed. When the semantics
> of existing protocol are changed, does this not constitute an
> interface version bump?

1) They always have to go at the end.  Request and event opcodes are
assigned by position within the interface, so inserting anything in
the middle renumbers everything after it; older clients would then get
confused as to which event you are sending and which request they are
sending.  (See the sketch below.)
2) I'm not sure.
3) I'm not sure on some of these things.  However, we should only have
to bump the interface version once we've released the new interface.
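
To illustrate point 1 (only a sketch, with set_minimized standing in
for whatever we actually add): a version-2 addition gets appended
after everything from version 1 and is tagged with the version it
first appeared in:

  <interface name="wl_shell_surface" version="2">
    <!-- all version-1 requests and events, unchanged and in order -->
    <request name="set_minimized" since="2"/>
  </interface>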

