[RFC] Sub-surface protocol and implementation v1

Pekka Paalanen ppaalanen at gmail.com
Thu Jan 10 00:49:22 PST 2013


On Wed, 9 Jan 2013 18:14:12 +0100
John Kåre Alsaker <john.kare.alsaker at gmail.com> wrote:

> On Wed, Jan 9, 2013 at 10:53 AM, Pekka Paalanen <ppaalanen at gmail.com> wrote:
> > On Tue, 8 Jan 2013 21:50:20 +0100
> > John Kåre Alsaker <john.kare.alsaker at gmail.com> wrote:
> >
> >> My goals for a subsurface implementation are these:
> >> - Allow nesting to ease interoperability for client side code.
> >> - Allow a surface without any content to have an input region and let
> >> the content be presented in a number of adjacent subsurfaces. This
> >> would simplify input handling by a lot.
> >> - Allow clients to commit a set of surfaces.
> >>
> >> On Tue, Jan 8, 2013 at 8:50 AM, Pekka Paalanen <ppaalanen at gmail.com> wrote:
> >> >
> >> > On Mon, 7 Jan 2013 16:56:47 +0100
> >> > John Kåre Alsaker <john.kare.alsaker at gmail.com> wrote:
> >> >
> >> >> On Fri, Dec 21, 2012 at 12:56 PM, Pekka Paalanen <ppaalanen at gmail.com> wrote:
> >> >> > - how should commits work in parent vs. sub-surfaces?
> >> >> Commit should work the same way. It should commit itself and all its
> >> >> children. Furthermore, it should commit all reordering of its
> >> >> immediate children.
> >> >
> >> > Could you give some rationale why this is preferred to any other way,
> >> > for instance sub-surface commit needing a main surface commit to apply
> >> > all the pending state?
> >> We don't want to keep another copy of the surface state around, and
> >> with dummy surfaces and nesting we can commit a set of surfaces as
> >> we please.
> >
> > Not having to keep another copy of state, yes indeed. Committing a set
> > of surfaces however has some corner cases. How do we avoid committing a
> > sub-surface that is just in the middle of updating its state, when
> > committing the parent? Is it easy enough to avoid in applications?
> We use a dummy root surface, with a number of child surfaces which
> the client can choose to commit.

Sorry, I don't understand how this is a solution; this is the
original problem. Continuing on speculation, since we don't have real
examples:

Say, an application gives a sub-surface to a library, saying: use this
for your overlay stuff. Assuming we can nest sub-surfaces, the library
can go and create sub-surfaces for the given surface. Now, if the
application commits its main surface for the window, or the sub-surface
it gave to the library, it will commit the whole tree of surfaces down
to the sub-surfaces the library created itself. How can we have any
consistency in all these surfaces' states?
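
To make the hazard concrete, here is a sketch. Note that
get_subsurface() and hand_to_library() are only placeholders for
illustration; no such requests exist in any protocol yet:

  /* Application side: */
  struct wl_surface *main_surface = wl_compositor_create_surface(compositor);
  struct wl_surface *overlay = wl_compositor_create_surface(compositor);
  get_subsurface(overlay, main_surface);  /* placeholder parent link */
  hand_to_library(overlay);

  /* Library side, invisible to the application: */
  struct wl_surface *internal = wl_compositor_create_surface(compositor);
  get_subsurface(internal, overlay);      /* placeholder parent link */

  /* Later, the application repaints only its own content: */
  wl_surface_attach(main_surface, buffer, 0, 0);
  wl_surface_commit(main_surface);
  /* With recursive commit semantics, this also applies whatever
   * pending state 'overlay' and 'internal' have accumulated, even
   * if the library is half-way through building its next frame. */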

I guess a problem here is that the application should not commit the
library's surfaces to begin with, which is what you suggested the dummy
root surface for, right?

However, the dummy surface as the root surface (i.e. the window main
surface) will not work, because it is the surface the shell will be
managing. Sub-surfaces cannot be assigned a shell surface role. Are you
proposing to change this?

If you are, then the protocol will allow a new class of semantic
errors: assigning shell roles to more than one sub-surface in a window
surface set. I think this change would be a net loss, especially if we
can avoid this altogether with commit semantics.

If instead you would still assign the shell role to the dummy root
surface, we will have problems with mapping, since by definition a
dummy surface does not have content. We cannot use the first attach as
a map operation, and need more protocol to fix that.
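
For reference, this is roughly how a toplevel window becomes mapped
today (a sketch against the current wl_shell interface, with buffer
creation omitted); a contentless dummy surface never reaches the
attach step that triggers mapping:

  struct wl_surface *surface = wl_compositor_create_surface(compositor);
  struct wl_shell_surface *shsurf =
          wl_shell_get_shell_surface(shell, surface);

  wl_shell_surface_set_toplevel(shsurf);     /* the shell agrees */

  wl_surface_attach(surface, buffer, 0, 0);  /* the surface gets content */
  wl_surface_damage(surface, 0, 0, width, height);
  wl_surface_commit(surface);                /* first attach + commit maps */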

> >> > How would we implement the dummy parent? There is no concept of a dummy
> >> > surface in the protocol yet, and currently mapping a sub-surface
> >> > depends on mapping the immediate parent.
> >> A dummy parent would simply be a surface without content. It would be
> >> mapped (by the shell; it should be left out when rendering). It would
> >> have the size of its children, or we could add a way to specify the
> >> size of a surface without buffers, which could be shared with a
> >> scaling implementation. I'm not very clear on what sizes are used for
> >> though.
> >
> > Yeah, this would be a major change from the current behaviour in the
> > protocol.
> >
> > Currently, a surface becomes mapped when a) it has content, and b) the
> > shell agrees, i.e. a proper window type has been set via wl_shell. For
> > sub-surfaces, the condition b) instead requires that the
> > immediate parent surface is mapped. Therefore, if the parent gets
> > unmapped, all its direct and indirect sub-surfaces are unmapped, too,
> > so there is an easy way for a client to hide a whole window.
> I thought it was required to interact with the shell in order to get a
> visible surface.

Right, we should probably talk about windows here. You don't need shell
interactions to get a cursor surface visible, or a drag icon. You do
need to poke the shell in a desktop environment to get a window
visible. My mistake on terminology.
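
That is, a cursor surface only needs content and a pointer to attach
to; the serial must come from a pointer enter event:

  /* Visible without any shell interaction: */
  wl_surface_attach(cursor_surface, cursor_buffer, 0, 0);
  wl_surface_commit(cursor_surface);
  wl_pointer_set_cursor(pointer, enter_serial,
                        cursor_surface, hotspot_x, hotspot_y);

  /* A window additionally needs a shell role: */
  wl_shell_surface_set_toplevel(
          wl_shell_get_shell_surface(shell, window_surface));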

> > If we allow surfaces to be mapped without content, we need some protocol
> > to map them, either in addition to or replacing the trigger "attached a
> > wl_buffer and committed". That might be a new request. Logically, it
> > should also have a counterpart, an unmap request, that works regardless
> > of surface content.
> >
> > Hmm, on second thought, maybe we should just ignore the mapped state
> > for sub-surfaces, and simply go with the main surface, which is managed
> > directly by the shell. Maybe this is what you were after all along?
> Yes, the entire surface group should be mapped/unmapped at once, and
> the shell should only interact with the root surface.

Right, that makes things cleaner.

> > That leaves only the problem of size of a contentless sub-surface.
> > Input region outside of the wl_surface is ignored, so some size is
> > needed to be able to have an input region.
> >
> > Sure, a contentless surface over a compound window would be a handy
> > trick to normalize all input to the compound window into the same
> > coordinate space, but I don't think it's convenient enough to warrant
> > the protocol complications. I'm going to implement input event
> > coalescing from sub-surfaces in the toytoolkit, and it doesn't look
> > too hard so far.
> Handling mouse enter/leave events doesn't look very fun. Also, it
> wouldn't complicate the protocol very much. The surface's size could
> be inferred from its children or set explicitly. That probably has to
> be done for surfaces without content and input region too. I'm not
> sure what size is used for besides clipping the input region of
> surfaces.

Yes, there is a race. Any server will likely send the leave and enter
events in one go, but in theory there is a moment in between, when the
pointer or keyboard focus is on neither surface, and the application
might render its window as unfocused during that gap.
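
One way a toolkit could mask that gap is to defer the unfocused
repaint until the event burst has been processed. A sketch, where
window_from_surface() and schedule_focus_check() are made-up helpers:

  static void
  pointer_enter(void *data, struct wl_pointer *pointer, uint32_t serial,
                struct wl_surface *surface, wl_fixed_t sx, wl_fixed_t sy)
  {
          struct window *window = window_from_surface(surface);

          window->focus_count++;
  }

  static void
  pointer_leave(void *data, struct wl_pointer *pointer,
                uint32_t serial, struct wl_surface *surface)
  {
          struct window *window = window_from_surface(surface);

          window->focus_count--;
          /* Do not repaint as unfocused yet; the enter event for a
           * sibling sub-surface usually arrives in the same burst. */
          schedule_focus_check(window);
  }

  /* The deferred check repaints the window as unfocused only if
   * focus_count is still zero after pending events are handled. */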

A contentless surface with a non-zero size still feels like too
strange a concept for me to accept yet. We'll see how things evolve.

> > Actually, since I'm only aiming for the GL or video overlay widget case
> > for starters in toytoolkit, I could simply set the input region to
> > empty on the sub-surface, and so the main surface beneath would get all
> > input.
> That is quite a simple case with one sub-surface :)

Yes, but I have to start from somewhere, and supporting more complex
scenarios within toytoolkit would get out of hand in the amount of
work needed.
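
The simple case needs almost no code, either: an empty wl_region,
with nothing added to it, makes the sub-surface transparent to input,
so everything lands on the main surface below (gl_surface here being
the overlay widget's sub-surface):

  struct wl_region *empty = wl_compositor_create_region(compositor);

  wl_surface_set_input_region(gl_surface, empty);
  wl_region_destroy(empty);
  wl_surface_commit(gl_surface);  /* input region is double-buffered */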

If I have time, I might try to create decorations from 4 sub-surfaces,
and see how resizing, input etc. would work, as a non-toytoolkit app.
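
For that experiment I imagine something like the following, where
wl_subsurface_set_position() is pure invention, standing in for
whatever positioning request the final protocol grows. Positions are
in the parent surface's coordinate system:

  /* Four decoration sub-surfaces around the content surface: */
  wl_subsurface_set_position(top,    -border, -titlebar - border);
  wl_subsurface_set_position(left,   -border, 0);
  wl_subsurface_set_position(right,  content_width, 0);
  wl_subsurface_set_position(bottom, -border, content_height);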

I wonder if all these difficulties stem from the fact that we do not
have a core protocol object for a window (i.e. a single target for
input), and are forced to invent elaborate schemes for when a
wl_surface is effectively a window, and when it is just an
input/output element.

A crazy thought: if the input region were not clipped to the surface
size, the main surface of a window could have an input region covering
all the window's surfaces, and sub-surfaces would never need input.
Hrm, but do sub-surfaces want input?
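
Under that (purely hypothetical) unclipped rule, the main surface
could claim input for the whole window extent with the existing
region requests:

  /* Cover the content area plus the decoration margins; today
   * this would be clipped to the main surface's own size: */
  struct wl_region *region = wl_compositor_create_region(compositor);

  wl_region_add(region, -border, -titlebar - border,
                width + 2 * border, height + titlebar + 2 * border);
  wl_surface_set_input_region(main_surface, region);
  wl_region_destroy(region);
  wl_surface_commit(main_surface);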

Ha, I just realized: if, in the application-with-a-library example, a
sub-sub-surface had a non-zero input region, then input events for
that surface would be a problem. The wl_surface object would be
unknown to the application, since only the library internally knows
about it. The application could just ignore input for an unknown
wl_surface, and the library could create its own input objects, but
nothing would tell the application which *window* the focus is on.
Apparently we simply cannot have a library creating sub-sub-surfaces
the application does not know about, at least not with an input
region. Not forgetting that a) mistakenly using an unknown wl_surface
would be segfault kind of bad, and b) having to check "is this
wl_surface one of those I created" sucks.
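
That check can be done with tagged user data, which also shows how
awkward it is; a sketch, with the tag scheme invented here:

  static const char toolkit_tag;      /* unique address used as a tag */

  struct widget {
          const char *tag;            /* always set to &toolkit_tag */
          /* ... */
  };

  static struct widget *
  widget_from_surface(struct wl_surface *surface)
  {
          struct widget *widget = wl_surface_get_user_data(surface);

          if (!widget || widget->tag != &toolkit_tag)
                  return NULL;        /* not ours, ignore the event */

          return widget;
  }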

Maybe it would be safe to assume that libraries must never create
secret input elements, and just ignore this corner case?


Thanks,
pq

