[RFC] Sub-surface protocol and implementation v1

Pekka Paalanen ppaalanen at gmail.com
Fri Jan 11 00:52:06 PST 2013


On Thu, 10 Jan 2013 21:54:50 +0100
John Kåre Alsaker <john.kare.alsaker at gmail.com> wrote:

> On Thu, Jan 10, 2013 at 9:49 AM, Pekka Paalanen <ppaalanen at gmail.com> wrote:

> > However, the dummy surface as the root surface (i.e. the window main
> > surface) will not work, because it is the surface the shell will be
> > managing. Sub-surfaces cannot be assigned a shell surface role. Are you
> > proposing to change this?
> >
> > If you are, then the protocol will allow a new class of semantic
> > errors: assigning shell roles to more than one sub-surface in a window
> > surface set. I think this change would be a net loss, especially if we
> > can avoid this altogether with commit semantics.
> >
> > If instead you would still assign the shell role to the dummy root
> > surface, we would have problems with mapping, since by definition a
> > dummy surface does not have content. We cannot use the first attach as
> > a map operation, and would need more protocol to fix that.
> The problem with a surface with no content is that you want to stop
> traversing the surface tree when you spot one, so that all its
> sub-surfaces would be hidden? I prefer explicit show/hide requests if
> you want to do that.

Previously we have already chosen to avoid explicit show/hide or
map/unmap requests in the desktop shell, and to use attach+commit for
it, determining the operation from the wl_buffer argument to attach.
This is not specific to sub-surfaces; it is already existing behaviour
for all surfaces, especially top-level window surfaces. I would really
like sub-surfaces not to be different.
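
For illustration, a minimal sketch of that convention in client code
(assuming an existing wl_surface and a wl_buffer, e.g. from wl_shm):

  /* map: attach real content and commit */
  wl_surface_attach(surface, buffer, 0, 0);
  wl_surface_damage(surface, 0, 0, width, height);
  wl_surface_commit(surface);

  /* unmap: attach a NULL wl_buffer and commit */
  wl_surface_attach(surface, NULL, 0, 0);
  wl_surface_commit(surface);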

I am not talking about a special hide/show for a subset of
sub-surfaces; it should work by the existing conventions. I had not
even thought about hiding a subset, and that indeed raises more
questions:
- are such group operations needed in the first place in the Wayland
  protocol, or can we just punt that to application internals? After
  all, the minimum needed protocol is the ability to apply new state to
  all surfaces of a window atomically, which brings us back to the
  commit behaviour.
- if group hide/show are needed, how do they work?

I'm tempted to say that we don't need them in the Wayland protocol.
Applications need to co-operate with libraries in any case to maintain
consistency. I just wonder how much we can punt to apps, if not
everything.

Coming back to the original topic, if we had a dummy surface as the
root object of a window, we would currently have no way to map the
window.

> The problem with a surface with an input region and no content is that
> an infinite input region is set by default, so it needs to be clipped
> to something to count as a real surface.

Yeah, that's one problem.
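
For the simple case, the workaround mentioned just below (an empty
input region on the sub-surface) is a short sequence in client code; a
sketch, assuming compositor and the sub-surface's wl_surface handle
named child:

  /* a freshly created wl_region is empty; setting it as the input
   * region makes all input fall through to the surface below */
  struct wl_region *region = wl_compositor_create_region(compositor);

  wl_surface_set_input_region(child, region);
  wl_region_destroy(region);
  wl_surface_commit(child); /* input region is double-buffered state */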

> >> > Actually, since I'm only aiming for the GL or video overlay widget case
> >> > for starters in toytoolkit, I could simply set the input region to
> >> > empty on the sub-surface, and so the main surface beneath would get all
> >> > input.
> >> That is quite a simple case with one sub-surface :)
> >
> > Yes, but I have to start from somewhere, and the amount of work
> > needed to support more complex scenarios within toytoolkit would get
> > out of hand.
> >
> > If I have time, I might try to create decorations from 4 sub-surfaces,
> > and see how resizing, input etc. would work, as a non-toytoolkit app.
> >
> > I wonder if all these difficulties stem from the fact that we do not
> > have a core protocol object for a window (i.e. a single target for
> > input), and are forced to invent elaborate schemes for when a
> > wl_surface is effectively a window, and when it is just an
> > input/output element.
> >
> > A crazy thought: if the input region were not clipped to the surface
> > size, the main surface of a window could have an input region
> > covering all the window's surfaces, and sub-surfaces would never need
> > input. Hrm, but do sub-surfaces want input?
> Not that crazy, one of my first suggestions was a wl_surface with
> multiple wl_buffers and no interaction with input. We should probably
> find some real examples where applications want input into
> sub-surfaces.

Right, we would just need a somewhat different object model to make it
nice.

The 4 sub-surface decorations experiment will be interesting, indeed.
There the sub-surfaces do need input, since they do not overlap any
other surface of the same window.
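
A rough sketch of how that experiment could set things up, using the
interface names proposed in this RFC (wl_subcompositor, wl_subsurface);
the positions, sizes and variable names here are made up:

  struct wl_surface *deco[4];
  struct wl_subsurface *sub[4];
  int i;

  for (i = 0; i < 4; i++) {
          deco[i] = wl_compositor_create_surface(compositor);
          sub[i] = wl_subcompositor_get_subsurface(subcompositor,
                                                   deco[i],
                                                   main_surface);
  }

  /* place one decoration strip per window edge, in main surface
   * coordinates; negative offsets extend beyond the content area */
  wl_subsurface_set_position(sub[0], -SIDE_W, -TOP_H);  /* top    */
  wl_subsurface_set_position(sub[1], -SIDE_W, height);  /* bottom */
  wl_subsurface_set_position(sub[2], -SIDE_W, 0);       /* left   */
  wl_subsurface_set_position(sub[3], width, 0);         /* right  */

The decoration surfaces would keep their default input regions
(clipped to their size), so they receive the pointer input needed for
move and resize.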

> > Ha, I just realized: if, in the application-with-a-library example, a
> > sub-sub-surface had a non-zero input region, then input events for
> > that surface would be a problem. The wl_surface object would be
> > unknown to the application, since only the library internally knows
> > about it. The application could just ignore input for an unknown
> > wl_surface, and the library could create its own input objects, but
> > nothing would tell the application which *window* the focus is on.
> > Apparently we simply cannot have a library creating sub-sub-surfaces
> > the application does not know about, at least not with an input
> > region. Not forgetting that a) mistakenly using an unknown wl_surface
> > would be segfault kind of bad, and b) having to check "is this
> > wl_surface one that I created" sucks.
> >
> > Maybe it would be safe to assume that libraries must never create
> > secret input elements, and just ignore this corner case?
> You mean that the application wouldn't be aware that its window still
> has focus? We could make a focus_child event and never send the focus
> leave event when focusing a child, but that starts to get rather fun.

Yeah. Ew. :-P
Not a problem worth solving at this level, IMO.

> I'm thinking we probably want a special client API between
> applications and libraries which render with an independent framerate,
> so that the application can still be in control of presenting. A way
> to avoid this could be a copy-state request which copies one surface's
> state from another. That solution also avoids modifying the EGL
> interface. This has to be done to avoid the application racing with
> the library when modifying the surface.

Is this essentially any different from the commit behaviour krh
suggested (IIRC): that child surface commits just cache the state,
which is then applied on the parent surface's commit?
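
To spell out the semantics in question (a sketch; this caching
behaviour is hypothetical, nothing in the protocol yet):

  /* the library updates the child surface at its own pace */
  wl_surface_attach(child, child_buffer, 0, 0);
  wl_surface_damage(child, 0, 0, child_w, child_h);
  wl_surface_commit(child);  /* state only cached, nothing visible yet */

  /* later, the application commits the parent; the cached child
   * state is applied in the same atomic screen update */
  wl_surface_attach(parent, parent_buffer, 0, 0);
  wl_surface_damage(parent, 0, 0, parent_w, parent_h);
  wl_surface_commit(parent);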

Ok, so the application could choose whether it gives a "real" or a
"hidden" sub-surface to the library, depending on which behaviour it
wants? But if we want to make the behaviour selectable, would it not be
better to add a request that directly sets the behaviour of sub-surface
commits? That way it could be changed on demand, e.g. for resizing.

Maybe there is no one commit behaviour that fits all sub-surface use
cases, and it should be changeable. Everything else so far has felt
more awkward.
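
Such a request pair could look like this in client code (the names
set_sync/set_desync are placeholders for the idea, not anything that
exists):

  /* during interactive resize: cache child commits, so the whole
   * window updates atomically on the parent commit */
  wl_subsurface_set_sync(sub);

  /* otherwise: let the child present independently, e.g. video */
  wl_subsurface_set_desync(sub);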

We will also need a test program for the autonomous sub-surface case
before this is solved, e.g. something that mimics an independent video
output component.
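
A sketch of the core of such a test, assuming the changeable commit
behaviour above and a hypothetical next_video_frame() helper that
returns a ready wl_buffer:

  #include <wayland-client.h>

  #define VIDEO_W 640
  #define VIDEO_H 360

  /* hypothetical helper supplying the next decoded video frame */
  extern struct wl_buffer *next_video_frame(void);

  static void frame_done(void *data, struct wl_callback *cb,
                         uint32_t time);

  static const struct wl_callback_listener frame_listener = {
          frame_done
  };

  /* repaint the sub-surface at its own rate, driven by frame
   * callbacks, independently of the parent surface's commits */
  static void
  frame_done(void *data, struct wl_callback *cb, uint32_t time)
  {
          struct wl_surface *video = data;

          wl_callback_destroy(cb);

          wl_surface_attach(video, next_video_frame(), 0, 0);
          wl_surface_damage(video, 0, 0, VIDEO_W, VIDEO_H);

          /* schedule the next repaint before committing */
          wl_callback_add_listener(wl_surface_frame(video),
                                   &frame_listener, video);
          wl_surface_commit(video);
  }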


Thanks,
pq

