Sub-surface protocol

Kristian Høgsberg hoegsberg at gmail.com
Fri Dec 7 06:31:32 PST 2012


On Fri, Dec 07, 2012 at 10:31:20AM +0200, Pekka Paalanen wrote:
> On Wed, 05 Dec 2012 15:43:18 -0200
> Tiago Vignatti <tiago.vignatti at linux.intel.com> wrote:
> 
> > Hi,
> > 
> > On 12/05/2012 12:32 PM, Pekka Paalanen wrote:
> > >
> > > I have not even thought about sub-surfaces' implications for input
> > > handling or the shell yet. Sub-surfaces probably need to be able to
> > > receive input. The shell perhaps needs a bounding box of the set of
> > > surfaces to be able to pick an initial position for the window, etc.
> > 
> > Indeed. On my "less intrusive" draft of subsurfaces, I first started
> > brainstorming the input focus behavior [0]. That's quite useful for the
> > video player example, which wants some kind of input control, or for a
> > dialog-style window that might not. So we'll need a way to tell which
> > subsurface gets the input focus. The way I did it was to enumerate modes
> > for the subsurface, like "transient" for passing the focus away and
> > "child" for a regular surface that wants it... not a nice name, I agree.
> > 
> > That being said, I'm afraid that the input focus, together with
> > configuration, positioning and stacking, belongs more to the shells than
> > to the compositor itself, which should only handle the attach -> damage
> > -> commit cycle. Even the concept of "windows" doesn't sound that good
> > for a simple shell, for instance, so at the moment it's hard for me to
> > see why this shouldn't be an extension of shell_surface. That's why I
> > started drafting from this other direction.
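> >
> > (For reference, that cycle is just the existing core requests on the
> > client side; "buffer", "width" and "height" here are whatever the client
> > is drawing:)
> >
> >     wl_surface_attach(surface, buffer, 0, 0);
> >     wl_surface_damage(surface, 0, 0, width, height);
> >     wl_surface_commit(surface);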
> 
> Well, my first attack on input management would be to do nothing
> special. :-)
> 
> I.e. the sub-surfaces and the parent surface are all just wl_surfaces,
> and would not differ at all from the "window in a single wl_surface"
> case. On the input side, the client would just deal with a set of
> surfaces instead of one surface. It would get enter and leave events as
> usual, when the pointer or keyboard focus changes from one sub-surface
> to another. If a sub-surface does or does not want input, the client
> sets its input region accordingly. The client (toolkit) just needs to
> be prepared to combine input events from the sub-surfaces, and e.g.
> direct keyboard events to the "whole window" first, rather than
> starting with just the one wl_surface.
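>
> For illustration, a sub-surface that should not receive input could
> simply be given an empty input region with the existing core requests
> (a minimal sketch; "compositor" is the client's bound wl_compositor and
> "subsurface" an ordinary wl_surface it created):
>
>     /* An empty input region means pointer and keyboard focus can
>      * never land on this surface. */
>     struct wl_region *empty = wl_compositor_create_region(compositor);
>     wl_surface_set_input_region(subsurface, empty);
>     wl_region_destroy(empty);
>     wl_surface_commit(subsurface);
>
> And on the event side the client only has to look at which wl_surface
> an enter event names ("struct window" and find_part_by_surface() are
> hypothetical toolkit helpers):
>
>     static void
>     pointer_enter(void *data, struct wl_pointer *pointer, uint32_t serial,
>                   struct wl_surface *surface, wl_fixed_t sx, wl_fixed_t sy)
>     {
>             struct window *win = data;
>
>             /* Route the event to whichever part of the window owns the
>              * wl_surface that now has pointer focus. */
>             win->focus_part = find_part_by_surface(win, surface);
>     }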
> 
> On the compositor side, I would not have to modify the input code at
> all.
> 
> Weston's core would not need to deal much with "composite windows", i.e.
> a collection of a parent wl_surface and a set of sub-surfaces. It only
> needs to maintain the associations, and maybe offer helpers like the
> bounding box computation. Therefore it should not really need the
> concept of a window. Window management is the shell's job, and for
> instance the force-move binding is a shell feature, which can look up
> and move the whole composite window, when the user tries to move a
> sub-surface.
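>
> Such a bounding-box helper could be as simple as the sketch below (the
> geometry struct and flat array are hypothetical stand-ins for however
> the compositor ends up tracking its sub-surfaces, not actual Weston
> code):
>
>     #include <stdint.h>
>
>     /* Per-surface geometry in parent-local coordinates. */
>     struct sub_geometry {
>             int32_t x, y, width, height;
>     };
>
>     /* Union of the parent's and all sub-surfaces' rectangles. */
>     static void
>     bounding_box(const struct sub_geometry *geom, int n,
>                  int32_t *x1, int32_t *y1, int32_t *x2, int32_t *y2)
>     {
>             int i;
>
>             *x1 = *y1 = INT32_MAX;
>             *x2 = *y2 = INT32_MIN;
>             for (i = 0; i < n; i++) {
>                     if (geom[i].x < *x1)
>                             *x1 = geom[i].x;
>                     if (geom[i].y < *y1)
>                             *y1 = geom[i].y;
>                     if (geom[i].x + geom[i].width > *x2)
>                             *x2 = geom[i].x + geom[i].width;
>                     if (geom[i].y + geom[i].height > *y2)
>                             *y2 = geom[i].y + geom[i].height;
>             }
>     }
>
> The shell can then feed the resulting box to its placement and
> force-move logic without core ever needing a "window" concept.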
> 
> So, instead of trying to be smart about input foci between the
> sub-surfaces and the parent, I would just punt that to the clients'
> input handling.

Yup, I fully agree.

Kristian

> I hope that also works in practice.
> 
> > Anyways, nice write-up and summary Pekka. Thanks for bringing this up!
> 
> Thank you!
> 
> > [0] http://cgit.freedesktop.org/~vignatti/wayland/commit/?h=xwm-client-OLD
> >      http://cgit.freedesktop.org/~vignatti/weston/commit/?h=xwm-client-OLD
> 
> I see, right.
> 
> 
> Thanks,
> pq

