EFL/Wayland and xdg-shell
derekf at osg.samsung.com
Mon Apr 13 19:59:23 PDT 2015
On 13/04/15 07:24 PM, Jasper St. Pierre wrote:
> On Mon, Apr 13, 2015 at 5:02 PM, Bryce Harrington <bryce at osg.samsung.com> wrote:
>> A couple weeks ago I gave a talk on Wayland to the EFL folks at the
>> Enlightenment Developer Day in San Jose. They've already implemented a
>> Wayland compositor backend, so my talk mainly speculated on Wayland's
>> future and on collecting feature requests and feedback from the EFL
>> developers. Below is my summary of some of this feedback; hopefully if
>> I've mischaracterized anything they can jump in and correct me.
>> For the presentation, I identified Wayland-bound bits currently
>> gestating in Weston, listed features currently under review in
>> patchwork, itemized feature requests in our bug tracker, and summarized
>> wishlists from other desktop environments. Emphasis was given to
>> xdg-shell and the need for EFL's feedback and input to help ensure
>> Wayland's first revision of it truly is agreed-on cross-desktop.
>> Considering KDE's feedback, a common theme was that xdg-shell should not
>> get in the way of allowing for advanced configuration, since
>> configurability is one of that D-E's key attributes. The theme I picked
>> up talking with EFL emphasized being able to provide extreme UI
>> features. For them, the important thing was that the protocol should
>> not impose constraints or assumptions that would limit this.
>> For purposes of discussion, an example might be rotated windows. The
>> set geometry api takes x, y, height, and width. How would you specify
>> rotation angle?
> I'm confused by this. set_window_geometry takes an x, y, width and
> height in surface-local coordinates, describing the parts of the
> surface that are the user-understood effective bounds of the surface.
> In a client-decorated world, I might have shadows that extend outside
> of my surface, or areas outside of the solid frame for resizing
> purposes, but these should not be counted as part of the window for
> edge resistance, maximization, tiling, etc.
> So, if I have a surface that is 120x120 big, and 20px of it are
> shadows or resize borders, then the client should call
> set_window_geometry(20, 20, 100, 100)
> Additionally, the size in the configure event is in window geometry
> coordinates, meaning that window borders are excluded. So, if I
> maximize the window and get configure(200, 200), then I am free to
> attach a 300x300 surface as long as I call set_window_geometry(50, 50,
> 200, 200).
> If the compositor rotates the window, the window geometry remains the
> same, but the compositor has the responsibility of properly rotating
> the window geometry rectangle for the purposes of e.g. edge snapping.
> So, if I have a window with a window geometry of 0,0,100,100, and the
> user rotates it by 45 degrees, then the effective window geometry the
> compositor snaps to is -21,-21,142,142. Correct me if my high school
> trig is wrong :)
> The window is never aware of its local rotation transformation.
> Does that make sense? Is this not explained correctly?
That all makes sense - set_window_geometry() was a bit of a red herring.
Some EFL developers want the application to have a way to know its
rotation so it can, for example, render drop shadows correctly.
Take a window in weston and rotate it 90 or 180 degrees and the drop
shadow is "wrong".
Drop shadows are just the easy example; there's also a desire to have
applications react to the rotation in some way so the usual EFL
bling can take place during a smooth rotation from 0 to 90 degrees, for
example.
I do wonder if drop shadows really should be the client's responsibility
at all. If completely non-interactive eye-candy was left to the
compositor, would we still need set_window_geometry() at all?
>> While window rotation was used more as an example of how built-in
>> assumptions in the API could unintentionally constrain D-E's, than as a
>> seriously needed feature, they did describe a number of ideas for rather
>> elaborate window behaviors:
>> * Rotation animations with frame updates to allow widget re-layouts
>> while the window is rotating.
> Why does xdg-shell / Wayland impede this?
It's not directly impeded I suppose - it's just not currently possible?
The client doesn't know it's being rotated (at least when talking about
arbitrary rotations, as opposed to right-angle transforms).
>> * Arbitrarily shaped (non-rectangular) windows, with dynamically changing shapes.
> I don't understand this. Any shape always has an axis-aligned bounding
> box. Using ARGB8888, you can craft windows of any shape.
>> * Non-linear surface movement/resizing animations and transition effects.
> Why does xdg-shell / Wayland impede this?
>> There was lots of interest in hearing more about Wayland's plans for
>> text-cursor-position and input-method, which are necessary for Asian
>> languages. A particular question was how clients could coordinate with
>> the virtual keyboard input window so that it doesn't overlay where text
>> is being inserted. Security is also a top concern here, to ensure
>> unauthorized clients can't steal keyboard input if (when) the virtual
>> keyboard client crashes.
> The solution GNOME takes, which is admittedly maybe too unrealistic,
> is that IBus is our input method framework, and thus our compositor
> has somewhat tight integration with IBus. I don't think input methods
> need to be part of the core Wayland protocol.
That may be in line with the current thinking in the EFL camp.
Does that mean the input-method and text protocol files in weston are of
no use at all to GNOME?
>> Regarding splitting out libweston, they suggested looking if a
>> finer-grained split could be done. For example, they would be
>> interested in utilizing a common monitor configuration codebase rather
>> than maintaining their own. OTOH, I suspect we can do a better
>> (simpler) API than RANDR, that doesn't expose quite so many moving parts
>> to the D-E's, which may better address the crux of the problem here...
>> One area we could improve on X for output configuration is in how
>> displays are selected for a given application's surface. A suggestion
>> was type descriptors for outputs, such as "laptop display",
>> "television", "projector", etc. so that surfaces could express an output
>> type affinity. Then a movie application could request its full screen
>> playback surface be preferentially placed on a TV-type output, while
>> configuration tools would request being shown on a laptop-screen-type
>> output.
>> For screensaver inhibition, it was suggested that this be tracked
>> per-surface, so that when the surface terminates the inhibition is
>> removed (this is essentially what xdg-screensaver tries to do, although
>> it is specific to the client process rather than the window, iirc).
> It would require careful semantics that I'm not so sure about. Why is
> tying it to the surface rather than the process important?
It's an idea that's being thrown around. "We" would like to have the
ability to put outputs to sleep independently... If the surface is
dragged from one output to another, the blanker inhibition would move
with it without the client having to do anything.
>> They also
>> suggested per-output blanking support so e.g. the laptop LVDS could be
>> blanked but the projector inhibited, or, when watching a movie on
>> dual-head, just the non-movie head powers down. They also suggested
>> having a 'dim' functionality which would put the display to maximum
>> dimness rather than blanking it completely; I'm not sure on the use case
>> here or how easy it'd be to implement.
> This is stuff I would hope would be provided and implemented by the
> DE. As a multimonitor user who quite often watches things on one
> monitor while coding on another, I'd turn this feature off.
As would I. :)
>> I had hoped to discuss collaboration on testing, but without specifics
>> there didn't seem to be strong interest. One question was about
>> collecting protocol dumps for doing stability testing or performance
>> comparison/optimization with; while we're not doing that currently, that
>> sounded straightforward and like it could be useful to investigate more.
>> There was some confusion over what the purpose of xdg-shell really is;
>> it was looked at as a reference implementation rather than as a
>> lowest-common denominator that should be built *onto*. So it seems that
>> Wayland has some messaging to do to ensure xdg-shell really is
>> understood that way.
> xdg-shell should be seen as the proper shell protocol everybody should
> strive to support. It's entirely possible that DEs have their own
> protocols they put on top: GTK+ has gtk-shell, for instance.
Is gtk-shell intended to be a test bed for things that will eventually
be in xdg-shell? Or are divergent standards a guarantee at this point?
More information about the wayland-devel mailing list