EFL/Wayland and xdg-shell
Carsten Haitzler (The Rasterman)
raster at rasterman.com
Thu Apr 16 17:37:55 PDT 2015
On Thu, 16 Apr 2015 15:32:31 +0100 Daniel Stone <daniel at fooishbar.org> said:
> On 15 April 2015 at 23:51, Carsten Haitzler <raster at rasterman.com> wrote:
> > On Wed, 15 Apr 2015 20:29:32 +0100 Daniel Stone <daniel at fooishbar.org> said:
> >> On 15 April 2015 at 02:39, Carsten Haitzler <raster at rasterman.com> wrote:
> >> > not esoteric - an actual request from people making products.
> >> The reason I took that as 'esoteric' was that I assumed it was about
> >> free window rotation inside Weston: a feature which is absolutely
> >> pointless but as a proof-of-concept for a hidden global co-ordinate
> >> space. Makes a lot more sense for whole-display rotation. More below.
> > not just whole display - but now imagine a table with a screen and touch
> > and 4 people around it one along each side and multiple windows floating
> > about like scraps of paper... just an illustration where you'd want
> > window-by-window rotation done by compositor as well.
> Sure, but that's complex enough - and difficult enough to reason even
> about the desired UI semantics - that it really wants a prototype
> first, or even a mockup. How do you define orientation in a table
> scenario? If you're doing gesture-based/constant rotation (rather than
> quantised to 90°), how do you animate that, and where does the
> threshold for relayout lie? Without knowing what to design for, it's
> hard to come up with a protocol which makes sense.
sure - let's leave this until later.
> Luckily, writing extensions is infinitely less difficult than under
> X11, so the general approach has been to farm these out to separate
> extensions and then bring them in later if they turn out to make sense
> in a global context. The most relevant counter-example (and
> anti-pattern) I can think of is XI2 multitouch, where changing things
> was so difficult that we had to design from the moon from the get-go.
> The result, well, was XI2 multitouch. Not my finest moment, but
> luckily I stepped out before it was merged so can just blame Peter.
well, on 2 levels. extensions were basically almost a no-go area, but with x
client message events and properties you could "extend" x (well, wm/comp/clients
<-> clients) fairly trivially. :) at this stage wayland is missing such a
"trivial" ad-hoc extension api. i actually might add one just for the purposes
of prototyping ideas - just shovel some strings/ints etc. around with a string
message name/id. anyway...
> >> > actually the other way around... clients know where the vkbd region(s)
> >> > are so client can shuffle content to be visible. :)
> >> In a VKB (rather than overlay-helper, as used for complex composition)
> >> scenario, I would expect xdg-shell to send a configure event to resize
> >> the window and allow for the VKB. If this isn't sufficient - I
> >> honestly don't know what the behaviour is under X11 - then a potential
> >> version bump of wl_text could provide for this.
> > no - resizing is a poorer solution. tried that in x11. first obvious port of
> > call. imagine vkbd is partly translucent... you want it to still be over
> > window content. imagine a kbd split onto left and right halves, one in the
> > middle of the left and right edges of the screen (because screen is
> > bigger). :)
> Yeah, gotcha; it does fall apart after more than about a minute's
> thought. Seems like this has been picked up though, so happy days.
> >> > (pretend a phone with 4 external monitors attached).
> >> Hell of a phone. More seriously, yes, a display-management API could
> >> expose this, however if the aim is for clients to communicate intent
> >> ('this is a presentation') rather than for compositors to communicate
> >> situation ('this is one of the external monitors'), then we probably
> >> don't need this. wl_output already provides the relative geometry, so
> >> all that is required for this is a way to communicate output type.
> > i was thinking a simplified geometry. then again client toolkits can figure
> > that out and present a simplified enum or what not to the app too. but yes -
> > some enumerated "type" attached to the output would be very nice. smarter
> > clients can decide their intent based on what is listed as available -
> > adapt to the situation. dumber ones will just ask for a fixed type and
> > "deal with it" if they don't get it.
> I think exposing an output type would be relatively uncontroversial.
> The fullscreen request already takes a target output; would that cover
> your uses, or do you really need to request initial presentation of
> non-fullscreen windows on particular outputs? (Actually, I can see
> that: you'd want your PDF viewer's primary view to tack to your
> internal output, and its presentation view aimed at the external
> output. Jasper/Manuel - any thoughts?)
yes - any surface, anywhere, any time. :)
> >> > surfaces should be
> >> > able to hint at usage - eg "i want to be on the biggest tv". "i want to
> >> > be wherever you have a small mobile touch screen" etc. compositor deals
> >> > with deciding where they would go based on the current state of the world
> >> > screen-wise and app hints.
> >> Right. So if we do have this client-intent-led interface (which would
> >> definitely be the most Wayland-y approach), then we don't need to
> >> advertise output types and wl_output already deals with the rest, so
> >> no change required here?
> > well the problem here is the client is not aware of the current situation.
> > is that output on the right a tv on the other side of the room, or a
> > projector, or perhaps an internal lcd panel? is it far from the user or
> > touchable (touch surface). if it's touchable the app may alter ui (make
> > buttons bigger - remove scrollbars to go into a touch ui mode as opposed to
> > mouse driven...). maybe app is written for multitouch controls specifically
> > and thus a display far from the user with a single mouse only will make the
> > app "useless"? app should be able to know what TYPE of display it is on -
> > what types are around and be able to ask for a type (may or may not get
> > it). important thing is introducing the concept of a type and attaching it
> > to outputs (and hints on surfaces).
> Touchable is something to think about for sure, but right now we
> really don't have a way of exposing the compositor's internal mapping
> of input device co-ordinate space to global/output/surface co-ordinate
> space. This is something that needs solving for tablets anyway (does
> it span the whole desktop as a faux-relative device, or have you bound
> it 1:1 to a particular output/surface as an absolute device?), so is
> definitely not going to drop off the radar.
oh indeed. for now i'd assume the compositor has to "deal with this" and map
coords to surfaces, and if the touch area covers only 1 screen (e.g. the
internal one - the external screen has none) this would be a perfect candidate
for screen types being such meta-info carriers. if we standardize some types
like:
mobile == small screen phone/phablet
tablet == larger screen mobile devices
desktop == good ye-olde worlde desktop monitors (many sizes)
desktop-touch == desktop monitors with added touch abilities
laptop == good ye-olde worlde laptop screens
laptop-touch == newer laptop screens with added touch ability
laptop-hybrid == the newer transformable laptop screen that can go from tablet
to laptop and have touch
it's almost like we would need some enum space for type, and some more bits for
a bitmask of flags. touch would be a flag. hybrid would be one too. in fact for
the hybrid transformable screens... type could change from tablet to laptop any
time at runtime, so we need events to say this (the hybrid - or maybe
"transformable" flag tells you to expect this screen to change). :)
so as i see it maybe something like a uint32 for screen type with 8 bits for
type, and the other 24 for flags? (seriously.. we'll need more than 256 screen
types... ever? i think not. we'd be lucky to break 50, and if we have so many
we are likely misusing types and should be using flags). so you'd have
MOBILE | TOUCH
LAPTOP | TOUCH | TRANSFORMABLE
DESKTOP | TOUCH
> >> - client informs compositor it is capable of managing the rotation
> >> process
> >> - compositor triggers beginning of rotation and informs client
> >> - client renders a series of frames animating between initial and final
> >> states
> >> - last frame is produced twice: one rotated in-client (i.e. output
> >> displaying in non-rotated co-ordinate space will show the buffer as
> >> rotated), one in newly-rotated co-ordinate space
> > ahh for the fullscreen case :) i'm thinking multiple windows at once - so
> > compositor would rotate the surface output at composite time given an angle
> > the client gives it. client would resize buffer each time with the "anchor"
> > being at window center (need to define that actually somehow). client now
> > just loops from 0 to 90 degrees, with a newly sized buffer each time and
> > newly rendered content. you're definitely thinking the fullscreen case
> > subset here where you are indeed right. :)
> If that's not synchronised, it'll look like crap. I think the most
> productive thing would be to knock together a closed system (in the
> engineering, rather than freeness, sense) which implements this well,
> and see what we can pull out of that generically.
oh i'm thinking the client drives it. so it's synchronised, as the client will
re-render a newly sized buffer every frame AND hand over the angle to rotate
with it. :)
> >> - client attaches and commits pre-rotated frame (the first)
> >> - client immediately attaches and commits rotated-coord-buffer (the
> >> second), tagged with information that this is the final frame and the
> >> compositor should atomically switch its display along with displaying
> >> this buffer
> >> This is relatively complex, and introduces a whole lot of failure
> >> modes: having a 'pending rotation' will be tricky to get exactly right
> >> without proper testcases, and we're also relying on clients to not
> >> hold our entire display pipeline hostage, so we'd need some kind of
> >> arbitrary timeout too I guess. It is entirely doable, but the
> >> complexity makes me think that this would best be proven either as an
> >> external extension, or something that was prepared to live out-of-tree
> >> for a little while whilst it was proven.
> > oh sure. we'd end up doing this kind of thing regardless - in xdg and
> > wayland core or not. if not - we'd do an extn and have it all working etc.
> > - then maybe come back for "hey - this stuff works - see? how about we
> > standardize and all agree on the same core protocol for this now?"
> Yes! This is exactly how the system is supposed to work. :)
> Sounds like we're pretty much on the same page, and do have a couple
> of concrete points picked out. Progress!
sure. i expect the coming few years to be a bit chaotic and messy as the dust
settles and everyone figures out how to work within a wayland world, filling
gaps that had been filled before - perhaps by original teams that solved these
problems in x11 and have long since gone/moved on. but once this settles a bit
and we've had our fracases... things will be streamlined and well known. it
just will take time.
------------- Codito, ergo sum - "I code, therefore I am" --------------
The Rasterman (Carsten Haitzler) raster at rasterman.com