[RFC] libinput configuration interface

Peter Hutterer peter.hutterer at who-t.net
Sun Feb 9 22:57:52 CET 2014

On Sun, Feb 09, 2014 at 01:32:41PM +0100, Eugen Friedrich wrote:
> On 09.02.2014 05:10, Peter Hutterer wrote:
> >On Thu, Feb 06, 2014 at 11:28:49PM +0100, Eugen Friedrich wrote:
> >>Hi together,
> >>I would like to give some input from the embedded/automotive perspective.
> >>
> >>You can think of a huge number of different configurations for different
> >>device types.
> >>A lot of the configuration in the initial post deals with the behavior of
> >>buttons and scrolling areas of touch panels.
> >>
> >>A good approach could be a kind of general configuration of button and
> >>scrolling areas of the touch panels:
> >>a button area could contain the position and dimensions of the button in
> >>the device coordinate system plus the button code;
> >>a slider area could contain the position and dimensions of the slider
> >>along with its range.
> >
> >generally for real touch screens (i.e. not touchpads) I think any
> >interpretation of the values should be on the client side, not in the input
> >library. There just isn't enough context to interpret it otherwise since
> >you're at least partially reliant on UI hints or other elements to make sure
> >you're emulating the right thing.
> Completely agree: active input elements that are drawn by some
> application should be handled by that application.
> >
> >For specialized cases like having a permanent input region that maps into
> >semantic buttons (e.g. the button bar on the Android phones) this should IMO
> >be handled by the compositor.
> Yes, this was the aim of my proposal. This would give the flexibility
> to use different touch panels with different screens and put your
> permanent buttons and sliders wherever you like. Such cases may
> only be important if you are building new devices, but this
> configuration possibility would add big value to libinput.

my main worry here is that the semantics of such buttons are unknown to
anyone but the compositor. libinput has _no_ semantics other than "here's
an area", but especially with direct-touch devices you get more complex
interactions than this. For example, let's say we have a defined button area
at the bottom of the screen:
- should the button trigger if the finger moved from the outside into the
  area?
- should the button trigger if the finger left and re-entered the area?
- should the button trigger if the finger left the area?
-- oh, btw, now you need enter/leave events to notify the compositor
- should the button trigger if there is another finger within the area?
- should the button trigger if there was extensive movement between
  press/release in the button area?

All these usually have fairly obvious answers from a UI perspective, but a
low-level library without semantic context would have to provide some matrix
to enable all of them or restrict itself to a set of the above. The latter
wouldn't be a problem, but we'd have to really see some good use-cases to
justify the extra complexity.


More information about the wayland-devel mailing list