[PATCH libinput 01/11] Add an API for touchpad gesture events

Peter Hutterer peter.hutterer at who-t.net
Thu Feb 19 00:22:09 PST 2015


On Wed, Feb 18, 2015 at 11:36:18AM -0800, Bill Spitzak wrote:
> 
> 
> On 02/18/2015 04:26 AM, Hans de Goede wrote:
> >For touchscreens we always send raw touch events to the compositor, and the
> >compositor or application toolkits do gesture recognition. This makes sense
> >because on a touchscreen, knowing which window / widget the touches are over
> >is important context for interpreting gestures.
> >
> >On touchpads, however, we never send raw events, since a touchpad is an
> >absolute device whose primary function is to send pointer motion deltas, so
> >we always need to do processing (and a lot of it) on the raw events.
> 
> I don't quite understand this distinction. I can certainly imagine touchpad
> gestures that depend on where the cursor is currently.

direct-touch vs. indirect-touch: the two interaction methods are different.

> Conversely, I can imagine users wanting full-screen gestures on their
> touchscreen; for instance, a fast swipe should not be sent to the widget the
> down event is at, but to where the cursor was before the swipe started.

libinput interprets the gesture; the compositor decides whether to send it
to a client (and which client to send it to).
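
roughly, the compositor side looks like this (a minimal sketch only; the
event type and accessor names assume the gesture API as proposed in this
series and may differ from what finally lands):

    #include <stdio.h>
    #include <libinput.h>

    /* compositor event loop: libinput classifies the gesture, the
     * compositor alone decides which client (if any) receives it */
    static void
    handle_gestures(struct libinput *li)
    {
            struct libinput_event *ev;

            libinput_dispatch(li);
            while ((ev = libinput_get_event(li)) != NULL) {
                    if (libinput_event_get_type(ev) ==
                        LIBINPUT_EVENT_GESTURE_SWIPE_UPDATE) {
                            struct libinput_event_gesture *g =
                                    libinput_event_get_gesture_event(ev);

                            /* pick the target client here; libinput only
                             * says "this is an N-finger swipe" */
                            printf("swipe: %d fingers, dx %.2f dy %.2f\n",
                                   libinput_event_gesture_get_finger_count(g),
                                   libinput_event_gesture_get_dx(g),
                                   libinput_event_gesture_get_dy(g));
                    }
                    libinput_event_destroy(ev);
            }
    }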
 
> I think what is needed is for gesture recognition to be in libinput, but to
> also send the raw events.

raw events (I'm going to assume you mean touch events) have no meaning if
you don't know *a lot* about the device. libinput knows that; nothing else
in the stack does (and we're trying to hide most of it). for example, if you
have a recent Lenovo laptop, we can do (some) three-finger gestures even
though the touchpad only supports two fingers.

> Then a program can ignore the gesture if it wants, and
> this also allows new gestures to be added without making older programs
> freeze when you attempt them. (for an implementation I would add a field to
> events to indicate if they are the last in a set).

sending raw events through to clients so they can interpret them themselves
means duplication, bugs, reduced consistency across applications, reduced
stability, and less predictability.

we've had ubiquitous gestures for years now and we've learned one thing:
there is no use case for gestures beyond the few that we'll provide anyway
(tap, swipe, pinch, rotate). providing the framework to support more than
that is pointless. And we're trying to provide a useful input stack, not one
that requires you to choose which way your touchpad won't work.
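
as an example of what a client gets instead of raw touches: a pinch arrives
as a single event stream with the scale and rotation already computed, so no
touch tracking is needed on the client side (again just a sketch, assuming
the accessor names from this series):

    /* inside a LIBINPUT_EVENT_GESTURE_PINCH_UPDATE handler;
     * 'g' is the struct libinput_event_gesture * for that event */
    double scale = libinput_event_gesture_get_scale(g);       /* zoom factor since the pinch began */
    double angle = libinput_event_gesture_get_angle_delta(g); /* rotation in degrees since the last event */
    int fingers  = libinput_event_gesture_get_finger_count(g);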
 
> The only unavoidable effect of gesture recognition is that the focus may not
> match the location of the cursor, in that the raw events are sent to the same
> target that the gesture is being sent to. I think this ability is necessary
> to make gestures consistent.

libinput is not in the business of focus handling; that's the compositor's job.

Cheers,
   Peter

