[PATCH libinput 01/11] Add an API for touchpad gesture events

Peter Hutterer peter.hutterer at who-t.net
Fri Feb 20 19:39:47 PST 2015

On 21/02/2015 06:08, Bill Spitzak wrote:
> On 02/19/2015 04:49 PM, Peter Hutterer wrote:
>> unless you have the context you cannot know, and the only thing that
>> has that context is the client. sure, you can make all sorts of
>> exceptions ("but double-tap should always be a double-tap"), but that
>> just changes the caliber you're going to shoot yourself in the foot
>> with.
> Okay, but I am still not seeing a clear answer to "why are touch pads
> and touch screens different?" Perhaps the client has to do gestures, but
> it seems to me that the answer is going to be the same whether the
> controlling surface is on the screen surface or not.
> The fact that you can point to something on the screen does not seem
> enough of a reason. All touch pads I have ever seen have a cursor on the
> screen and when the user presses down they expect to be pressing where
> that cursor is, thus providing exactly the same level of "context" as a
> touch screen.

Unlike on a touchscreen, you don't select on-screen objects on the
touchpad. If you pinch on a touchpad you perform the gesture at the
location of the cursor (where appropriate), but the gesture itself is
not ambiguous: you don't need to translate finger locations into screen
coordinates to check which widgets are underneath.

Well, technically you *could* do that, but we won't, because IMO it
doesn't make sense.

>>> That was probably the wrong term. I don't mean raw data from a
>>> device; what I meant was that the client gets the exact same events
>>> during a gesture that it would get if the gesture was not recognized,
>>> such as press/move/raise type events. It is true they would be
>>> deferred until the gesture is recognized, and the compositor will
>>> deliver them to the client that gets the gesture, so they are not
>>> 100% identical, but they would be useful if new gestures are added,
>>> so older clients still work.
>> No, we tried that in X and it's an unmaintainable mess, with corner
>> cases that are most likely unsolvable. Sending two event streams for
>> the same event is a nightmare.
> But in Wayland you don't have to keep backwards compatibility. The
> events would be clearly marked as belonging together, so there is no
> ambiguity about which events describe the same input.

We already have backwards-compatibility requirements: we can't break
wl_pointer, and adding to it is hard.

> In addition there would be events like "this is part of a foo-gesture
> but should be ignored if you are interpreting the foo-gesture".

Again, similar to what we have in X, it's unmaintainable. But feel free
to come up with a protocol spec that works and can be implemented. Happy
to be proven wrong.
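For what it's worth, one way such a spec could avoid touching wl_pointer at all is a separate extension that hands out a dedicated gesture object per pointer, so recognized gestures arrive on their own interface instead of as a second wl_pointer event stream. A hypothetical fragment (interface and event names are illustrative only, not an existing protocol):

```xml
<interface name="wp_pointer_gestures_example" version="1">
  <request name="get_pinch_gesture">
    <arg name="id" type="new_id" interface="wp_pinch_gesture_example"/>
    <arg name="pointer" type="object" interface="wl_pointer"/>
  </request>
</interface>

<interface name="wp_pinch_gesture_example" version="1">
  <event name="begin">
    <arg name="serial" type="uint"/>
    <arg name="time" type="uint"/>
    <arg name="surface" type="object" interface="wl_surface"/>
    <arg name="fingers" type="uint"/>
  </event>
  <event name="update">
    <arg name="time" type="uint"/>
    <arg name="dx" type="fixed"/>
    <arg name="dy" type="fixed"/>
    <arg name="scale" type="fixed"/>
    <arg name="rotation" type="fixed"/>
  </event>
  <event name="end">
    <arg name="serial" type="uint"/>
    <arg name="time" type="uint"/>
    <arg name="cancelled" type="int"/>
  </event>
</interface>
```

Because clients bind the extension explicitly, old clients that don't know about it keep getting plain wl_pointer events, and nothing about wl_pointer itself has to change.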
