Weston multitouch support?
Shawn Rutledge
shawn.t.rutledge at gmail.com
Tue Jun 3 05:13:47 PDT 2014
On 3 June 2014 13:16, Peter Hutterer <peter.hutterer at who-t.net> wrote:
> On 3/06/2014 20:25, Shawn Rutledge wrote:
...
> the synaptics driver does support multitouch and gives you the same type of
> events as any MT device will (if you disable the in-driver gestures). It has
> done so for about two years now, but no-one has cared enough about it to
> implement the client stack so this could actually work.
But is disabling in-driver gestures a global thing, or can it be done
only for specific windows? (Even doing it per-window is not quite an
ideal solution, but it could work some of the time.)
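Just to check that I understand the missing "client stack" you mention:
with the in-driver gestures disabled, I imagine a client would select raw
touch events per-window via XI 2.2, roughly like this (untested sketch;
it assumes XIQueryVersion has already negotiated version 2.2):

    /* Sketch: select XI 2.2 touch events on one window. */
    #include <X11/Xlib.h>
    #include <X11/extensions/XInput2.h>

    static void select_touch_events(Display *dpy, Window win)
    {
        unsigned char mask[XIMaskLen(XI_LASTEVENT)] = { 0 };
        XIEventMask evmask;

        XISetMask(mask, XI_TouchBegin);   /* finger down */
        XISetMask(mask, XI_TouchUpdate);  /* finger moves */
        XISetMask(mask, XI_TouchEnd);     /* finger up */

        evmask.deviceid = XIAllMasterDevices;
        evmask.mask_len = sizeof(mask);
        evmask.mask = mask;

        XISelectEvents(dpy, win, &evmask, 1);
        XFlush(dpy);
    }

So the event selection itself is per-window; what I am unsure about is
whether the gesture-disabling knob can be scoped the same way, or whether
it is inherently per-device.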
> Here's the thing
> about the X protocol: it's not this magical self-aware thing, it's written
> by people. If no-one works on it, it doesn't change, which is pretty much
> why it updates so slowly.
>
> So here's a request: write down what exactly you need, what the use-cases
> are, how you want it to behave, etc. That way we can actually implement
> something useful. It's not that we're not listening, it's more that no-one
> is talking until it's too late.
OK, I can try. What form and which forum would be most helpful?
>> Flicking is a weird case because Qt Quick does its own physics: the
>> flicking continues after you release your finger, and there is the
>> bounce-back at the end. On Apple platforms the QtQuick behavior
>> doesn't match the native one, so there are discussions about how to
>> fix that. Are you thinking that on wayland the flicking should be
>> driven by extra events beyond the actual finger release, which keep
>> driving the UI to the end and then sending reversed events to generate
>> the bounce-back? I think the main reason for having a flick gesture
>> at all is to enable flicking in legacy applications which were
>> designed to handle mouse wheel. The trouble is that there then has to
>> be a mechanism to tell it where the "end" is, for non-legacy
>> applications which actually want to have the "bounce" or some other
>> end-of-flick behavior. IMO that's an unfortunate break in
>> encapsulation; but if the applications alternatively do their own
>> flick physics, they are free to do it differently and inconsistently.
>> Same thing with other gestures. It would be nice to put the gesture
>> and related behavioral stuff into a library, so that it's modular and
>> optional and can be replaced with an alternate one, and yet if the
>> same library is used everywhere, then it's consistent. Putting this
>> stuff at too low a level (like inside the synaptics driver) tends to
>> mean that the gestures will be a fixed set, whereas it would be nice
>> to be able to invent new ones.
>
>
> .... and you've just arrived at your favourite holiday destination. on your
> left you can see the rock ("I can't change anything!"), on your right the
> hard place ("Everyone does it differently and nothing behaves the same!").
> The cooking class starts at 5 and we've got shuffleboard on the top deck.
But I think a suitable degree of modularity might solve it. It seems
in keeping with the Wayland spirit, just like the debate about window
decorations: if you want common ones, use a shared library; if you
want to decorate your own window, that's easy too. As long as most
applications agree to use the same shared library with the same theme
(unless they have a real reason not to), the whole desktop experience
will end up just as consistent as in X11, where the window manager
decorates all the windows the same way, but with the advantage that
some of the X11 mess goes away.
But maybe you are going to say that libinput is that library. If the
architecture is that there can be multiple compositors, each using its
own modified version of libinput, that sounds kind of hackable, but it
still might end up mingling device handling, gesture recognition and
the related physics a bit too much.
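To be concrete about the "related physics": what Qt Quick does after the
finger lifts is roughly a deceleration plus an overshoot bounce, something
like this simplified sketch (my own illustration with made-up constants,
not the actual Qt code):

    /* Simplified flick physics: keep moving after release, decelerate,
     * and spring back if we overshoot the content boundary. */
    typedef struct {
        double pos;       /* current scroll position */
        double velocity;  /* px/s, taken from the release gesture */
        double min, max;  /* valid range of the content */
    } Flick;

    static void flick_step(Flick *f, double dt)
    {
        const double deceleration = 1500.0; /* px/s^2 */
        const double spring = 10.0;         /* bounce-back stiffness */

        f->pos += f->velocity * dt;

        if (f->pos < f->min)          /* overshot: accelerate back */
            f->velocity += (f->min - f->pos) * spring * dt;
        else if (f->pos > f->max)
            f->velocity += (f->max - f->pos) * spring * dt;
        else if (f->velocity > 0.0)   /* in range: just decelerate */
            f->velocity = f->velocity > deceleration * dt ?
                          f->velocity - deceleration * dt : 0.0;
        else if (f->velocity < 0.0)
            f->velocity = -f->velocity > deceleration * dt ?
                          f->velocity + deceleration * dt : 0.0;
    }

The point is just that whoever runs this little loop has to know min and
max, i.e. where the "end" of the content is. If the compositor or a
driver synthesizes the post-release events, that knowledge has to be
pushed down to it somehow, which is the encapsulation break I mentioned
earlier.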