Enabling multitouch in input-evdev

Peter Hutterer peter.hutterer at who-t.net
Sun Jan 17 19:33:36 PST 2010

On Thu, Jan 14, 2010 at 11:23:23AM +0100, Bradley T. Hughes wrote:
> On 01/12/2010 12:03 PM, ext Peter Hutterer wrote:
> >>So, first question: is my behavior the good one? (not being
> >>compliant with Windows or MacOS)
> >
> >Short answer - no. Long answer - sort-of.
> >
> >Multitouch in X is currently limited by the lack of multitouch events in the
> >protocol. What you put into evdev is a way around it to get multitouch-like
> >features through a multipointer system. As Bradley said, it is likely better
> >for the client-side to include the lot in a single event.  Since X
> >essentially exists to make GUI applications easier (this may come as a
> >surprise to many), I'd go with his stance.
> >
> >However, this is the harder bit and would require changing the driver, parts
> >of the X servers's input system, the protocol and the libraries. It'd be
> >about as wide-reaching as MPX though I hope that there is significantly less
> >rework needed in the input subsystem now.
> Why do you think it would require protocol changes? For the new
> event type? If I understand it correctly, events for devices can
> contain any number of valuators... is it possible to have x1,y1
> x2,y2 x3,y3 and so-on?

Correct. There's a valuator limit of 36, but even that should be fine for a
single device. With axis labelling it's now even possible to simply claim:
here are 24 axes, but they represent 12 different touchpoints.
I hadn't really thought about this approach yet because IMO touch is more
than just pairs of coordinates, and that's what I'd eventually like to
get in. As an intermediate option your approach would definitely work; it'd
be easy to integrate and should hold up with the current system.

A bonus point is that core emulation then automatically works on the first
touchpoint only, without any extra help.

And whether more is needed (i.e. complex touch events) is something that can
be decided later when we have a bit more experience on what apps may need.
Stephane, do you have opinions on this?

> From what I can tell, there are a number of challenges that would
> require driver and server changes. In particular, being able to do
> collaborative work on a large multi-touch surface requires the
> driver (or server, not sure which makes most sense) to be able to
> somehow split the touch points between multiple windows. This is
> something that we had to do in Qt at least.

Packing all coordinates into one event essentially makes it a single
point with auxiliary touchpoints. This is useful for a number of things
(most gestures we see today, like pinch and rotate, should work with this
approach) but not useful once you have truly independent data from multiple
hands or users. That's when you have to do more flexible picking in the
server, and that requires more work.

Given the easy case of a single user interacting with a surface:
with Qt as it is now, if you got all the coordinates in a single XI2 event,
would that work for you? Or is there extra information you need?
