Multitouch followup: gesture recognition?

Florian Echtler floe at
Mon Mar 29 01:08:13 PDT 2010

Hi again, I've been reading some background stuff on the X architecture,
so maybe I now have a better understanding of what we are actually
talking about:

> > On one hand, I agree. But I believe that this problem is exactly what my
> > formalism solves. By a) allowing applications to customize gestures and
> > b) restricting them to certain screen regions (*), this isn't a
> > contradiction anymore. E.g. what's a zoom gesture to the first app on
> Which of course means the extension needs to transport the necessary
> information, or describe other means of transport (e.g. the props I
> mentioned, though as Peter pointed out, they're not good for live
> communication).
> This seems essential to your approach, so the feasibility of a server
> extension (or anything else, but an extension incurs overhead) depends a
> fair bit on the dynamics of your gesture customization.
Just specifying what gestures a specific window would be interested in
wouldn't usually be "live", would it? That's something defined at
creation time and maybe changed occasionally over the lifetime, but not
continuously while events are being delivered.
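To make the "not live" point concrete: the per-window gesture interest
could be nothing more than a small table that is written at window
creation and only consulted per event. This is a hypothetical sketch
(none of these names exist in Xlib or any extension):

```c
#include <assert.h>

/* Hypothetical gesture kinds a window can subscribe to. */
enum gesture { GESTURE_TAP = 1, GESTURE_ZOOM = 2, GESTURE_ROTATE = 4 };

/* One entry per window: which gestures it wants. Set at window
 * creation time, occasionally updated, never changed per-event. */
struct gesture_interest {
    unsigned long window;   /* XID of the window */
    unsigned mask;          /* OR of enum gesture bits */
};

#define MAX_WINDOWS 64
static struct gesture_interest table[MAX_WINDOWS];
static int n_windows;

/* Register or update a window's interest (the "occasional" change). */
static void set_interest(unsigned long window, unsigned mask)
{
    for (int i = 0; i < n_windows; i++)
        if (table[i].window == window) { table[i].mask = mask; return; }
    table[n_windows].window = window;
    table[n_windows].mask = mask;
    n_windows++;
}

/* Fast per-event lookup: should this gesture be delivered here? */
static int wants_gesture(unsigned long window, enum gesture g)
{
    for (int i = 0; i < n_windows; i++)
        if (table[i].window == window)
            return (table[i].mask & g) != 0;
    return 0;
}
```

So the registration itself is cheap and rare; only the lookup happens
on the hot path, which is why it needn't be "live" communication.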

> > (*) whether these should be XWindows or something different, I don't
> > know yet. Is there an extension that allows arbitrary shapes for 
> > XWindow objects, particularly with respect to input capture?
> the Shape extension does that.
> Yes, except you don't want to use it, because it would throw in far
> too many round trips, some of which might be delayed for ridiculously
> long periods of time, even while new events continue to be delivered.
> So, you basically want to keep it all in-client unless it's absolutely
> strictly necessary to do otherwise.
Hm, this might be a problem in the long run. The classical tricks for
dealing with non-rectangular UI objects (capturing and bubbling)
probably are not going to work here, as gestures can't easily be split
back into discrete input events.
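Keeping the hit-testing in-client, as suggested above, would mean the
toolkit tests each touch point against its own region description
instead of asking the server via Shape. A standard even-odd
point-in-polygon test is enough for that; this sketch is not tied to
any existing toolkit:

```c
/* Even-odd rule: does point (px,py) fall inside the polygon given by
 * n vertices (x[i], y[i])? Runs per touch point, entirely in-client,
 * so no server round trip is needed for non-rectangular objects. */
static int point_in_polygon(int n, const double *x, const double *y,
                            double px, double py)
{
    int inside = 0;
    for (int i = 0, j = n - 1; i < n; j = i++) {
        /* Count edge crossings of a ray going right from the point. */
        if (((y[i] > py) != (y[j] > py)) &&
            (px < (x[j] - x[i]) * (py - y[i]) / (y[j] - y[i]) + x[i]))
            inside = !inside;
    }
    return inside;
}
```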

> > Let me try to summarize the possible approaches:
> > 
> > 1. a pure userspace library which just converts gestures locally for one
> > specific client without knowledge of others (this is more or less what
> > Qt or libTISCH do right now, though in different ways)
> > 
> > 2. a special client like a WM which intercepts input events and passes
> > additional gesture events to other clients (possible, but with some
> > caveats I haven't yet understood fully)
> > 
> > 3. a separate server extension in its own right (possible, also with
> > some potential traps)
> > 
> > 4. a patch to libXi using the X Generic Event Extension (same as 3, but
> > fastest to hack together and doesn't require any changes to the server.)
> > 
> > Would you agree with that summary?
> I don't get what 4 might be. 2 and 3 aren't really alternatives, but
> different aspects of a server-side implementation. The special client
> can make the server deliver events, it can't do that itself. So it needs
> an appropriate own server extension or additional Xinput requests to
> facilitate delivery. Which you want libXi wrappers for.
> So whether a special client detects gestures or the server itself, the
> server needs to deliver events, and the client needs to be able to
> receive them. This is where XGE and libXi kick in.
Okay, it seems I'm slowly getting it. Please have a look at the attached
PDF - this should illustrate the combination of 2/4, correct? (The
normal XInput events should probably still be delivered to the clients
in the classical manner as well.)

Yours, Florian
0666 - Filemode of the Beast
-------------- next part --------------
A non-text attachment was scrubbed...
Name: variant2-4.pdf
Type: application/pdf
Size: 9538 bytes
Desc: not available
URL: <>

More information about the xorg-devel mailing list