Multitouch followup: gesture recognition?

Simon Thum simon.thum at gmx.de
Mon Mar 22 03:03:22 PDT 2010


On 21.03.2010 10:37, Florian Echtler wrote:
> Now, in an Xorg context, I'd very much like to hear your opinions on
> these concepts. Would it make sense to build this into an X helper
> library?
I'm a bit worried that the discussion is leaning towards a library, as
opposed to a composite-inspired special client. The concepts themselves
sound fine.

A library can be done 'right now', since apps are free to use one. It
has the advantage of a close connection to the consuming app, but also
the associated disadvantages.

In particular, how would a library cope with global gestures, e.g.
switching to another app or backgrounding one? Such things should
obviously behave consistently across the desktop. I imagine a desktop
environment might want to provide such a special client, just as each
has its preferred WM.

For this reason, the library approach seems unfit to me, at least
mid-term. Apart from me there was only one voice in favor of the
special client, which may be related to the almost wire-level issues
discussed so far. Anyway, here's how I envision it working:

§ A new 'gesture' event gets created, like (sketched in C just below
the list):
 * current timestamp
 * timestamp of gesture starting
 * an atom for the gesture type (X.org should keep a list)
 * gesture {start|ongoing|ending}
 * {whatever is generic enough}
 * {Lots'o'data, gesture-dependent}
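
To make that concrete, here's roughly what I have in mind in C. All
names are made up, of course; nothing like this exists in any X header
yet:

#include <X11/Xlib.h>   /* Time, Atom */

typedef enum {
    GesturePhaseStart,
    GesturePhaseOngoing,
    GesturePhaseEnding
} GesturePhase;

typedef struct {
    Time         time;        /* current timestamp */
    Time         start_time;  /* timestamp of the gesture starting */
    Atom         type;        /* gesture type; X.org should keep the list */
    GesturePhase phase;       /* start | ongoing | ending */
    /* ...whatever else is generic enough... */
    char         data[64];    /* lots o' data, gesture-dependent */
} XGestureEvent;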

A prosaic example would be an app learning that there's a
"DIRECTED_DRAGGING gesture going on, starting at (200, 100)@70 degrees,
now being at (300, 100)@95 deg" and using this information to navigate
within a 3D view. Also note the omission of (x, y) from the generic
gesture event, since I'd deem it gesture-specific; other gestures may
not have a primary (x, y).
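
Building on the sketch above, an app might consume it along these
lines. The DIRECTED_DRAGGING payload layout is again pure invention,
just to illustrate:

/* Invented payload for DIRECTED_DRAGGING, for illustration only. */
typedef struct {
    int    x, y;    /* current primary position */
    double angle;   /* current direction, in degrees */
} DirectedDraggingData;

/* directed_dragging would come from something like
 * XInternAtom(dpy, "DIRECTED_DRAGGING", False). */
static void handle_gesture(const XGestureEvent *ev, Atom directed_dragging)
{
    const DirectedDraggingData *d;

    if (ev->type != directed_dragging)
        return;
    d = (const DirectedDraggingData *)(const void *)ev->data;

    switch (ev->phase) {
    case GesturePhaseStart:
        /* anchor the 3D view at (d->x, d->y), d->angle degrees */
        break;
    case GesturePhaseOngoing:
        /* pan/rotate the view by the delta since the anchor */
        break;
    case GesturePhaseEnding:
        /* release the anchor, maybe apply inertia */
        break;
    }
    (void)d; /* unused in this skeleton */
}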

§ A special gesture client (composite-like)

This client would receive events as discussed, but all of them, by
virtue of registering with the server. It analyzes the stream and,
whenever it thinks something important happened, tells the server.
The server then dispatches a corresponding gesture event, according to
its state and some constraints given by the special client (e.g.
Florian's event regions, gesture-specific delivery constraints, ...)
which need not be part of the event as delivered.
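
The client's core loop could then be quite simple. The XGesture*
requests and the recognizer below are placeholders for whatever a real
extension would have to define:

#include <X11/Xlib.h>

/* Placeholders for requests a real extension would define. */
extern void XGestureSelectAllInput(Display *dpy);
extern void XGestureTellServer(Display *dpy, Atom type, GesturePhase phase,
                               const void *data, int len);
extern int  recognize(const XEvent *raw, Atom *type, GesturePhase *phase,
                      void *data, int *len);

static void gesture_client_loop(Display *dpy)
{
    XGestureSelectAllInput(dpy);        /* get *all* input events */

    for (;;) {
        XEvent raw;
        Atom type;
        GesturePhase phase;
        char data[64];
        int len = sizeof data;

        XNextEvent(dpy, &raw);
        /* Whenever the recognizer thinks something important happened,
         * tell the server; the server does the actual dispatch, subject
         * to its state and the delivery constraints set up earlier. */
        if (recognize(&raw, &type, &phase, data, &len))
            XGestureTellServer(dpy, type, phase, data, len);
    }
}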

The important point here is that gesture events are asynchronous, so
there's no need to wait inside the event loop. Gestures correlate with,
but don't strictly depend on, other input events; their timestamps may
be out of order for this reason.

Such an approach could be retrofitted, and since it uses distinct
events it would work alongside library-based designs. It would also
work fine when more basic things (e.g. contact tracking) happen earlier
in the pipeline.

I never fully worked this out, so I can't offer a fancy paper, but it
seems sensible to me. And since anyone is free to write a library, when
it comes down to something X.org-endorsed, a special-client
infrastructure might be the preferable choice.

Cheers,

Simon

