multitouch

Bradley T. Hughes bradley.hughes at nokia.com
Mon Feb 8 01:15:49 PST 2010


On 02/08/2010 07:16 AM, ext Peter Hutterer wrote:
> The basic principle for the master/slave division is that even in the
> presence of multiple physical devices, what really counts in the GUI is the
> virtual input points. This used to be a cursor, now it can be multiple
> cursors and with multitouch it will be similar. Most multitouch gestures
> still have a single input point with auxiliary information attached.
> Prime example is the pinch gesture with thumb and index - it's not actually
> two separate points, it's one interaction. Having two master devices for
> this type of gesture is overkill. As a rule of thumb, each hand from each
> user usually constitutes an input point and thus should be represented as a
> master device.

This makes sense to me. There are cases where the user may want to
pinch with both index fingers (such as on tabletop surfaces), so it
doesn't fit 100%.
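
To make "one input point with auxiliary information" concrete, here is
a minimal sketch (hypothetical types, not actual XInput structures) of
a pinch modelled as a single interaction:

    #include <cmath>
    #include <vector>

    struct Touch {
        int id;        // stable while the finger stays down
        double x, y;   // position in screen coordinates
    };

    struct InputPoint {
        Touch primary;                 // the cursor-like point the GUI tracks
        std::vector<Touch> auxiliary;  // extra fingers of the same hand
    };

    // A thumb-and-index pinch is then one InputPoint whose
    // primary/auxiliary distance changes over time, not two
    // independent input points.
    double pinchDistance(const InputPoint &p)
    {
        return std::hypot(p.primary.x - p.auxiliary.front().x,
                          p.primary.y - p.auxiliary.front().y);
    }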

> Where the subdevices are present on demand and may disappear. They may not
> even be actual devices but just represented as flags in the events.
> The X server doesn't necessarily need to do anything with the subdevices.
> What the X server does need, however, is the division between the input points
> so it can route the events accordingly. This makes it possible to pinch in
> one app while doing something else in another app (note that I am always
> thinking of the multiple apps use-case, never the single app case).

This is something I think is very important: keeping the
multiple-application, multiple-user use-case in mind.

> When I look at the Qt API, it is device-bound so naturally the division
> between the devices falls back onto X (as it should, anyway).
> The tricky bit about it is - at least with current hardware - how to decide
> how many slave devices and which touchpoints go into which slave device.
> Ideally, the hardware could just tell us but...

The approach I used in Qt was simply to group the touch points by
target: any points over window A are grouped and sent together,
independently of the points over window B.
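
In rough code, that grouping could look like the sketch below
(TouchPoint, Window, findWindowAt() and sendTouchEvent() are stand-ins
for illustration, not the actual Qt internals):

    #include <map>
    #include <vector>

    struct TouchPoint { int id; double x, y; };
    struct Window;                               // a toplevel target
    Window *findWindowAt(double x, double y);    // hit test (assumed)
    void sendTouchEvent(Window *w, const std::vector<TouchPoint> &points);

    // Bucket the active touch points by the window beneath them, then
    // deliver each bucket as one event, independently of the others.
    void dispatchTouchPoints(const std::vector<TouchPoint> &all)
    {
        std::map<Window *, std::vector<TouchPoint>> groups;
        for (const TouchPoint &p : all)
            groups[findWindowAt(p.x, p.y)].push_back(p);
        for (auto &g : groups)
            sendTouchEvent(g.first, g.second);
    }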

> this approach works well for mouse emulation too, since the first subdevice
> on each touch device can be set to emulate mouse events. what it does lead
> to is some duplication in multi-pointer _and_ multi-touch aware applications
> though, since they have to be able to distinguish between the two.
>
> until the HW is ready to at least tell the driver what finger is touching,
> etc., the above requires a new event to label the number of subdevices and
> what information is provided. This would be quite similar to Qt's
> QTouchEvent::TouchPoint class and I believe close enough to Windows'
> approach?

Correct. Mac OS X's NSTouch and the iPhone's UITouch events look the same as well.
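
For reference, the receiving side in Qt 4.6 looks roughly like this: a
widget opts in with Qt::WA_AcceptTouchEvents and then walks the
QTouchEvent::TouchPoint list (handlePoint() below is a made-up
application hook):

    #include <QtGui>

    class TouchWidget : public QWidget
    {
    public:
        TouchWidget()
        {
            // Without this attribute Qt delivers only mouse events.
            setAttribute(Qt::WA_AcceptTouchEvents);
        }

    protected:
        bool event(QEvent *e)
        {
            switch (e->type()) {
            case QEvent::TouchBegin:
            case QEvent::TouchUpdate:
            case QEvent::TouchEnd: {
                QTouchEvent *touch = static_cast<QTouchEvent *>(e);
                foreach (const QTouchEvent::TouchPoint &tp,
                         touch->touchPoints()) {
                    // tp.id() is stable while the finger is down;
                    // tp.isPrimary() marks the mouse-emulating point.
                    handlePoint(tp.id(), tp.pos(), tp.state());
                }
                return true;
            }
            default:
                return QWidget::event(e);
            }
        }

    private:
        void handlePoint(int id, const QPointF &pos,
                         Qt::TouchPointState state)
        {
            Q_UNUSED(id); Q_UNUSED(pos); Q_UNUSED(state);
            // application-specific handling goes here
        }
    };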

-- 
Bradley T. Hughes (Nokia-D-Qt/Oslo), bradley.hughes at nokia.com
Sandakervn. 116, P.O. Box 4332 Nydalen, 0402 Oslo, Norway

