multitouch

Daniel Stone daniel at fooishbar.org
Mon Mar 1 04:55:16 PST 2010


Hi,

On Mon, Mar 01, 2010 at 12:09:41PM +0000, Matthew Ayres wrote:
> On Mon, Mar 1, 2010 at 11:22 AM, Daniel Stone <daniel at fooishbar.org> wrote:
> > Not to mention the deeply unpleasant races -- unless you grab
> > XGrabServer, which is prohibitively expensive and extremely anti-social.
> 
> I'm not sure 'race' is being used here in the sense I understand it.
> My interpretation of the term would not, as far as I can see,
> apply here.  If someone could point me to documentation that would
> explain this type of race, I would appreciate it.

See below for a concrete example ...

> > I still think a multi-level device hierarchy would be helpful, thus
> > giving us 'subdevice'-alike behaviour.  So if we were able to go:
> > MD 1 ->
> >        Touchpad 1 ->
> >                      Finger 1
> >                      Finger 2
> >        Wacom 1 ->
> >                   Pen 1
> >                   Eraser 1
> > MD 2 ->
> >        Touchpad 2 ->
> >                      Finger 1
> >
> > and so on, and so forth ... would this be useful enough to let you take
> > multi-device rather than some unpredictable hybrid?
> 
> This is roughly the kind of hierarchy I had intended to imply, but
> there is a caveat.  Touchpads and Wacom devices are clear cases of
> single-user input, but a touch screen must be expected to support more
> than one simultaneous user.  This requires splitting its inputs
> somehow.

I'm not sure what you mean here?
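
To make the hierarchy I sketched above a little more concrete, a client
might model it something like the below.  This is purely a sketch --
XI2 as it stands only has the flat master/slave split, and all of these
names are invented for illustration:

    /* Hypothetical three-level hierarchy; none of this is protocol. */
    struct touch_point {                /* e.g. Finger 1, Finger 2 */
        int deviceid;
        double x, y;
    };

    struct physical_device {            /* e.g. Touchpad 1, Wacom 1 */
        int deviceid;
        int num_points;
        struct touch_point *points;
    };

    struct master_device {              /* e.g. MD 1, MD 2 */
        int deviceid;
        int num_physical;
        struct physical_device *devices;
    };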

> > (What happens in the hybrid system when I get an event from finger 1,
> > decide I like it, take out a grab, and then finger 2 presses on another
> > window.  Do I respect the event and give the app the finger 2 press it
> > likely doesn't want, or break the grab and deliver it to another client?
> > Neither answer is pleasant.)
> 
> This is another possible use of the sub-device hand-off I described
> before (yes, I really meant sub-device rather than slave).  Once again
> it would be up to the application to decide whether or not it wants
> this input and, if it does not, it can request that it be moved to
> another device.
> 
> Advantage: This would enable gestures on small controls, such as
> existing taskbar volume controls: touch the icon, swipe a finger
> nearby, and that controls the volume?
> 
> Disadvantage: It creates latency, at best, if the new touch event (on
> a screen, rather than one of the above mentioned devices) is not
> intended as part of the same 'gesture'.  At worst it creates
> conceptually erroneous behaviour.

Right.  So now imagine the following happens:
 * first finger pressed over window A
 * server delivers event to client C
 * client C: 'ooh hey this could trigger gesture events, give me
   everything'
 * server: okay, cool!
 * second finger pressed over window B
 * but the server delivers it to client C due to the grab

Now imagine this scenario:
 * first finger pressed over window A
 * server delivers event to client C
 * (client is scheduled out or otherwise busy ...)
 * second finger pressed over window B
 * server delivers event to client D
 * client C: 'oooh hey, that first finger could trigger gesture events,
   give me everything!'
 * meanwhile the volume isn't changed and you've just clicked something
   wholly unrelated on another window; hopefully it's not destructive

Adding further layers of complexity, uncertainty and unpredictability
makes this much worse than it needs to be.
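
In code, the grab from the first scenario is roughly the below -- just
a sketch, where dpy, deviceid and window_a stand for whatever the
client already has from the first event, and error handling is
omitted:

    #include <X11/Xlib.h>
    #include <X11/extensions/XInput2.h>

    static void grab_for_gesture(Display *dpy, int deviceid, Window window_a)
    {
        unsigned char bits[XIMaskLen(XI_LASTEVENT)] = { 0 };
        XIEventMask mask = { deviceid, sizeof(bits), bits };

        XISetMask(bits, XI_Motion);
        XISetMask(bits, XI_ButtonPress);
        XISetMask(bits, XI_ButtonRelease);

        /* By the time the server processes this, the second finger may
         * already be down over window B; the grab then routes its
         * events here instead of to B's client.  That's the race. */
        XIGrabDevice(dpy, deviceid, window_a, CurrentTime, None,
                     GrabModeAsync, GrabModeAsync, False, &mask);
    }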

I don't really see the conceptual difference between multiple devices
and multiple axes on a single device beyond the ability to potentially
deliver events to multiple windows.  If you need the flexibility that
multiple devices offer you, then just use multiple devices and make your
internal representation look like a single device with multiple axes.
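
By way of illustration, that internal view can be as dumb as this (a
sketch only; the layout and names are invented):

    /* Fold N per-finger devices into one logical device with 2N
     * valuator axes (x0, y0, x1, y1, ...).  Purely an internal
     * representation -- nothing here is protocol. */
    #define MAX_FINGERS 10

    struct logical_touch_device {
        int num_fingers;
        double axes[MAX_FINGERS * 2];   /* x/y pair per finger */
    };

    static void finger_moved(struct logical_touch_device *dev,
                             int finger, double x, double y)
    {
        dev->axes[finger * 2 + 0] = x;
        dev->axes[finger * 2 + 1] = y;
    }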

Given that no-one's been able to articulate in much detail what any
other proposed solution should look like or how it will actually work
in the real world, I'm fairly terrified of it.

Can you guys (Bradley, Peter, Matthew) think of any specific problems
with the multi-layered model?  Use cases as above would be great, bonus
points for diagrams. :)

> A related point: I've read, and assume it is still the case, that
> MPX supports hotplugging.  Now if this is the case, is there
> really much difference between that and creating a new master device
> when/if a new touch event is determined to be a
> separate point of interaction?  Would it not be the case that the
> server 'hotplugs' a new device and routes the input through it?

It's pretty much exactly the same, yeah.
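
(FWIW, with today's protocol that'd just be XIChangeHierarchy --
something like the below, where the ids are placeholders and error
handling is omitted:

    /* Create a new master for the new interaction point ... */
    XIAddMasterInfo add = {
        .type      = XIAddMaster,
        .name      = "second user",          /* arbitrary */
        .send_core = True,
        .enable    = True,
    };
    XIChangeHierarchy(dpy, (XIAnyHierarchyChangeInfo *) &add, 1);

    /* ... then, once the new master's ids are known, reattach the
     * slave that produced the new touch point: */
    XIAttachSlaveInfo attach = {
        .type       = XIAttachSlave,
        .deviceid   = touch_slave_id,        /* placeholder */
        .new_master = new_master_pointer_id, /* placeholder */
    };
    XIChangeHierarchy(dpy, (XIAnyHierarchyChangeInfo *) &attach, 1);

... which is exactly where the latency below comes from.)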

> If this is too expensive, it just calls for attempts to streamline the process.

Well, there's just not a lot we can do to streamline it.  We could beef
up some of the events and eliminate roundtrips, but fundamentally the
problem is that it requires the client to grab the device after it's
created, and the latency here can be entirely arbitrary.  When you're
targeting a few milliseconds _at most_ for event delivery from kernel
to client, this becomes impossible.
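
Concretely, the client can't grab until it's seen the new device
announced, so the sequence is always: hierarchy event, then grab, with
an unbounded window in between.  A sketch, assuming xi_opcode came
from XQueryExtension and new_device_id was parsed out of the hierarchy
event data:

    XEvent ev;

    XNextEvent(dpy, &ev);
    if (ev.xcookie.type == GenericEvent &&
        ev.xcookie.extension == xi_opcode &&
        XGetEventData(dpy, &ev.xcookie)) {
        if (ev.xcookie.evtype == XI_HierarchyChanged) {
            /* A new device has appeared.  Anything it generates
             * between now and the server processing this grab is
             * delivered elsewhere -- that window is the arbitrary
             * latency. */
            XIGrabDevice(dpy, new_device_id, win, CurrentTime, None,
                         GrabModeAsync, GrabModeAsync, False, &mask);
        }
        XFreeEventData(dpy, &ev.xcookie);
    }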

When we get to Wayland, this becomes pretty much doable, but while all
knowledge of widgets, window management, etc. lies outside the server and
thus requires server <-> client synchronisation at every step, we're
always going to have problems, and the best we can do is mitigate it by
making the process as streamlined, simple and thoroughly predictable as
possible.  It's deeply confusing at the best of times, so making the
behaviour more arbitrary still arguably isn't the answer. :)

Cheers,
Daniel