Another approach to multitouch handling

Carlos Garnacho carlos at lanedo.com
Fri Jun 11 04:27:02 PDT 2010


Hi!

I've been thinking about this for a while...

On Mon, 2010-06-07 at 01:20 -0400, Rafi Rubin wrote:

<snip>

> So are you saying you actually want to be able to subscribe to events from a mt 
> finger in window that's next to the window with the rest of the fingers?  Is 
> that really a good idea?

I really think that should be left for clients to choose.

> 
> Perhaps I should clarify my current understanding and thoughts.
> 
> I thought we were talking about having a pointer with a single conventional 
> group/cluster position.  In this model, fingers show up as mt positions 
> annotated on the pointer, and the cluster position may or may not be the 
> position of one of those fingers.  I see that cluster position both as focus 
> control for the mt contact set as well as a way to use mt as a conventional pointer.

There is undoubtedly a need for some kind of device grouping, but IMHO
it shouldn't be at the lowest level in the stack.

For example, among the GTK+ patches there is a GtkDeviceGroup. Several
of these can be created for a single widget/window, and I use that in
the test application that displays images and lets you move/rotate/resize
them. Each image has one such group, each group allows up to 2 devices,
and whenever a device updates, a signal is emitted for the group it
belongs to. Given the 1:1 relation between images and groups, you also
know which area should be invalidated when a device updates.

I guess my argument is... What a device cluster means is highly
context-dependent, and at the driver level there's just no context.

> 
> I see the selection of cluster position as a bit of an arbitrary implementation 
> detail (or even a normal option to accommodate differing preferences).  Four 
> ideas come to mind:
> 1.  eldest active contact
> 2.  first contact (cluster doesn't move if that first finger stays off the sensor)
> 3.  geometric center
> 4.  completely independent.
> 
> Consider the magic mouse for a moment.  It has a real conventional pointing 
> device and an independent mt surface.  I think that clients that want to 
> subscribe to finger positions on that touch surface should see them as somehow 
> related to the position of the conventional pointer.

It is true that the multiple-device approach fits multitouch screens
better than multitouch mice/touchpads, at least intuitively. For such
devices, perhaps the driver should send relative events to the same
window the main device is on at that moment.

> 
> 
> I think we should eventually talk about supporting mt from a one or more sensors 
> in multiple windows/locations.  But I would like to think in terms of spatial 
> clustering.  For example, we can cluster based on the size of a hand.  Each hand 
> gets a core pointer with one or more fingers annotated.

Then we'd be talking about multiple devices with annotated touchpoints
for a single hardware device, right? Unless the hardware provides such
information, I don't think smart clustering like that will yield
anything useful; I could place my fingers so that both hands' areas
intersect.

If the hardware supported specifying the user/hand an event belongs
to, it would make most sense to me to have some way to relate DIDs to
their main SD able to send core events.

> 
> 
> As for sub-sub-devices vs. valuators, that doesn't matter all that much to me. 
> And I think if you establish the meaning, it shouldn't really matter to all that 
> many people.  If you have a clean concept down, then it won't change the client 
> side code all that much if you switch from one to the other.

I don't think it would be such a breeze in practice :)

Cheers,
  Carlos



More information about the xorg-devel mailing list