Another approach to multitouch handling
peter.hutterer at who-t.net
Mon Jun 14 23:32:18 PDT 2010
On Fri, Jun 11, 2010 at 01:27:02PM +0200, Carlos Garnacho wrote:
> I have been thinking about this for a while...
> On Mon, 2010-06-07 at 01:20 -0400, Rafi Rubin wrote:
> > So are you saying you actually want to be able to subscribe to events from an mt
> > finger in a window that's next to the window with the rest of the fingers? Is
> > that really a good idea?
> I really think that should be left for clients to choose.
> > Perhaps I should clarify my current understanding and thoughts.
> > I thought we were talking about having a pointer with a single conventional
> > group/cluster position. In this model, fingers show up as mt positions
> > annotated on the pointer, and the cluster position may or may not be the
> > position of one of those fingers. I see that cluster position both as focus
> > control for the mt contact set and as a way to use mt as a conventional pointer.
> There is undoubtedly a need for some kind of device grouping, but IMHO
> it shouldn't be at the lowest level in the stack.
> For example, amongst the GTK+ patches there is a GtkDeviceGroup; several
> of these can be created for a single widget/window, and I use that in
> the test application that displays images and lets you move/rotate/resize
> them. Each image has one such group, each group only allows up to 2
> devices, and whenever a device updates, a signal is emitted for the
> group it belongs to. Given the 1:1 relation between images and groups,
> you also know which area should be invalidated when a device updates.
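The grouping pattern described above can be sketched in plain Python. This is only an illustration of the idea (capacity-limited groups firing a callback when a member device updates); the names are invented and do not reflect the actual GtkDeviceGroup patch API:

```python
# Illustrative sketch of the device-group pattern, NOT the GtkDeviceGroup API.
class DeviceGroup:
    """Claims up to max_devices input devices and notifies a callback
    whenever one of its members updates."""

    def __init__(self, max_devices=2, on_update=None):
        self.max_devices = max_devices
        self.devices = []
        self.on_update = on_update  # e.g. invalidate the image owning this group

    def try_add(self, device):
        """Claim a device for this group if there is room."""
        if device in self.devices:
            return True
        if len(self.devices) >= self.max_devices:
            return False
        self.devices.append(device)
        return True

    def device_updated(self, device, x, y):
        """Dispatch an update from a member device to the group's callback."""
        if device in self.devices and self.on_update is not None:
            self.on_update(device, x, y)
```

With the 1:1 mapping between images and groups, the callback knows exactly which image's area to invalidate.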
> I guess my argument is... What a device cluster means is highly
> context-dependent, and at the driver level there's just no context.
there may be at the X server level though (for keyboard handling, for example).
Some feedback channel to inform the server about device groups may come in
handy.
> > I see the selection of cluster position as a bit of an arbitrary implementation
> > detail (or even a normal option to accommodate differing preferences). Four
> > ideas come to mind:
> > 1. eldest active contact
> > 2. first contact (the cluster doesn't move once that first finger leaves the sensor)
> > 3. geometric center
> > 4. completely independent.
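The four policies above can be made concrete with a small sketch (a hypothetical Contact record, not actual server code; option 4 needs no derivation function, since there the position is independent of the contacts):

```python
# Sketch of cluster-position policies 1-3; Contact is a made-up record.
from dataclasses import dataclass

@dataclass
class Contact:
    x: float
    y: float
    t_down: float  # time the contact first touched the sensor
    active: bool   # still on the sensor?

def eldest_active(contacts):
    """Policy 1: position follows the oldest contact still touching."""
    live = [c for c in contacts if c.active]
    return min(live, key=lambda c: c.t_down) if live else None

def first_contact(contacts):
    """Policy 2: position stays with the very first contact, even after
    that finger has left the sensor."""
    return min(contacts, key=lambda c: c.t_down) if contacts else None

def geometric_center(contacts):
    """Policy 3: centroid of all active contacts."""
    live = [c for c in contacts if c.active]
    if not live:
        return None
    return (sum(c.x for c in live) / len(live),
            sum(c.y for c in live) / len(live))
```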
> > Consider the magic mouse for a moment. It has a real conventional pointing
> > device and an independent mt surface. I think that clients that want to
> > subscribe to finger positions on that touch surface should see them as somehow
> > related to the position of the conventional pointer.
> It is true that the multiple device approach fits better for multitouch
> screens than for multitouch mice/touchpads, at least intuitively.
> Perhaps, for such devices, the driver should send relative events to the
> window the main device is over at that moment.
given the right protocol requests, a client could issue a gesture passive
grab on the device. This solves the touchpad case, providing functionality
similar to what OS X offers at the moment (4-finger swipe for Expose,
etc.). If you make the request dependent on the number of fingers, then
well, you're nearly there. All you have to do is the short stroll through
the swamp filled with hungry crocodiles to reach your goal.
> > I think we should eventually talk about supporting mt from one or more sensors
> > in multiple windows/locations. But I would like to think in terms of spatial
> > clustering. For example, we can cluster based on the size of a hand. Each hand
> > gets a core pointer with one or more fingers annotated.
> Then we'd be talking about multiple devices with annotated touchpoints
> for a single hardware device, right? Unless the hardware provides such
> information, I don't think doing smart clustering like that will yield
> anything profitable; I could place my fingers so that both hands' areas intersect.
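For what it's worth, the hand-sized clustering idea can be sketched as single-linkage grouping with a distance threshold (the threshold value here is an assumption, and as noted above, overlapping hands defeat it):

```python
# Rough sketch of hand-span clustering; HAND_SPAN is an assumed value.
import math

HAND_SPAN = 120.0  # assumed maximum finger spread, in device units

def cluster_contacts(points, max_dist=HAND_SPAN):
    """Single-linkage clustering of (x, y) contacts: two points share a
    cluster if a chain of points, each within max_dist of the next,
    connects them. O(n^2), fine for a handful of fingers."""
    clusters = []
    for p in points:
        # Find every existing cluster this point touches...
        touching = [cl for cl in clusters
                    if any(math.dist(p, q) <= max_dist for q in cl)]
        # ...and merge them all, together with the point, into one.
        for cl in touching:
            clusters.remove(cl)
        merged = [p]
        for cl in touching:
            merged.extend(cl)
        clusters.append(merged)
    return clusters
```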
> If the hardware supported specifying the user/hand an event belongs
> to, it would make most sense to me to have some way to relate DIDs to
> their main SD able to send core events.