Bradley T. Hughes
Mon Mar 1 07:36:05 PST 2010

On 03/01/2010 04:26 PM, ext Matthew Ayres wrote:
> On Mon, Mar 1, 2010 at 3:26 PM, Matthew Ayres <solar.granulation at> wrote:
>> On Mon, Mar 1, 2010 at 3:05 PM, Bradley T. Hughes wrote:
>>> On 03/01/2010 03:34 PM, ext Daniel Stone wrote:
>>>> On Mon, Mar 01, 2010 at 02:56:57PM +0100, Bradley T. Hughes wrote:
>>>> This is where the context confusion comes in. How do we know what the
>>>> user(s) is/are trying to do solely based on a set of x/y/z/w/h
>>>> coordinates? In some cases, a single device with multiple axes is enough,
>>>> but in other cases it is not.
>>> Sure.  But in this case you don't get any extra information from having
>>> multiple separate devices vs. a single device.  The only difference --
>>> aside from being able to direct events to multiple windows -- is the
>>> representation.
>> Correct. However, I think that being able to direct events to multiple
>> windows is the main reason we're having this particular discussion. How do
>> we do it, given the current state of the art?
> This question made me feel like I was at an ice cream stall, trying to
> pick a flavour I like that doesn't have too many bugs in it :P

Heh :P

> > > If the hardware is intelligent enough to be able to pick out different
> > > fingers, then cool, we can split it all out into separate foci and it's
> > > quite easy.
> > I don't think hardware is that intelligent... yet. I forget the name of
> > the program (not CCV as far as I know), but there does exist a program that
> > implements the TUIO protocol WITH support for object ids. It can do object
> > recognition under special circumstances by looking for and identifying
> > infrared reflectors placed on the table's surface (and these reflectors are
> > often attached to an object). Programs could then map these object ids to
> > something meaningful (object id 5, mapped to "Brad's phone", could sync my
> > email, for example). I don't know of anything that tries to identify
> > individual fingers, though.
> reacTIVision. My very involvement here is a result of wanting to use
> reacTIVision's fiducial markers in MPX. I consider the availability of
> fiducial tracking vital and imagine each registered fiducial being slaved to
> a unique MD.

Right, reacTIVision, thanks.
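
Just to make the mapping idea concrete, here's a rough sketch. The ids, the
labels, and the fiducial_binding table are all invented for illustration;
this isn't anything reacTIVision or the TUIO protocol actually defines:

/* Hypothetical sketch: map TUIO /tuio/2Dobj fiducial (class) ids to
 * application-level labels.  The ids and labels below are made up. */
#include <stdio.h>
#include <stddef.h>

struct fiducial_binding {
    int class_id;       /* marker id reported by the tracker */
    const char *label;  /* what the application decides it means */
};

static const struct fiducial_binding bindings[] = {
    { 5, "Brad's phone" },
    { 7, "coffee cup" },
};

static const char *label_for(int class_id)
{
    for (size_t i = 0; i < sizeof(bindings) / sizeof(bindings[0]); ++i)
        if (bindings[i].class_id == class_id)
            return bindings[i].label;
    return "unknown object";
}

int main(void)
{
    /* Pretend the tracker just reported fiducial 5 entering the surface. */
    printf("fiducial 5 -> %s\n", label_for(5));
    return 0;
}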

> I have high hopes of Ryan Huffman's xf86-input-tuio driver and am looking
> forward to the inclusion of certain features to ease this behaviour.
>>> Failing that, how are we supposed to do it? Say two people have a
>>> logical button press active (mouse button, finger down, pen down,
>>> whatever) at once.  Now a third button press comes along ... what do we
>>> do? Is it a gesture related to one of the two down? If so, which one
>>> (and which order do we ask them in, etc).  A couple of years ago we
>>> still could've guessed, but as Qt and GTK are now doing client-side
>>> windows, it's really hard to even make a _guess_ in the server.
>> Right, and this was Peter's point... the X server can't know it and
>> shouldn't try to guess. What I did in Qt was to deliver the 3rd touch point
>> together with its closest neighbor (if the 3rd touch point was not over a
>> widget explicitly asking for touch events, that is).
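
Roughly what "closest neighbor" means there, as a sketch only (this is not
the actual Qt code, and the names are invented):

/* Assign a new touch point to the group of the nearest existing point.
 * Purely illustrative. */
#include <stdio.h>
#include <float.h>

struct touch_point { double x, y; int group; };

static int nearest_group(const struct touch_point *pts, int n,
                         double x, double y)
{
    double best = DBL_MAX;
    int group = -1;
    for (int i = 0; i < n; ++i) {
        double dx = pts[i].x - x, dy = pts[i].y - y;
        double d2 = dx * dx + dy * dy;
        if (d2 < best) { best = d2; group = pts[i].group; }
    }
    return group;
}

int main(void)
{
    struct touch_point active[] = { { 100, 120, 1 }, { 640, 300, 2 } };
    /* A third touch arrives at (610, 320); it joins its closest neighbor. */
    printf("new touch joins group %d\n", nearest_group(active, 2, 610, 320));
    return 0;
}

As the parenthetical above says, the distance test only kicks in when no
widget has explicitly asked for the touch events.
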
> To me this sounds almost like saying that touch events should be handled
> no differently than mouse events, but that doesn't seem right. A mouse is
> always present, it always has a position. A touch-sensitive slave/physical
> device may always be attached, but unless something is touching it, isn't it
> essentially absent?

Yes, essentially. This could be where the sub-device idea comes in. The 
physical device is there, but there are no active points generating events. 
As it stands today, the last touch leaves the X pointer at the location of 
the last touch, and that can generate Enter/Leave events should the window 
structure under the pointer change.
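
If it helps to see the crossing events in question, here's a bare-bones core-X
client that just watches for them (nothing MPX- or touch-specific, purely
illustrative; build with -lX11). Park the pointer over its window and then
map/unmap other windows above it:

/* Select Enter/Leave events on a window and print them as they arrive. */
#include <stdio.h>
#include <X11/Xlib.h>

int main(void)
{
    Display *dpy = XOpenDisplay(NULL);
    if (!dpy)
        return 1;

    Window win = XCreateSimpleWindow(dpy, DefaultRootWindow(dpy),
                                     0, 0, 200, 200, 0, 0, 0);
    XSelectInput(dpy, win, EnterWindowMask | LeaveWindowMask);
    XMapWindow(dpy, win);

    for (;;) {
        XEvent ev;
        XNextEvent(dpy, &ev);
        if (ev.type == EnterNotify)
            printf("EnterNotify at %d,%d\n", ev.xcrossing.x, ev.xcrossing.y);
        else if (ev.type == LeaveNotify)
            printf("LeaveNotify\n");
    }
}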

Bradley T. Hughes (Nokia-D-Qt/Oslo)
Sandakervn. 116, P.O. Box 4332 Nydalen, 0402 Oslo, Norway
