Carsten Haitzler (The Rasterman) raster at
Mon Jan 18 14:54:37 PST 2010

hey guys (sorry for starting a new thread - i only just subscribed - lurking on
xorg as opposed to xorg-devel).

interesting that this topic comes up now... multitouch. i'm here at samsung and have multi-touch capable hardware - it supports up to 10 touch points, so i need multi-touch support.

now... i read the thread. i'm curious. brad - why do you think a single event (vs
multiple) means fewer context switches (and thus less power consumption, cpu
use, etc.)?

as such your event is delivered along with possibly many others in a buffer - the x
protocol is buffered, so a read will pull as much data as it can into the
buffer and process it. this means your 2, 3, 4, 5 or more touch events should
get read (and written from the server side) pretty much all at once, get put
into a single buffer, and then XNextEvent will just walk the buffer processing
the events. even if by some accident they don't end up in the same read and
buffer and you do context switch, you won't save battery, as the cpu will never
have gone idle long enough to enter any low-power mode. and you should be
seeing all these events alongside other events anyway (core mouse
press/release/motion etc. etc. etc.). so i think the power/cpu argument for
putting it all in 1 event is a bit specious. but... do you have actual data
showing that such events don't get buffered in the x protocol as they should
be and don't end up getting read all at once? (i know that my main loop will
very often read several events from a single select wakeup before going back to
sleep, as long as the events come in faster than they can be acted on, as they
also get processed and batched into yet another queue before any rendering
happens at the end of that queue processing.)

but - i do see that if osx and windows deliver events as a single blob for
multiple touches, then doing something different just creates work for
developers who have to adapt. i also see the argument for wanting multiple
valuators to deliver the coords of multiple fingers for things like pinch,
zoom, etc. etc. BUT this doesn't work for other uses - e.g. a virtual
keyboard where i am typing with 2 thumbs - my presses are actually independent
presses, like 2 core pointers in mpx.

so... i think multiple valuators vs multiple devices for mt events can be
argued both ways, and i don't think either side has a clearly stronger
case... except that multiple events from multiple devices work better with
mpx-aware apps/toolkits, and work better for the more complex touch devices
that deliver not just x, y but x, y, width, height, angle, pressure, etc. etc.
per point (so each point may have a dozen or more valuators attached to it).
packing a compact set of points into a single event makes life harder when you
need all that extra data for each separate touch.

so i'd vote for how tissoires did it, as it allows more information per
touch point to be sanely delivered. as such, that's how we have it working right
now. yes - the hw can deliver all points at once, but we produce n events.
what i'm wondering is... should we:

1. have 1, 2, 3, 4 or more (up to 10) core devices, each one a touch point.
2. have 1 core device with 9 slave devices (core is first touch and core pointer).
3. have 1 core device for the first touch and 9 floating devices for the other touches.

they each have their own issues. right now we do #3, but #2 seems very
logical. #1 seems a bit extreme.

remember - we need to keep compatibility with single-touch (mouse only) events and
apps, as well as expand to be able to deliver the multi-touch events if wanted.

------------- Codito, ergo sum - "I code, therefore I am" --------------
The Rasterman (Carsten Haitzler)    raster at
