[RFC XI 2.1 - xf86-input-evdev 2/3] Add experimental XI 2.1 multitouch support

Chase Douglas chase.douglas at canonical.com
Mon Dec 13 12:28:30 PST 2010


On 12/09/2010 05:29 PM, Peter Hutterer wrote:
> On Mon, Dec 06, 2010 at 09:41:48AM -0800, Chase Douglas wrote:
>> On 12/05/2010 10:41 PM, Peter Hutterer wrote:
>>> if the kernel can send it through one device, we can handle it, right?
>>> if both are sent through the same axes (and need a serial or something to
>>> differentiate like the wacom drivers) then yes, they need to be split up
>>> into multiple devices.
>>
>> I think we're in agreement about what we can handle. We should be able
>> to handle whatever the kernel sends us. But I feel that it's impossible
>> to have a kernel device that sends touch information through the same
>> properties for different surfaces. Thus, we don't need per-axis touch modes.
> 
> it's much easier for the kernel to add new information to the event
> interface than it is for us to change the protocol. the initial MT protocol
> didn't have pressure, for example, no tracking ID (IIRC), etc. all these
> have been added since, while we were stuck with the same protocol version.
> how hard would it be to add REL_MT_POSITION_X to the kernel and have devices
> send events?

Certainly there are issues we find as we develop new protocols and then
have new capabilities added to devices. There's a limit, though, to what
we can prognosticate. I believe we've reached such a limit here.

As far as REL_MT_POSITION_X is concerned, I think the current X protocol
would handle that just fine. Of course, I don't have any idea what
REL_MT_POSITION_X means, but I think it would just be turned into
positions on screen in the server before events are sent to clients.
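Just to sketch what I mean (purely illustrative, since REL_MT_POSITION_X
doesn't exist today, and touch_state/handle_rel_mt_x are names I made up
for this example), the driver could accumulate the relative deltas into
per-touch absolute positions and the rest of the pipeline wouldn't need
to change:

/* Hypothetical: REL_MT_POSITION_X does not exist in the kernel today.
 * If it were added, the driver could fold the relative deltas into a
 * per-touch absolute position, so clients would still only ever see
 * on-screen coordinates. */
#include <stdint.h>

#define MAX_TOUCHES 10

struct touch_state {
    int32_t tracking_id;    /* -1 when the slot is unused */
    int32_t x, y;           /* accumulated absolute position */
};

static struct touch_state touches[MAX_TOUCHES];

/* Accumulate a relative X delta for one touch, clamped to the screen. */
static void
handle_rel_mt_x(int slot, int32_t delta, int32_t screen_width)
{
    struct touch_state *t = &touches[slot];

    t->x += delta;
    if (t->x < 0)
        t->x = 0;
    else if (t->x >= screen_width)
        t->x = screen_width - 1;

    /* ...then post t->x to the server as an absolute valuator value. */
}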

> so unless you're _sure_ that in two or three years time we won't have
> devices with multiple modes I'd say add a per-valuator flag. Worst case,
> it's always 0 or 1 for all valuators for the foreseeable future.

I understand the urge to make it per-valuator, and it wouldn't hurt, but
it seems like over-engineering the issue. I can't even figure out how
events are supposed to be handled with per-valuator modes.

For example, let's say there's a device with a touchscreen and two touch
strips. Let's assume the kernel provides us with ABS_MT_POSITION_{X,Y}
and maybe ABS_STRIP1_Y and ABS_STRIP2_Y. Each area also has a touch
pressure axis. Here's what the touch class could look like with
per-valuator modes:

0: Touchscreen X, direct
1: Touchscreen Y, direct
2: Touchscreen Pressure, direct
3: Strip 1 Y, indirect
4: Strip 1 Pressure, indirect
5: Strip 2 Y, indirect
6: Strip 2 Pressure, indirect

X valuator axes are labelled; that is how a client knows what an axis
represents. We would need to define new labels to distinguish axes 1, 3,
and 5 from each other, and similarly for 2, 4, and 6. IMO, doing so
would just be messy and difficult for client applications to use.
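To illustrate why, here's a rough client-side sketch against the
existing XI 2 calls (XIQueryDevice and XIValuatorClassInfo are real; the
strip label strings and the dump_valuators helper are made up for this
example). The client ends up pattern-matching label atoms just to learn
which surface an axis belongs to:

#include <stdio.h>
#include <X11/Xlib.h>
#include <X11/extensions/XInput2.h>

static void
dump_valuators(Display *dpy, int deviceid)
{
    int ndevices, i;
    XIDeviceInfo *info = XIQueryDevice(dpy, deviceid, &ndevices);
    /* Hypothetical labels; nothing like them is defined today. */
    Atom strip1_y = XInternAtom(dpy, "Abs MT Strip 1 Y", False);
    Atom strip2_y = XInternAtom(dpy, "Abs MT Strip 2 Y", False);

    if (!info)
        return;

    for (i = 0; i < info->num_classes; i++) {
        XIValuatorClassInfo *v;

        if (info->classes[i]->type != XIValuatorClass)
            continue;
        v = (XIValuatorClassInfo *) info->classes[i];

        /* The client must match label atoms just to learn which
         * physical surface an axis belongs to. */
        if (v->label == strip1_y)
            printf("axis %d: strip 1\n", v->number);
        else if (v->label == strip2_y)
            printf("axis %d: strip 2\n", v->number);
        else
            printf("axis %d: touchscreen (or unknown)\n", v->number);
    }

    XIFreeDeviceInfo(info);
}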

Further, I don't believe the kernel evdev interface would ever be
extended in the same way.

Note that the above is based on touch strips, which could be handled
much more easily through the normal pointer class valuators. However,
the same issues apply to devices with multiple 2D input surfaces.

Instead, we could simply use separate input devices for each distinct
physical input region. If you expand the above scenario to a device with
two or more 2D touch areas, I think the kernel would do this in the
driver anyway.
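As a rough kernel-side sketch of what I mean (the device names, axis
choices, and register_surfaces are invented, and error handling is
trimmed), the driver could simply register one input device per
surface:

#include <linux/bitops.h>
#include <linux/errno.h>
#include <linux/input.h>

static struct input_dev *screen_dev;
static struct input_dev *strips_dev;

static int register_surfaces(void)
{
    int error;

    screen_dev = input_allocate_device();
    strips_dev = input_allocate_device();
    if (!screen_dev || !strips_dev)
        return -ENOMEM;

    screen_dev->name = "Example Touchscreen";
    __set_bit(EV_ABS, screen_dev->evbit);
    input_set_abs_params(screen_dev, ABS_MT_POSITION_X, 0, 4095, 0, 0);
    input_set_abs_params(screen_dev, ABS_MT_POSITION_Y, 0, 4095, 0, 0);
    input_set_abs_params(screen_dev, ABS_MT_PRESSURE,   0,  255, 0, 0);

    strips_dev->name = "Example Touch Strips";
    __set_bit(EV_ABS, strips_dev->evbit);
    /* ABS_STRIP1_Y/ABS_STRIP2_Y above are hypothetical; a real driver
     * would reuse existing axes or propose new codes upstream. */
    input_set_abs_params(strips_dev, ABS_RY, 0, 255, 0, 0);
    input_set_abs_params(strips_dev, ABS_RZ, 0, 255, 0, 0);

    error = input_register_device(screen_dev);
    if (error)
        return error;
    return input_register_device(strips_dev);
}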

To summarize, I just don't understand how per-valuator modes would give
us any better functionality without being much messier than separate
input devices. It feels like reinventing the master-slave pointer
mechanism. I would be happy to reconsider if anyone could come up with a
viable use case that would illustrate the benefits of per-valuator modes.

-- Chase

