RFC: multitouch support v2
krh at bitplanet.net
Thu Dec 22 08:59:36 PST 2011
2011/12/22 Chase Douglas <chase.douglas at canonical.com>:
> On 12/22/2011 07:53 AM, Kristian Høgsberg wrote:
>> 2011/12/22 Chase Douglas <chase.douglas at canonical.com>:
>>> I don't know wayland's protocol yet, but shouldn't enter/leave events
>>> have some kind of device identifier in them? I would think that should
>>> alleviate any client-side confusion.
>> I don't think so. To be clear, the problem I'm thinking of is where
>> the toolkit does select for touch events, but only to do client side
>> pointer emulation in the toolkit. What should a client do in case the
>> pointer is hovering over a button in one window, when it then receives
a touch down in another window? The toolkit only maintains one
pointer focus (which is currently in the window with the pointer), and what
happens when you receive touch events in a different window? What
>> kind of pointer events do you synthesize? We can't move the system
>> pointer to match the touch position.
> In X we move the cursor sprite to the first touch location, always. This
> is because you have moved the master pointer, so the sprite needs to be
> in sync with the master pointer location.
How do you move the sprite without doing pointer emulation? If the
sprite enters a window, you have to send enter/leave events, and
motion events as it moves around. When I say that I don't know if we
need pointer emulation, I mean that there is no sprite associated with
the touch events, and no enter/leave or button events.
When you touch a surface, you only get a touch_down event, then
touch_motion and then touch_up.
> Off the top of my head, I would think Wayland should automatically
> create the equivalent of X master pointer devices for each touchscreen
> device. There shouldn't be a sprite for touchscreens, though the WM
> could do fancy effects like MS Surface if you wanted it to.
Right... in the MPX sense, right? So you could have a keyboard and
mouse combo controlling one pointer/kb focus and the touch screen
being its own master device. Then maybe you could have one person
using the touch screen UI, and another person using the kb/mouse
combo. That's kind of far-fetched, of course, but I think the main
point is that there's no inherent association between a kb/mouse combo
and a touch screen. On the other hand, what about a setup with two
mouse/kb combos (master devices) and a touch screen... you'd expect
tapping a window on the touch screen to set kb focus, but if you have
multiple master kbs, which kb focus do you set? Maybe we're just
doomed if we try to make both pointer and direct touch interaction
work in the same UI.
>> I guess you could synthesize a leave event for the window the pointer
>> is in but remember the window and position. Then synthesize an enter
>> event for the window with the touch event and send button down and
>> motion events etc. Then when the touch session is over (all touch
>> points up), the toolkit synthesizes an enter event for the window and
>> position the pointer is actually in.
> That sounds hacky, and trying to fit multiple input devices into a
> single input device world.
Most toolkits (and users tbh) still live in a single input device
world. But for toolkits that support multiple pointers entering and
leaving their windows and buttons etc, there's no need to play tricks
with the pointer focus like that, of course.
> I would suggest not sending or synthesizing enter/leave events for touch
> events. The toolkit can switch focus to the last window with a touch
> event or a pointer motion event.
Yeah, again, this is all about client-side (toolkit) pointer
emulation. We don't send enter/leave events for touch, that wouldn't
make sense. But if you have a toolkit where the higher-level logic
doesn't understand multiple pointers/devices, you have to do something
to avoid confusing it when it thinks the pointer is in one window and
then suddenly it gets motion events in another.