[Spice-devel] [Qemu-devel] paravirtual mouse/tablet

Peter Hutterer peter.hutterer at who-t.net
Thu Jan 20 18:21:41 PST 2011


On Thu, Jan 20, 2011 at 12:54:07PM +0100, Gerd Hoffmann wrote:
> On 01/20/11 07:25, Peter Hutterer wrote:
> >Hi guys,
> >
> >I apologize for replying this way; I wasn't on the spice list, and jrb
> >pointed out this thread to me. For those who don't know me, I'm the
> >input maintainer for X.Org. I also know very little about spice, so
> >please take the comments below accordingly. Comments regarding a few
> >things that showed up in this thread, slightly out of order because I
> >read through the web interface:
> 
> Thanks for your input.  I think I'll start with some background
> information ...
> 
> Mouse handling in virtual machines is a bit tricky.  The guest's
> display is usually some window on the host machine.  The mouse
> cursor is at some position within that window, and it works best
> when we can pass through that absolute position to the guest.  The
> device is something which doesn't exist as real hardware.  It is
> like a mouse, but with absolute instead of relative coordinates.  Or
> like a tablet without a pen.

The absolute/relative issue is not a problem in itself; we've been able to
handle absolute devices for ages. What matters (at least in X) is how your
guest window registers for events. If you only register for core events,
you'll only get x/y + buttons, for both mouse and tablet. If you use XI or
XI2, you'll get all the extra information such as pressure, multiple axes,
etc.

Since core is a small subset of XI, I recommend focusing on XI (better:
XI2) for scoping out the requirements. XI2 provides 16 bits for button
numbers and axis numbers, and 32 bits for keycodes.
Everything is negotiable: we support devices with and without axes, with
and without buttons, with and without keys, and pretty much any
combination of those three.
Not counting proximity...
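
For illustration, selecting XI2 events on a window looks roughly like
this (a quick sketch, error handling omitted; the display/window setup is
assumed to exist elsewhere):

    #include <string.h>
    #include <X11/Xlib.h>
    #include <X11/extensions/XInput2.h>

    /* Select XI2 button and motion events on a window.  With
       XIAllDevices we get events from every device, and the device
       events carry the extra valuator (axis) data, such as pressure,
       that core events cannot. */
    static void select_xi2_events(Display *dpy, Window win)
    {
        unsigned char bits[XIMaskLen(XI_LASTEVENT)];
        XIEventMask mask;

        memset(bits, 0, sizeof(bits));
        XISetMask(bits, XI_ButtonPress);
        XISetMask(bits, XI_ButtonRelease);
        XISetMask(bits, XI_Motion);

        mask.deviceid = XIAllDevices;
        mask.mask_len = sizeof(bits);
        mask.mask = bits;

        XISelectEvents(dpy, win, &mask, 1);
        XFlush(dpy);
    }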

> Usually a virtual usb tablet device is used to handle this.  The
> problem with this is that usb emulation is quite cpu-expensive for
> hardware design reasons, so we are looking for a better way.  The
> idea is to use a virtio-serial channel, which is basically a
> bidirectional stream (like a unix socket) between host and guest,
> and run some to-be-designed protocol there.
> 
> The spice-specific issue here is that spice supports multihead, i.e.
> you have two displays in the guest and two windows on the host, and
> mouse positions are reported as (x,y,window).  The question is how
> to handle this best ...

How much do you know about the window configuration?
If you know the root window offset of each window, you can add it to the
event coordinates and the initial axis range, so the offset will be
correct in the client.
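
For example, something along these lines (a sketch;
XTranslateCoordinates does the offset lookup for you):

    #include <X11/Xlib.h>

    /* Translate a window-relative position into root coordinates by
       querying the window's offset from the root window; the result is
       what you'd report for the second head.  A sketch, error handling
       omitted. */
    static void window_to_root(Display *dpy, Window win,
                               int x, int y, int *root_x, int *root_y)
    {
        Window child;
        XTranslateCoordinates(dpy, win, DefaultRootWindow(dpy),
                              x, y, root_x, root_y, &child);
    }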

> >= Multiple Devices =
> >X has supported multiple independent pointer/keyboard devices (not just
> >multiple physical devices, but multiple cursors) since late 2009, and
> >it's now standard on any up-to-date linux distribution (including
> >RHEL6). Not having device identifiers in the core protocol was one of
> >our greatest regrets with input handling.
> >
> >X abstracts devices into so-called master devices and slave devices.
> >The default setup is to have all slave devices (== physical devices)
> >send events through the first master device pair (first cursor and
> >first keyboard focus). This can be reassigned at runtime, so one can
> >create a second cursor and assign any device to it. The cursors can be
> >used simultaneously. Same for keyboards, including gimmicks such as
> >different layouts.
> 
> How can I configure this btw?

Only at runtime; there's no static configuration for this. The quickest
way is the following set of commands:

    xinput list #for a list of current device names
    xinput create-master "somename"
    xinput reattach "my mouse device" "somename pointer"
    xinput reattach "my keyboard device" "somename keyboard"

And you've got a second pair of devices.
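
The programmatic equivalent goes through XIChangeHierarchy. A rough
sketch; the device ids below are made up for illustration, you'd query
the real ones first (xinput list or XIQueryDevice):

    #include <X11/Xlib.h>
    #include <X11/extensions/XInput2.h>

    static void create_second_cursor(Display *dpy)
    {
        XIAnyHierarchyChangeInfo change;

        /* Step 1: create the new master pair ("somename pointer" and
           "somename keyboard"). */
        change.add.type = XIAddMaster;
        change.add.name = "somename";
        change.add.send_core = True;
        change.add.enable = True;
        XIChangeHierarchy(dpy, &change, 1);

        /* Step 2: reattach a slave to it.  After step 1 you'd look up
           the new master's id with XIQueryDevice; these ids are
           hypothetical. */
        change.attach.type = XIAttachSlave;
        change.attach.deviceid = 12;    /* hypothetical slave pointer id */
        change.attach.new_master = 14;  /* hypothetical master pointer id */
        XIChangeHierarchy(dpy, &change, 1);
        XFlush(dpy);
    }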

> >= Mouse Wheel =
> >Mouse wheel is buttons 4,5,6,7 in X by convention, but that is not true in
> >other systems.
> 
> Hmm, isn't that the case even for (at least some) hardware?  With
> both of my wheel mice the wheel moves forward in steps ...

IIRC, the ImPS/2 protocol allows for 4 bits per wheel (don't quote me on
that). Depending on how the kernel abstracts it you may see increments of
+1/-1 only, but you can get more than that.

In the X.Org evdev driver, we convert from REL_WHEEL and REL_HWHEEL to a
sequence of button events, i.e. we explicitly convert axis data to
buttons:
http://cgit.freedesktop.org/xorg/driver/xf86-input-evdev/tree/src/evdev.c#n597
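
The gist of the conversion, boiled down (a simplified sketch of the
approach, not the actual driver code; post_button_event stands in for
the server's posting routine):

    #include <stdlib.h>

    /* Stand-in for the X server's button posting routine. */
    extern void post_button_event(int button, int is_press);

    /* Map a REL_WHEEL delta to a sequence of press/release pairs:
       one pair per detent, button 4 for up, button 5 for down. */
    static void wheel_to_buttons(int delta)
    {
        int button = (delta > 0) ? 4 : 5;
        int clicks = abs(delta);

        for (int i = 0; i < clicks; i++) {
            post_button_event(button, 1);   /* press */
            post_button_event(button, 0);   /* release */
        }
    }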

> >labelling. Having said that, I don't think 32 buttons are enough.
> >Why not just send the button state as it changes?
> 
> x events carry both the button pressed/released and the mask of
> currently pressed buttons, which I tried to mimic.  The mask is
> convenient although redundant.  Removing it will lift the 32-button
> limit ;)
> 
> How do you label the buttons?  Is there an enum?  Or simply strings?

Core X supports up to 255 buttons on a device (the protocol field is a
uint8), but the event mask only covers the first 5, because IIRC it
shares that field with the modifiers (I'd need to look up the details
here, but you get the point).
There's no mask for the higher buttons, so the mask is already pointless
for anything but the most common ones. XI2 has 16-bit fields for button
numbers, so definitely more than 32 :)

Button labels are provided by the driver (where possible) through X atoms
of defined strings; see xserver-properties.h for more info:
http://cgit.freedesktop.org/xorg/xserver/tree/include/xserver-properties.h?id=7c6b5458de9bc7f6cd972a36b56888aaa3d201ee
Note that these are only the "standardised" ones; a device/driver is free
to add different ones.

Button labels are available through XI2 or XI 1.5.
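
From the client side, querying the labels looks roughly like this (a
sketch, assuming an open Display and a known device id; error handling
omitted):

    #include <stdio.h>
    #include <X11/Xlib.h>
    #include <X11/extensions/XInput2.h>

    /* Print a device's button labels: each label is an Atom whose name
       matches one of the strings from xserver-properties.h (or a custom
       one the driver added). */
    static void print_button_labels(Display *dpy, int deviceid)
    {
        int ndevices;
        XIDeviceInfo *info = XIQueryDevice(dpy, deviceid, &ndevices);

        for (int i = 0; i < info->num_classes; i++) {
            if (info->classes[i]->type != XIButtonClass)
                continue;
            XIButtonClassInfo *b = (XIButtonClassInfo *)info->classes[i];
            for (int j = 0; j < b->num_buttons; j++) {
                char *name = b->labels[j] ?
                             XGetAtomName(dpy, b->labels[j]) : NULL;
                printf("button %d: %s\n", j + 1,
                       name ? name : "(unlabeled)");
                if (name)
                    XFree(name);
            }
        }
        XIFreeDeviceInfo(info);
    }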
 
> >Note that there are devices that are both pointer and multitouch
> >devices (Apple's MagicMouse and MagicTrackpad).
> 
> Ok, I understand that for the mouse.  But for the pad?  Isn't there
> just a surface to touch and nothing else?  Does the device behave
> differently depending on how many fingers it detects on the surface?

AFAIK, the kernel converts some of the finger data into the standard
non-MT protocol events in addition to the MT events, so the device is
usable by userspace apps without MT support. With XI 2.1 (X server
multitouch) coming up, you'll see these events happen in parallel in some
cases.
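
You can see that duplication at the evdev level: the same device node
emits both the legacy ABS_* codes and the ABS_MT_* ones. A quick sketch
of watching for both (the event node path is made up, adjust as needed):

    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>
    #include <linux/input.h>

    /* Read events from an MT-capable device and show that legacy
       single-touch (ABS_X/ABS_Y) and MT (ABS_MT_POSITION_*) events
       arrive side by side. */
    int main(void)
    {
        struct input_event ev;
        int fd = open("/dev/input/event5", O_RDONLY);

        if (fd < 0)
            return 1;

        while (read(fd, &ev, sizeof(ev)) == sizeof(ev)) {
            if (ev.type != EV_ABS)
                continue;
            if (ev.code == ABS_X || ev.code == ABS_Y)
                printf("legacy: code %u value %d\n", ev.code, ev.value);
            else if (ev.code == ABS_MT_POSITION_X ||
                     ev.code == ABS_MT_POSITION_Y)
                printf("MT:     code %u value %d\n", ev.code, ev.value);
        }
        close(fd);
        return 0;
    }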

> >= Pressure =
> >Pressure is not a binary state. All devices that I've had to deal with so
> >far have a pressure range (usually client-configured) and then events happen
> >based on passing that threshold (+ a tolerance range, if applicable).
> 
> Ok.  I think for the virtual hardware it is just fine to report the
> pressure to the guest and let the guest handle the interpretation of
> the data (i.e. whether that should be a mouse click or not, depending
> on the threshold and maybe other data).

Heh. That's where it gets interesting again if you get the virtual
hardware's input through X events on the window. (Mini glossary: X client
- the process that pops up the window displaying the guest.)

A number of X drivers have features built in along the lines of "if
pressure goes past this point, send a click". You as the X client will
receive the press, but you cannot know whether it was the result of
pressure or of an actual click; all this is hidden from you. So in
reporting this to the guest, you're now reporting a button click instead
of the pure pressure.

Want a headache scenario? Assume the host touchpad is configured to send
a left button event on tapping, but the guest is configured to send a
right click event on tapping.
The host driver gets the tapping information from the hardware and sends
a button 1 event, which is received by the X client and forwarded as a
button 1 event. The guest's driver only receives a button 1 event and
forwards it on as button 1; it cannot know that this event comes from a
tap.
There's no solution to this other than forwarding the usb device itself.
Once you abstract, you lose information (and have to decide how much
you're willing to lose).

> >This goes as far as auto-adjusting the threshold to accommodate worn styli
> >tips as we do in the wacom driver.
> >
> >It's not quite as simple as Alex wrote:
> >"Touching means:
> >     Touchpad: movement of cursor
> >     Tablet: pressing down a pen"
> 
> I don't see any reason to make a distinction between a tablet and a
> touchpad (from the virtual hardware point of view).  Just pass on
> the information we have:  position, pressure, ...

Correct. IMO the best thing you can do is get as much information from
the device as possible and pass it on unmodified, hoping that the
recipient will interpret it correctly.
> 
> >We use the tool type identifier for this as there are devices that are
> >_both_ touch and tablet devices (e.g. ISDV4 serial tablets).
> 
> i.e. the device can figure out whether you used the finger or the pen?

Correct. The device may even have a different axis range depending on the
tool used, even though it is the same physical surface (I only learned
that this morning :)

Cheers,
  Peter

