Input and games.
Todd Showalter
todd at electronjump.com
Mon Apr 29 12:01:13 PDT 2013
On Mon, Apr 29, 2013 at 1:44 PM, Bill Spitzak <spitzak at gmail.com> wrote:
> Has anybody thought about pens (i.e. Wacom tablets)? These have 5 degrees of freedom (most cannot distinguish rotation about the long axis of the pen). There are also spaceballs with full 6 degrees of freedom.
I think pens need to be their own thing; the needs of pen-based
programs are usually pretty specific. It would be nice if there were
a pen protocol that handled angle, pressure and the like, but at the
same time I get the sense that the year-to-year changes in pens are
pure improvements, which means the protocol would probably need to
express things like pressure and angle as float values. Otherwise,
you'll quickly reach a place where the pen hardware is reporting more
bits of precision than the protocol can encode, or Wacom pens will
start reporting rotation as well.
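To make that concrete, here's a sketch of what I mean by float-based
pen events (field names are invented for illustration, not a proposed
wire format):

```python
from dataclasses import dataclass

@dataclass
class PenEvent:
    # Analog values as normalized floats, so the protocol never caps
    # the precision of future hardware.
    x: float          # surface position, 0.0..1.0
    y: float          # surface position, 0.0..1.0
    pressure: float   # 0.0 (hover threshold) .. 1.0 (max force)
    tilt_x: float     # radians from vertical, along x
    tilt_y: float     # radians from vertical, along y

def encode_10bit(value: float) -> int:
    """What a fixed 10-bit integer field does to the same value."""
    return round(value * 1023)

# A pen with a 13-bit pressure sensor loses detail in a 10-bit field:
fine = 5000 / 8191                    # one sample on a 13-bit sensor
coarse = encode_10bit(fine) / 1023    # round-tripped through 10 bits
quantization_error = abs(fine - coarse)
```

The float field just keeps working when next year's pen reports 16
bits; the fixed-width integer field silently throws the extra
precision away.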
There's also potentially a good argument to be made for supporting
6dof devices, but the bag of devices it would be supporting is...
mixed. Off the top of my head, you'd be looking at potentially
supporting:
- the Wiimote and Nunchuk (accelerometers plus camera support)
- the PS3 DualShock controller (accelerometers)
- the PS3 Move controller and Nav controller (accelerometers plus camera support)
- the Kinect (camera)
- the Leap Motion (camera)
- the Razer Hydra (???)
- the 3Dconnexion SpacePilot, SpaceMouse and SpaceNavigator (sensors)
It's hard to boil that down to any kind of common functionality.
I strongly suspect you'd have to have the protocol be
self-descriptive. The connect message would have to describe the
capabilities of the device. That would be way better than nothing,
but not a lot of fun to write or use. You'll run into fun things like
"the capabilities of the Wiimote can change drastically at runtime,
depending on what the player decides to plug into it". For instance,
the player could yank the Nunchuk from the Wiimote (removing a stick,
a couple of buttons, and a low-res accelerometer) and replace it with
a Classic Controller (two sticks, two shoulder triggers, a dpad and
buttons) plugged in through a Motion Plus (high-res auxiliary
gyroscope). And then pull that and jack the Wiimote into a light
gun.
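As a sketch of how that self-description might work (all the names
here are invented for illustration; this is not a real protocol): the
device advertises its controls at connect time, and re-advertises
whenever an attachment changes:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class DeviceCaps:
    """What a device advertises at connect time (and again on hotplug)."""
    sticks: int = 0
    buttons: int = 0
    accelerometers: int = 0

@dataclass
class Device:
    base: DeviceCaps
    attachment: Optional[DeviceCaps] = None

    def caps(self) -> DeviceCaps:
        # The effective capability set is the base unit plus whatever
        # the player currently has plugged into it.
        a = self.attachment or DeviceCaps()
        return DeviceCaps(
            sticks=self.base.sticks + a.sticks,
            buttons=self.base.buttons + a.buttons,
            accelerometers=self.base.accelerometers + a.accelerometers,
        )

# Counts below are illustrative, not exact hardware specs:
wiimote = Device(base=DeviceCaps(buttons=7, accelerometers=1))
nunchuk = DeviceCaps(sticks=1, buttons=2, accelerometers=1)
classic = DeviceCaps(sticks=2, buttons=9)

wiimote.attachment = nunchuk
caps_before = wiimote.caps()     # 1 stick
wiimote.attachment = classic     # player swaps attachments mid-game
caps_after = wiimote.caps()      # 2 sticks, different button count
```

Every consumer now has to cope with caps() changing under it
mid-game, which is exactly the not-fun-to-write-or-use part.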
In some cases (the Kinect, notably) the data coming in is very raw; I
haven't worked with the Kinect myself, but my understanding is that
what you get is a stream of image pairs: half of each pair is the
(possibly compressed?) RGB camera image, and half is a depth map from
the infrared camera. As far as I know, everything else is signal
processing on the host.
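If that's right, the host-side processing starts with something like
back-projecting each depth pixel through a pinhole camera model (the
intrinsics below are made up for illustration; real ones come from
calibration):

```python
def depth_to_point(px, py, depth_mm, cx, cy, fx, fy):
    """Back-project one depth-map pixel into a 3D point.

    cx, cy are the principal point and fx, fy the focal lengths in
    pixels -- all four would come from camera calibration, not the
    illustrative values used below.
    """
    x = (px - cx) * depth_mm / fx
    y = (py - cy) * depth_mm / fy
    return (x, y, depth_mm)

# A pixel at the image center maps straight down the optical axis:
center = depth_to_point(320, 240, 1500, cx=320, cy=240, fx=580, fy=580)
# center == (0.0, 0.0, 1500)
```

Skeleton tracking, gesture recognition and so on would then be built
on the resulting point cloud, all on the host.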
At the other end, we have the Wiimote pointer, which is also an
infrared camera streaming low-res images, looking at a pair of
infrared LEDs (the two ends of the hilariously misnamed "sensor bar").
As with the Kinect, as far as I understand all the magic happens on
the host in signal processing routines. The Wiimote has
accelerometers as well (plus a gyroscope if you have a Motion Plus
attached), but the "light gun" screen pointer functionality is
entirely driven by the camera looking at two LEDs a known, fixed
distance apart.
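The host-side math is simple enough to sketch: assuming a pinhole
model, the pointer is the midpoint of the two blobs, and a distance
estimate falls out of similar triangles. (The focal length and LED
spacing below are invented for illustration, not real Wiimote
numbers.)

```python
def pointer_from_blobs(b1, b2, image_w=1024, image_h=768,
                       led_spacing_mm=200.0, focal_px=1300.0):
    """Turn two IR blob centroids into a normalized screen pointer
    and a distance estimate. The constants are illustrative; real
    values would come from the camera and sensor bar specs."""
    mx = (b1[0] + b2[0]) / 2.0
    my = (b1[1] + b2[1]) / 2.0
    # Where the camera's view centers between the two reference LEDs:
    pointer = (mx / image_w, my / image_h)
    # Similar triangles: apparent separation shrinks with distance.
    sep_px = ((b1[0] - b2[0]) ** 2 + (b1[1] - b2[1]) ** 2) ** 0.5
    distance_mm = focal_px * led_spacing_mm / sep_px
    return pointer, distance_mm

pointer, dist = pointer_from_blobs((412, 384), (612, 384))
# pointer == (0.5, 0.5); dist == 1300.0 with these invented constants
```

The point being: none of this lives in the device. It's the same
"stream raw sensor data, do the work on the host" pattern as the
Kinect, just at a much smaller scale.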
That's the kind of reason I'm evangelizing a standard gamepad
protocol; solving the whole game controller input problem (with force
feedback and adjustable resistance, lights, sound playback, dpi
settings, alternate configurations and all controls made available) is
desirable for some games, but it is also a herculean task both for the
protocol source and for whatever consumes it. Most games just want
simple mouse or gamepad input. I'd prefer that the more complex input
be available for games that want to use it, but I think if a game
wants to make full use of unusual hardware then it's reasonable to
expect that game to do the heavy lifting.
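To be concrete about what "simple gamepad input" means, this is
roughly the lowest common denominator I have in mind (a sketch, not a
proposed wire format; the bit assignments are illustrative):

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass
class GamepadState:
    """The minimal control set most games actually need."""
    left_stick: Tuple[float, float] = (0.0, 0.0)   # each axis -1.0..1.0
    right_stick: Tuple[float, float] = (0.0, 0.0)
    left_trigger: float = 0.0                      # 0.0..1.0
    right_trigger: float = 0.0
    dpad: Tuple[int, int] = (0, 0)                 # -1, 0 or +1 per axis
    buttons: int = 0                               # bitmask of digital buttons

# Illustrative bit assignments:
FACE_A, FACE_B, START = 1 << 0, 1 << 1, 1 << 2

state = GamepadState(left_stick=(0.3, -0.8), buttons=FACE_A | START)
a_down = bool(state.buttons & FACE_A)   # True
```

Everything beyond this (lights, rumble, extra axes, attachments) is
the part I'd leave to games willing to do the heavy lifting.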
> One idea I remember from IRIX was that all the analog controls were 1-dimensional. A mouse was actually 2 analog controls. This avoids the need to define how many degrees of freedom a control has; instead it is just N different controls. Quaternions are a problem, though, because the 4 numbers are not independent, so there must be a way to get a set of changes together.
That kind of worked on IRIX because the standard controls were
simple (three-button mouse, standard keyboard) and the nonstandard
controls (like those insane video control battleboards) were driven
by very expensive per-seat, per-year software that could afford to
burn engineering time building custom support for something that
wouldn't look out of place in the cockpit of something from Gundam.
In practice, trying to write end-user software on self-descriptive
input protocols winds up being a bit of a pain, at least in my
experience.
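For what it's worth, the quaternion case has a workable shape even
with purely 1-D controls: stage the per-axis updates and apply them
atomically at an explicit end-of-group event, something like this
sketch (names invented):

```python
class AxisGroup:
    """Accumulate 1-D axis updates and apply them atomically at a
    group boundary, so dependent values (like the four components of
    a quaternion) are never seen half-updated."""

    def __init__(self, names):
        self.applied = {n: 0.0 for n in names}   # what readers see
        self.pending = {}                        # staged updates

    def axis(self, name, value):
        self.pending[name] = value               # staged, not visible yet

    def frame(self):                             # end-of-group event
        self.applied.update(self.pending)
        self.pending.clear()

quat = AxisGroup(["qw", "qx", "qy", "qz"])
quat.axis("qw", 0.7071)   # a 90-degree rotation about z, delivered
quat.axis("qz", 0.7071)   # as two independent 1-D axis updates
half_done = quat.applied["qw"]   # still 0.0: nothing visible yet
quat.frame()
done = quat.applied["qw"]        # 0.7071: both components land together
```

That keeps the "everything is N independent 1-D controls" model while
still letting a consumer read a consistent rotation.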
> Another idea was that buttons had the same api as analog controls, it's just that they only reported 0 or +1, never any fractions (and since it sounds like some controls have pressure-sensitive buttons this may make it easier to use the same code on different controls).
Pressure-sensitive buttons need to deliver pressure information
separately from pressed/not-pressed, because otherwise you get
software written by people who don't have pressure-sensitive buttons,
and they just test for 1.0f to see if something is down. Which works
just great for them, because their purely digital controller buttons
always return 1.0f when touched. At which point the people with
pressure-sensitive buttons file bugs saying "why do I have to really
hammer the buttons to get the menu to come up?".
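Concretely, the failure mode looks like this (a toy sketch; the fix
is just delivering a separate digital pressed flag alongside the
analog value):

```python
from dataclasses import dataclass

@dataclass
class Button:
    pressed: bool    # digital make/break, from the switch
    pressure: float  # analog 0.0..1.0, if the hardware reports it

def buggy_is_down(b: Button) -> bool:
    # Written and tested on a digital-only pad, where pressure is
    # always exactly 0.0 or 1.0:
    return b.pressure == 1.0

def correct_is_down(b: Button) -> bool:
    return b.pressed

# A normal tap on a pressure-sensitive pad:
tap = Button(pressed=True, pressure=0.4)
missed = buggy_is_down(tap)     # False: the "hammer the button" bug
caught = correct_is_down(tap)   # True
```

With the digital flag in the protocol, the naive code is the correct
code, and the analog value is there for games that want it.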
Todd.
--
Todd Showalter, President,
Electron Jump Games, Inc.
More information about the wayland-devel
mailing list