GSoC Proposal the second

janikjaskolski at aol.com
Thu Mar 24 05:57:21 PDT 2011


 

Hello everyone,

here is the second, slightly more specific version of my proposal (thanks for the feedback, marcoz & cnd).

With the increasing everyday use of convertible notebooks, tablet PCs, and other touchscreen-controlled devices, the need for more comprehensive control elements must be addressed.

Even though multitouch interaction and interactive surfaces already provide control elements, their usability is not comparable to the variety offered by mouse, keyboard, and other input devices. A user is very limited in how he or she can interact with the machine once the only input device is a touch-sensitive surface.

Those limitations may be lifted by taking in additional sensory input and connecting it to already registered touch-triggered events.
I am currently working on a bachelor thesis that captures microphone feeds using a standard C audio library and reduces that input to a quickly queryable format, enabling correlation with xf86-input-evdev events. I would regard that thesis as a prototype for the proposed GSoC project.
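
To give an idea of the capture-and-reduce step, here is a minimal sketch. The thesis does not mandate a particular library, so this assumes PortAudio; the threshold value and the ring buffer of spike timestamps are placeholders for illustration, not the thesis code itself:

#include <math.h>
#include <portaudio.h>

#define RING_SIZE       64
#define SPIKE_THRESHOLD 0.5f   /* placeholder amplitude threshold */

/* Ring buffer of spike timestamps; the driver side polls this and
 * correlates the timestamps with incoming touch events. */
static double spike_times[RING_SIZE];
static volatile int spike_head = 0;

static int on_audio(const void *input, void *output,
                    unsigned long frames,
                    const PaStreamCallbackTimeInfo *time_info,
                    PaStreamCallbackFlags flags, void *user_data)
{
    const float *samples = (const float *)input;
    unsigned long i;
    (void)output; (void)flags; (void)user_data;

    /* Treat any sample above the threshold as a spike (e.g. a knock). */
    for (i = 0; i < frames; i++) {
        if (fabsf(samples[i]) > SPIKE_THRESHOLD) {
            spike_times[spike_head % RING_SIZE] = time_info->inputBufferAdcTime;
            spike_head++;
            break;               /* record at most one spike per buffer */
        }
    }
    return paContinue;
}

int main(void)
{
    PaStream *stream;

    Pa_Initialize();
    Pa_OpenDefaultStream(&stream, 1, 0, paFloat32, 44100, 256,
                         on_audio, NULL);
    Pa_StartStream(stream);
    Pa_Sleep(10 * 1000);         /* capture for ten seconds */
    Pa_StopStream(stream);
    Pa_CloseStream(stream);
    Pa_Terminate();
    return 0;
}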

For the GSoC project, I could start by writing a driver that emulates the full functionality of a 5-7 button mouse for touchscreens. The triggering actions would be various combinations of tapping, scratching, and knocking on the screen.

For example:
Knocking the screen once would translate to a right mouse click.
Knocking it twice or three times could map to the fourth and fifth buttons (which would be very useful, e.g., for navigating back and forward in a browser).
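
Inside the driver, that mapping could be as simple as the following sketch. The function name is made up, and which evdev codes count as the "fourth and fifth button" is a policy choice; BTN_SIDE and BTN_EXTRA are the codes that typically end up as browser back/forward:

#include <linux/input.h>

/* Map the number of knocks detected within a short time window
 * to an evdev button code; -1 means "no recognized pattern". */
static int knock_count_to_button(int knocks)
{
    switch (knocks) {
    case 1:  return BTN_RIGHT; /* single knock -> right click */
    case 2:  return BTN_SIDE;  /* double knock -> "back"      */
    case 3:  return BTN_EXTRA; /* triple knock -> "forward"   */
    default: return -1;
    }
}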

Furthermore, something interesting could be scratching the screen in a designated area, which would trigger the key combination Alt+F4 and thus close the currently active window.
Going in that direction, the only limitation is finding enough distinguishable combinations of touches and sounds to emulate useful control-key events.
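
For injecting such synthetic events during prototyping, the kernel's uinput interface is the obvious tool. Below is a rough standalone sketch (not the eventual evdev driver, which would emit the events itself) that creates a virtual keyboard and sends one Alt+F4 press and release; error handling is omitted for brevity:

#include <fcntl.h>
#include <string.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/input.h>
#include <linux/uinput.h>

/* Write one input event to the virtual device. */
static void emit(int fd, int type, int code, int value)
{
    struct input_event ev;

    memset(&ev, 0, sizeof(ev));
    ev.type  = type;
    ev.code  = code;
    ev.value = value;
    write(fd, &ev, sizeof(ev));
}

int main(void)
{
    struct uinput_user_dev dev;
    int fd = open("/dev/uinput", O_WRONLY | O_NONBLOCK);

    /* Announce which events the virtual device may send. */
    ioctl(fd, UI_SET_EVBIT, EV_KEY);
    ioctl(fd, UI_SET_KEYBIT, KEY_LEFTALT);
    ioctl(fd, UI_SET_KEYBIT, KEY_F4);

    memset(&dev, 0, sizeof(dev));
    strncpy(dev.name, "touch-sound-proto", UINPUT_MAX_NAME_SIZE);
    dev.id.bustype = BUS_VIRTUAL;
    write(fd, &dev, sizeof(dev));
    ioctl(fd, UI_DEV_CREATE);
    sleep(1);   /* give the input stack a moment to pick up the device */

    /* Press and release Alt+F4; EV_SYN flushes each report. */
    emit(fd, EV_KEY, KEY_LEFTALT, 1);
    emit(fd, EV_KEY, KEY_F4,      1);
    emit(fd, EV_SYN, SYN_REPORT,  0);
    emit(fd, EV_KEY, KEY_F4,      0);
    emit(fd, EV_KEY, KEY_LEFTALT, 0);
    emit(fd, EV_SYN, SYN_REPORT,  0);

    ioctl(fd, UI_DEV_DESTROY);
    close(fd);
    return 0;
}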

These are some examples of what is possible through audio analysis and spike detection.
My intention is to build the driver in such a way that it does not matter which sensory input the analysis results come from, be it audio, video, or any other device. I would supply an API that makes future extensions in these directions as easy as possible.
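
A first, hypothetical sketch of what that API could look like (all names here are invented at this point): each analyzer backend reports abstract "cues" with a timestamp and a confidence value, and the driver correlates them with touch events without caring where they came from:

#ifndef SENSORY_INPUT_H
#define SENSORY_INPUT_H

#include <stdint.h>

typedef enum {
    SI_CUE_KNOCK,    /* sharp audio spike               */
    SI_CUE_SCRATCH,  /* sustained high-frequency noise  */
    SI_CUE_WAVE      /* future: camera gesture          */
} si_cue_type;

typedef struct {
    si_cue_type type;
    uint64_t    time_us;    /* monotonic timestamp, microseconds */
    float       confidence; /* 0.0 .. 1.0 from the analyzer      */
} si_cue;

/* Implemented by the driver; called by any analyzer backend. */
int si_report_cue(const si_cue *cue);

/* Queried by the driver: cues within [t - window, t] of a touch. */
int si_query_cues(uint64_t t_us, uint64_t window_us,
                  si_cue *out, int max_out);

#endif /* SENSORY_INPUT_H */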


The GSoC project in short:
- evdev extension to expand touchscreen control
    - covering the full functionality of a 5-7 button mouse
    - covering important key combinations
- an abstract driver that works with different sensory input devices
- an API for sensory-input-device analysis code to interact with the extension

Possible future extensions:
- more covered functionality :)
- integration of camera / video analysis (e.g. waving in front of the camera)

Best regards,

Janik aka. Sticks



