[RFC XI 2.1 - inputproto] Various fixes in response to Peter Hutterer's review
Chase Douglas
chase.douglas at canonical.com
Fri Dec 3 07:13:02 PST 2010
On 12/03/2010 01:41 AM, Peter Hutterer wrote:
> On Thu, Dec 02, 2010 at 10:30:56AM -0500, Chase Douglas wrote:
>> A few points I'd like to make:
>>
>> 1. Most MT apps really just want gestures like pinch to zoom or pan to
>> scroll.
>
> Where is this assumption coming from?
>
> Whenever I get to play with an iPhone (for MT user interface
> research only, of course ;), I see plenty of MT input that is not any of
> these gestures. Admittedly, most of the apps I've tried lately were games,
> but the assumption that "most MT apps just want gestures" is dangerous and
> potentially limiting.
>
> AFAICT, we don't know yet, at least not on the free desktop. We don't have
> the number of MT apps to even start talking of "most". Remember, technically
> you can claim 2 out of 3 is "most" but 3 apps is not a useful sample.
I should clarify what I mean, as I haven't really been precise in this
regard.
I believe, though I don't have any data, that most MT apps will fall
into three categories:
1. Document-based apps where gestures will be used to manipulate the
canvas. For example, web browsers with pinch-to-zoom and pan-to-scroll,
and photo editors with a rotate gesture for rotating an image.
2. Specialized MT manipulation applications. For example, MT drawing
applications or 3D modelling applications.
3. Games, nuf said :).
In the first category, I don't think these applications need to listen
for MT events at all. Of course there will be exceptions, but most will
be fine just listening for general gestures. This will reduce
unnecessary wakeups.
In the second category, we have the potential for much more complex
interactions between gestures and MT. These applications will want at
least the bare MT events from X, and they will likely want the entire
path of all touches. Further, they will probably also want to provide
immediate feedback to the user when they touch the screen. Tentative
events cover this scenario.
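Roughly, a category-two app would select the proposed touch events on
its window along the lines of the sketch below. This is illustrative
only; the exact event names and mask semantics may still change in the
draft:

    #include <string.h>
    #include <X11/Xlib.h>
    #include <X11/extensions/XInput2.h>

    /* Select raw touch events on a window.  Sketch only: the event names
     * (XI_TouchBegin and friends) follow the proposed touch events and
     * may not match the final XI 2.1 spelling. */
    static void select_touch_events(Display *dpy, Window win)
    {
        unsigned char bits[XIMaskLen(XI_LASTEVENT)] = { 0 };
        XIEventMask mask = { XIAllMasterDevices, sizeof(bits), bits };

        XISetMask(bits, XI_TouchBegin);
        XISetMask(bits, XI_TouchUpdate);
        XISetMask(bits, XI_TouchEnd);

        XISelectEvents(dpy, win, &mask, 1);
        XFlush(dpy);
    }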
The third category is filled with applications that will likely want
only MT events, and will not want global gestures to be active. In our
Unity environment, we will treat these windows as "greedy", meaning we
will immediately replay touches beginning over them, and we will forgo
all gesture recognition over those windows. Fullscreen MT apps may
actively grab the touch device as well, reducing the overhead of the
initial event propagation to the window manager.
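For the fullscreen case, an active grab on the touch device could look
roughly like the following sketch. Again, the final grab semantics for
touch devices are still under discussion, and deviceid is assumed to
come from XIQueryDevice:

    #include <X11/Xlib.h>
    #include <X11/extensions/XInput2.h>

    /* Actively grab a touch device for a fullscreen MT app so touches are
     * delivered directly, bypassing gesture recognition in the window
     * manager.  'deviceid' would normally come from XIQueryDevice. */
    static Status grab_touch_device(Display *dpy, Window win, int deviceid)
    {
        unsigned char bits[XIMaskLen(XI_LASTEVENT)] = { 0 };
        XIEventMask mask = { deviceid, sizeof(bits), bits };

        XISetMask(bits, XI_TouchBegin);
        XISetMask(bits, XI_TouchUpdate);
        XISetMask(bits, XI_TouchEnd);

        return XIGrabDevice(dpy, deviceid, win, CurrentTime, None,
                            GrabModeAsync, GrabModeAsync, False, &mask);
    }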
I think most MT/gesture applications will fall under category one or
three. The relative distribution between them is not important. In these
categories, there should not be any touch event storm due to tentative
events. The first category will not receive MT events at all, and the
third category will, at most, send a few events at the beginning of a
touch to the window manager.
The second category is more interesting because I can envision these
apps wanting a mix of gestures and MT events within the same window.
Imagine a paint canvas widget next to a scrollable widget for selecting
a paint tool. In these applications we will see more tentative events
being processed. However, I believe tentative events are necessary for
immediate feedback, and their cost is mitigated by the fact that there
will usually be at most two clients receiving events at any time: the
gesture recognizer and the app. Further, I think this category of
applications is much smaller than the first and third. In fact, I think
most serious drawing applications will still be single touch plus
gestures anyway.
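For what it's worth, the accept/reject decision on a tentatively
delivered touch could look something like the sketch below. The call
name here (XIAllowTouchEvents with XIAcceptTouch/XIRejectTouch) is one
possible spelling and may not match what the draft settles on, and
looks_like_gesture() is just a stand-in for the recognizer's heuristic:

    #include <X11/Xlib.h>
    #include <X11/extensions/XInput2.h>

    /* Accept or reject a tentatively delivered touch sequence.  Sketch
     * only: the draft may name this call differently. */
    static void resolve_touch(Display *dpy, const XIDeviceEvent *touch,
                              int looks_like_gesture)
    {
        /* For touch events the touch ID is carried in the detail field. */
        unsigned int touchid = (unsigned int) touch->detail;
        int mode = looks_like_gesture ? XIAcceptTouch : XIRejectTouch;

        /* Accepting claims the touch for the recognizer; rejecting replays
         * the stored events to the next client in the delivery order. */
        XIAllowTouchEvents(dpy, touch->deviceid, touchid, touch->event, mode);
        XFlush(dpy);
    }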
-- Chase