[RFC XI 2.1 - inputproto] Various fixes in response to Peter Hutterer's review
Peter Hutterer
peter.hutterer at who-t.net
Thu Dec 2 22:41:53 PST 2010
On Thu, Dec 02, 2010 at 10:30:56AM -0500, Chase Douglas wrote:
> On 12/01/2010 04:27 PM, Daniel Stone wrote:
> > On Fri, Nov 19, 2010 at 01:52:39PM -0500, Chase Douglas wrote:
> >> A touch event is not delivered according to the device hierarchy. All touch
> >> -events are sent only through their originating slave devices.
> >> +events are sent only through their originating slave devices. However,
> >> +dependent touch devices will only emit touch events if they are attached to a
> >> +master device. This is because touch delivery for dependent devices
> >> +depends on the location of the cursor.
> >
> > I find this fairly worrying. The main reason not to send touch events
> > through MDs is that it would necessarily cause a storm of
> > DeviceChangedEvents. However, we can handwave this away in the spec by
> > avoiding listing touch classes on MDs, and stating (as my original
> > revision did) that touch capabilities must be taken from the SD, as
> > given in the event's sourceid field.
> >
> > The reason this concerns me is that it creates (even more) divergent
> > event delivery paths for touch events vs. normal events. This is a pain
> > in and of itself when trying to understand event flow, which can be
> > difficult at the best of times, but especially if we're going to be
> > generating synthesised pointer events from touch events. This would
> > mean that the same touch would be generating two events which go down
> > completely separate delivery paths. The worst case here is that one
> > touch causes two clients to react and do two different things: this
> > would be bad bad bad bad bad.
> >
> > So I'd be much happier if touch events were also delivered through the
> > MDs as well.
>
> I am ok with sending events through the MD if the DCEs are not generated
> for touch events and MDs do not copy touch axes. I think that alleviates
> the potential performance impact.
>
> Originally, I had missed that XI 2 device events included the device id
> and the source id, so I thought DCEs were required to determine which
> device generated an event. Now that I've got that straight, I feel better
> about this approach.
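(For reference: both ids are already carried in the standard XI2 device
event, so no DCE round-trip is needed to identify the originating slave.
A minimal sketch, using only what inputproto 2.0 provides today:

  #include <stdio.h>
  #include <X11/extensions/XInput2.h>

  /* deviceid is the device the event was delivered through; sourceid
   * is the slave device that actually generated the event. */
  static void print_event_origin(const XIDeviceEvent *ev)
  {
      if (ev->deviceid != ev->sourceid)
          printf("delivered through device %d, generated by slave %d\n",
                 ev->deviceid, ev->sourceid);
  }

A client needing the touch capabilities can therefore always call
XIQueryDevice() on ev->sourceid instead of relying on the MD's classes.)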
>
> [...]
>
> >> +Appendix B: Known Missing Features
> >> +
> >> +??? Any form of grabbing or grab-like semantics for touch events
> >
> > Except that we do already have grab-like semantics: the proposed
> > delivery mechanism is to start at the root window and work its way down
> > to the deepest child, with delivery progressing as clients express
> > disinterest in the event. IOW, exactly like grabs, except that every
> > client receives the events, and all but one are told not to act on them.
> >
> > Since this matches our existing grab semantics so closely, I've proposed
> > to Chase that touch delivery act exactly as normal event delivery does
> > today: start with a list of grabs, going root-to-child, and then when
> > that's exhausted, work your way through a list of normal selections,
> > going child-to-root.
> >
> > Chase's proposed usecases are highly WM-centric, anticipating that the
> > WM (or an external gesture recogniser) does global gesture recognition
> > (e.g. 'this is a pinch action'), and informs clients out-of-band. My
> > proposed usecases are all quite app-centric; I still believe the apps
> > can use a common library to do this, and that there's no need for
> > another client to get involved, which adds at least one roundtrip.
> >
> > So, his plans are best served by root-to-child delivery, whereas mine
> > are best served by child-to-root. In the best tradition of (UNI)X, I
> > suggest we do both, not just to keep everyone happy, but because it's a
> > 1:1 match for the input delivery semantics we have today.
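To make the combined order concrete, here is a rough sketch of the
delivery loop (illustrative C pseudocode, not actual server code; the
window-trace helpers are hypothetical):

  typedef struct Window Window;
  typedef struct TouchEvent TouchEvent;

  /* Hypothetical helpers for walking the window trace. */
  Window *trace_root(TouchEvent *ev);
  Window *trace_child(Window *w);      /* towards the deepest child */
  Window *trace_deepest(TouchEvent *ev);
  Window *trace_parent(Window *w);     /* back towards the root */
  int try_deliver_grab(Window *w, TouchEvent *ev);
  int try_deliver_selection(Window *w, TouchEvent *ev);

  void deliver_touch_event(TouchEvent *ev)
  {
      Window *w;

      /* 1. Grabs first, root-to-child, exactly like grabs today. */
      for (w = trace_root(ev); w; w = trace_child(w))
          if (try_deliver_grab(w, ev))
              return;

      /* 2. Then normal selections, child-to-root. */
      for (w = trace_deepest(ev); w; w = trace_parent(w))
          if (try_deliver_selection(w, ev))
              return;
  }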
> >
> > The one thing that still concerns me here is promiscuous event sending:
> > where every client that has selected for the events receives them
> > whether it wants to or not. The reason given for this is to enable
> > low-latency fallthrough, so that if the WM has a touch grab and decides
> > it doesn't want the touch events, the client doesn't have to round-trip
> > to the server to get a potentially huge buffer of all the touch data.
> >
> > This is fine in theory, and I'm all for avoiding the roundtrips, but I
> > do worry that we've replaced one problem (buffering the touch data,
> > which may be huge, in the X server), with several problems (buffering
> > the touch data, which may be huge, in n clients). Since a client would
> > be able to declare disinterest in a touch stream and pass it on to the
> > next client at any time, every client would have to buffer every touch
> > stream, and be ready to act on it.
>
> I think there are some non-obvious benefits to this approach as well.
> For example, if a client cares about touch state rather than touch path,
> it would not need to buffer the data as it comes in. Think of a drag and
> drop operation. When a touch begins, the client determines which object
> was selected. While it does not yet own the touch, it just keeps track of
> the latest touch position as events come in. Once the client owns the
> touch, it drags the object to the last position seen. Thus, no buffering
> of the touch event stream is necessary.
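A minimal sketch of that drag-and-drop client, with stand-in types for
the proposed (not yet finalised) touch events:

  /* Stand-ins for the proposed protocol events. */
  typedef struct { double root_x, root_y; } TouchUpdateEvent;
  typedef struct { unsigned int touchid; } TouchOwnershipEvent;

  void drag_object_to(double x, double y);  /* app-provided */

  static double last_x, last_y;

  static void on_touch_update(const TouchUpdateEvent *ev)
  {
      /* Remember only the latest position; no buffering of the stream. */
      last_x = ev->root_x;
      last_y = ev->root_y;
  }

  static void on_touch_ownership(const TouchOwnershipEvent *ev)
  {
      /* We now own the touch: move the object to the last position
       * seen, with no replay of intermediate events required. */
      drag_object_to(last_x, last_y);
  }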
>
> If no clients need touch path data, the touch path will not be buffered
> anywhere, saving system resources. Clients that do require path data
> can receive events and store them internally in whatever form they
> need, which may also save system resources.
>
> Thus, I think the only downside of this approach is the potential for
> wakeup storms:
>
> > Chase and I talked quickly about hints for this: clients being able to
> > say 'please do not send me any more events from this touch stream', for
> > cases like a global gesture recogniser that has decided it sees nothing
> > of use to it, as well as the corresponding 'please do not send any other
> > clients any more events from this touch stream', for when a client has
> > decided that the touch stream is meaningful to it, and that it won't
> > pass it on. This would pretty much solve my concerns, except that it's
> > an irritating burden for app developers, and would probably be
> > reasonably difficult to get correct. The penalty for forgetting to do
> > it, or getting it wrong, would be waking up every app with a touch
> > selection in the window trace every time you have an event, as well as
> > making them copy in the touch data, etc.
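With made-up request names (neither exists; this is purely what such an
API could look like), the two hints would be used along these lines:

  #include <X11/Xlib.h>

  /* Hypothetical requests, not part of any published protocol. */
  void XITouchReject(Display *dpy, unsigned int touchid);
  void XITouchAccept(Display *dpy, unsigned int touchid);

  /* A recogniser that sees nothing useful in this stream stops its own
   * wakeups and lets delivery fall through to the next client: */
  static void bow_out(Display *dpy, unsigned int touchid)
  {
      XITouchReject(dpy, touchid);
  }

  /* A client that has decided the touch is meant for it cuts off
   * delivery to everyone else. Forgetting either call is what causes
   * the wakeup storms described above: */
  static void claim_touch(Display *dpy, unsigned int touchid)
  {
      XITouchAccept(dpy, touchid);
  }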
>
> A few points I'd like to make:
>
> 1. Most MT apps really just want gestures like pinch to zoom or pan to
> scroll.
Where is this assumption coming from?
Whenever I get to play with an iPhone (for MT user interface
research only, of course ;), I see plenty of MT input that is not any of
these gestures. Admittedly, most of the apps I've tried lately were games,
but the assumption that "most MT apps just want gestures" is dangerous and
potentially limiting.
AFAICT, we don't know yet, at least not on the free desktop. We don't have
enough MT apps to even start talking about "most". Remember, technically
you can claim 2 out of 3 is "most", but 3 apps is not a useful sample size.
Cheers,
Peter
> If these apps subscribe to a system-wide gesture recognizer,
> they won't need to subscribe to MT through X. This would allow them to
> have the MT support they want without being awakened on every MT event.
>
> 2. MT apps that want to do atypical gesture processing or raw MT
> handling, like a drawing app, are the types of apps that could best use
> tentative events. They can begin gesture processing before they own the
> touch, or they could start drawing to provide some feedback.
>
> So I think it boils down to: normal clients should not be adversely
> affected, and more complex MT clients will want this flexibility
> anyway. I haven't been able to come up with many potential apps that
> would fall between these two groups, suffering wakeup storms for no
> benefit while being unable to use tentative events effectively for
> immediate feedback or processing.