Some of my thoughts on input for wayland

Chase Douglas chase.douglas at canonical.com
Mon Jan 24 15:51:33 PST 2011


On 01/24/2011 05:31 PM, Kristian Høgsberg wrote:
> 2011/1/24 Chase Douglas <chase.douglas at canonical.com>:
>> I'm not advocating a free-for-all when it comes to input systems, where
>> you pick and choose what you want. I think we should strive, as much as
>> possible, for an input system that is extended rather than rewritten
>> from scratch. Maybe we'll get lucky and never have to rewrite the input
>> system again :). However, every decade or so it seems we need to extend
>> input in ways that break backwards compatibility in the protocol. So
>> essentially, my argument can be boiled down to: I don't think we should
>> explicitly specify a "Wayland" input protocol. Let the input side be
>> provided through extensions, and perhaps ordain a specific extension or
>> set of extensions as the canonical input system at any given time.
> 
> What you describe here is basically how Wayland works.  As I said
> above, the only fixed interface in the Wayland protocol is how to
> discover other interfaces.  When you connect to the compositor, you
> will receive a series of events; each event introduces an available
> object by giving you its id, interface name, and version.  Right now,
> one of these interfaces is "input_device"; you can see the details
> here:
> 
>   http://cgit.freedesktop.org/wayland/tree/protocol/wayland.xml#n386
> 
> The "input_device" interface describes the entire input protocol as it
> is now.  Obviously, there's work to do; right now it's sort of like
> core input + mpx.  But the point is, we can phase this out in favour
> of "input_device2", which can completely replace the "input_device"
> interface.  Then we can keep "input_device" around for a few years
> while we port the world to "input_device2" and then eventually dump
> it.  If X had let us phase out core fonts, core rendering and core
> input as extensions, I think X would have lasted even longer.  It was
> one of the mistakes in X I didn't want to carry over.
> 
> That said, we need to have an input protocol for Wayland 1.0 (or
> whenever it's "ready").  I don't want it to be an out-of-tree project
> or external protocol; I want it to be there as one of the interfaces
> in the Wayland protocol, for applications to rely on.

Ok, I think that satisfies the major issue I saw.
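
As an illustration of that discovery step, a client-side sketch might look
roughly like the following. This only a sketch: it uses the
registry/listener pattern from libwayland-client, and it treats the
"input_device" and "input_device2" names from the discussion above as
placeholders, so the exact entry points and interface strings are
assumptions rather than a finalized protocol.

#include <stdio.h>
#include <string.h>
#include <wayland-client.h>

static void
registry_global(void *data, struct wl_registry *registry, uint32_t name,
                const char *interface, uint32_t version)
{
    /* Each global arrives as an event carrying its id ("name"), an
     * interface string, and a version, as described above. */
    printf("global %u: %s (version %u)\n", name, interface, version);

    if (strcmp(interface, "input_device") == 0) {
        /* Bind the input interface here.  A newer client could prefer
         * a hypothetical "input_device2" global and ignore this one,
         * which is how the old interface would get phased out. */
    }
}

static void
registry_global_remove(void *data, struct wl_registry *registry,
                       uint32_t name)
{
    /* A global went away; nothing to do in this sketch. */
}

static const struct wl_registry_listener registry_listener = {
    registry_global,
    registry_global_remove,
};

int main(void)
{
    struct wl_display *display = wl_display_connect(NULL);
    struct wl_registry *registry;

    if (!display)
        return 1;

    registry = wl_display_get_registry(display);
    wl_registry_add_listener(registry, &registry_listener, NULL);

    /* Wait for the initial burst of global announcements. */
    wl_display_roundtrip(display);

    wl_display_disconnect(display);
    return 0;
}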

>> Second, splitting input into a separate thread or process. We are
>> hitting the serialization challenge with gestures today. We need to be
>> able to analyze multitouch input and determine gestures, but this is
>> dependent on where the touches fall within regions on the screen. There
>> may be two separate compositing windows that want to know about gestures
>> at the same time. Think of two documents open side by side.
>>
>> As we recognize gestures, we must map them to the windows on screen. If
>> the windows move, we have to keep track of that. We are very limited in
>> how we get both of these pieces of data in X, which has forced us into a
>> completely serial approach and makes us worry about the performance
>> impact it will have on the window manager or the window server.
>> However, if we keep the window hierarchy in shared memory with
>> appropriate IPC mechanisms, we can minimize serialization.
> 
> I think it could be feasible to split the gesture recognition out into
> a separate thread, but that's really an implementation decision for a
> given compositor.

As long as it's allowed, I'm happy. We would need to develop some
synchronization mechanisms, but that's just another implementation
decision.
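
To make that concrete, here is a rough, purely hypothetical sketch of one
way a compositor could do it: recognize gestures on a worker thread while
protecting the window geometry it consults with a lock. None of the names
below come from Wayland; they stand in for whatever data the compositor
already keeps about its windows.

#include <pthread.h>
#include <stddef.h>

/* Hypothetical compositor-side structures; nothing here is Wayland API. */
struct window {
    int x, y, width, height;        /* current on-screen geometry */
    struct window *next;
};

struct touch_event {
    int x, y;                       /* touch position in screen coordinates */
};

struct compositor_state {
    pthread_mutex_t window_lock;    /* guards the window list below */
    struct window *windows;         /* updated by the compositor thread */
};

/* Find the window under a touch point.  The caller must hold window_lock
 * so the gesture thread sees a consistent snapshot even while the
 * compositor thread is moving windows around. */
static struct window *
window_under_point(struct compositor_state *state, int x, int y)
{
    struct window *w;

    for (w = state->windows; w != NULL; w = w->next) {
        if (x >= w->x && x < w->x + w->width &&
            y >= w->y && y < w->y + w->height)
            return w;
    }
    return NULL;
}

/* Called on the gesture-recognition thread for each touch: map the touch
 * to a window while holding the lock, then run recognition outside the
 * lock so the compositor is never blocked on the analysis itself. */
static void
handle_touch(struct compositor_state *state, const struct touch_event *ev)
{
    struct window *target;

    pthread_mutex_lock(&state->window_lock);
    target = window_under_point(state, ev->x, ev->y);
    pthread_mutex_unlock(&state->window_lock);

    if (target != NULL) {
        /* feed (target, ev) into the gesture recognizer here */
    }
}

The specific mechanism matters less than the contract: whichever thread
moves windows and whichever thread maps touches to windows have to agree
on a lock (or an equivalent snapshot), which is the kind of
synchronization mechanism mentioned above.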

Thanks for following up; I'm glad to see where Wayland is heading :).

-- Chase

