Input thread on EVoC

Vignatti Tiago (Nokia-D/Helsinki) tiago.vignatti at nokia.com
Wed Jun 9 03:36:59 PDT 2010


Hi,

On Wed, Jun 09, 2010 at 02:37:22AM +0200, ext Fernando Carrijo wrote:
> Tiago Vignatti <tiago.vignatti at nokia.com> wrote:
> > 
> > it's not that straightforward because, as the guys already said, X event
> > dequeuing is very tied to clients, and the server may spend a considerable
> > amount of time just doing the locking/unlocking dance.
> 
> But it is worth trying, right?

I guess no one is sure whether locking the client output buffer is a good idea
or not. It will depend on exactly how much code we end up locking there, and in
the end on whether we can always keep lock contention between the threads low.
Keeping the locking fine-grained in the code is the key here.
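Just to make the fine-granularity point concrete, here is a rough sketch of
what I mean (totally untested, the structure and function names are invented
and are not actual server code): hold the lock only around the copy into the
per-client buffer, never across a flush or any syscall.

  /* Hypothetical per-client output buffer guarded by its own mutex.
   * The critical section is kept as small as possible so the input
   * thread and the main thread rarely contend on it. */
  #include <pthread.h>
  #include <string.h>

  struct client_outbuf {
      pthread_mutex_t lock;
      char            buf[4096];
      size_t          used;
  };

  static void outbuf_append(struct client_outbuf *c,
                            const void *data, size_t len)
  {
      pthread_mutex_lock(&c->lock);
      if (c->used + len <= sizeof(c->buf)) {
          memcpy(c->buf + c->used, data, len);
          c->used += len;
      }
      pthread_mutex_unlock(&c->lock);
  }

If the lock ends up wrapping much more code than that, contention will likely
eat whatever we gain from the extra thread.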

Honestly, I may be suggesting a dumb approach here, but I think the best way to
see whether it's worth it or not is just to go and implement it. Hard to say.

 
> Agreed. I have rebased your input thread code upon a local branch based on
> xserver master and as soon as I figure out how to solve some issues with the
> s3virge driver which serves me at home, I will start benchmarking the X input
> subsystem. Some featureful tools come to mind, like x11perf and perf itself,
> but if you know about anything more appropriate, please enlighten me.

Be sure you have both SW and HW cursor available with this s3virge driver.
Also, an SMP machine is probably desirable in order to see significant
improvements.

For the tools, I can think of a simple program that keeps moving the cursor
while at the same time trying to draw things on screen. x11perf + xtest
probably do the trick for you. But I'm not so sure xtest will follow exactly
the same input paths...
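The cursor-moving half could be as small as something like this (untested
sketch, assumes libXtst is installed; and keep in mind the caveat above that
XTest injection may not exercise exactly the real device path):

  /* Tiny XTest client: keeps injecting pointer motion so the input path
   * stays busy while x11perf draws in another terminal. */
  #include <X11/Xlib.h>
  #include <X11/extensions/XTest.h>
  #include <unistd.h>

  int main(void)
  {
      Display *dpy = XOpenDisplay(NULL);
      if (!dpy)
          return 1;

      for (int i = 0; i < 10000; i++) {
          /* bounce the pointer around a 500x500 area */
          XTestFakeMotionEvent(dpy, -1, i % 500, (i * 7) % 500, 0);
          XFlush(dpy);
          usleep(2000);
      }

      XCloseDisplay(dpy);
      return 0;
  }

Then run something like x11perf -rect500 in parallel and compare the numbers
with and without the input thread.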


> I fear I couldn't parse what you said above. When you talk about the lack of
> predictability, isn't it a natural consequence of us relinquishing the burden of
> process scheduling and caring only about client scheduling? Maybe you implied
> that it is important for us to offer correctness of execution by having some
> control over thread scheduling?

When Daniel and I designed the input thread for event generation, we thought
that such a thread would have a small footprint and would always be kept in
memory while the device is moving. Whenever needed, the CPU would schedule that
thread, which in turn would do the work instantly. However, that wasn't the
case in practice. We couldn't see any apparent improvement, and sometimes I had
the impression it was performing worse. Sigh. It wasn't clear at the time
whether the problem was in fact the kernel scheduler (descheduling the thread
and not prioritizing its execution when it was needed) or the thread being
swapped out to disk. I'd guess the latter.
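One cheap way to separate the two theories would be to bump the thread's
scheduling priority and see whether the behaviour changes. Just a diagnostic
sketch (needs root or CAP_SYS_NICE, and it is not what our branch did):

  /* Give the input thread a real-time priority so the kernel scheduler
   * cannot be the one delaying it; if things still stutter afterwards,
   * the problem is more likely paging than scheduling. */
  #include <pthread.h>
  #include <sched.h>

  static int raise_input_thread_prio(pthread_t tid)
  {
      struct sched_param sp = { .sched_priority = 1 };
      return pthread_setschedparam(tid, SCHED_FIFO, &sp);
  }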

 
> I didn't even try anything like this before, but if I lived in the desert with
> no one else to ask, mlocking would be my first try. Why do people refrain from
> using things like __attribute__((__section__("input_thread_related"))) and some
> linker trickery, à la ld scripts, to put ELF sections into well known virtual
> memory addresses? Lack of portability is the cause, isn't it?

Actually, I tried exactly this. I'm not sure it's the best and most beautiful
way, though. As I said in the previous paragraph, first we have to be sure that
memory (the thread being swapped out to disk) is the problem here. If that is
the case, then we can start working on ways to solve it.
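For reference, the shape of the trick being discussed is roughly this. It is
GCC/GNU-ld specific, the section name and function are made up, and the
__start_/__stop_ symbols are the ones ld generates automatically for sections
whose name is a valid C identifier:

  /* Tag the hot input-thread code with a named ELF section, then mlock()
   * that whole section so it cannot be paged out while the thread runs. */
  #include <sys/mman.h>

  #define INPUT_HOT __attribute__((__section__("input_thread_hot")))

  extern char __start_input_thread_hot[];
  extern char __stop_input_thread_hot[];

  INPUT_HOT void input_generate_motion(int dx, int dy)
  {
      /* ... hot-path event generation would live here ... */
      (void)dx; (void)dy;
  }

  static int pin_input_code(void)
  {
      return mlock(__start_input_thread_hot,
                   (size_t)(__stop_input_thread_hot -
                            __start_input_thread_hot));
  }

mlockall(MCL_CURRENT | MCL_FUTURE) on the whole process would be the blunt
alternative, but that defeats the point of isolating just the input path.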

 
> > Hard to say. But definitely starting to chop off parts and thread them is one
> > way to figure it out :)
> 
> Yes. Peter said the same.
> 
> To be honest, right now I'm prone to doing this outside of EVoC, since it seems
> that the board expects some guarantees, especially related to the timeline, which
> I cannot afford. The reason being that, as I said privately before, I have all
> the time in the world, but unfortunately not all the expertise. Either way, I'm
> really really really keen to start exploring and coding, in or out of EVoC. :)

Okay, I'd say just send the proposal to the board and let's see what the
feedback there is now.

G'luck! :)

             Tiago
