Dispatching and scheduling--basic questions
ajax at nwnk.net
Tue Sep 16 07:10:20 PDT 2008
On Mon, 2008-09-15 at 19:34 -0400, Peter Harris wrote:
> William Tracy wrote:
> > In response to Adam and Tiago's emails, I'm looking at the dispatching
> > and scheduling code, respectively. The most relevant file *seems* to
> > be xserver/dix/dispatch.c, though grep pulls up some stuff under
> > xserver/Xext and xserver/Xi. Any other relevant code I'm missing?
> xserver/dix/dispatch.c is the main file, yes. xserver/os/WaitFor.c also
> springs to mind.
Yeah, the magic happens here in dix/dispatch.c:
    result = XaceHookDispatch(client, MAJOROP);
    if (result == Success)
        result = (* client->requestVector[MAJOROP])(client);
requestVector is the big function table. The core requests occupy the
bottom 128 slots or so. Extension slots are allocated dynamically at
server initialization; each extension gets one 'major opcode' that
dispatches to something like ProcShapeDispatch, which then switches on
the 'minor opcode' in the request to find the actual handler.
> > I have to ask, though: Why does Xorg even have its own scheduler?
> > Unless I am completely misunderstanding its purpose, it seems like
> > this should be pushed off to the OS via pthread. Is this a way of
> > sidestepping concurrency issues by forcing everything to run in one
> > thread at the kernel level?
> For starters, X11 (released 1987, if you ignore X10 and predecessors)
> has been around rather longer than pthreads (POSIX 1003.1c-1995).
> More to the point, every time I think about trying to thread the server,
> I stall with one of two designs. Either one big lock that all the
> threads are blocking on (and then you have to write a lock scheduler
> anyway), or a bazillion little locks, and common ops like "Grab Server"
> or "Configure Window" having to take them all (which sounds slow). Even
> with many small locks, I suspect you'd still have most (non-idle)
> threads blocking on the graphics driver lock much of the time.
> I haven't spent all that much time thinking about a threaded server,
> however. Maybe you can come up with a better design.
People have done threaded X servers. There was actually a concerted
effort around this during the old X Consortium days. It was called MTX,
and it was... not a huge success. I've mirrored the design docs from it.
At one point, evince wouldn't render them properly until you fed them
through ps2pdf first. Maybe that's fixed now. It is, however, totally
worthwhile reading. It's one of the most thorough breakdowns of the X
object model and how requests interact with objects that I've ever seen.
The reason why it wasn't a smashing success is pretty straightforward.
The vast majority of X operation is simply not CPU bound on reasonable
hardware, by which I mean graphics hardware with acceleration. The
thing that takes time is software rendering, so the best you can do with
an MTX-like threading model is scale software rendering linearly with
the number of cores. If you do this, you do it at the expense of
putting a lock around pretty much every protocol object, and the code
complexity from that is far out of proportion to the rendering
performance win, since every[*] computer since 1994 has shipped with
_some_ acceleration.
That said, we have seen some cases where threading would be a real win.
Moving input to a thread for latency reasons looks like it's definitely
worthwhile. Some hardware operations like DDC are slow out of
proportion to the rest, and might be worth executing asynchronously.
Fortunately the dispatch code is equipped to handle this: we have the
ability to add things to idle work queues, set timers, put clients to
sleep for a while and complete their requests once the async task
finishes, and so on. But from a strict performance standpoint, threading
just isn't a win. Anything the X server's doing that takes material CPU
time is simply a bug.
[*] Except embedded stuff, but how often is that both multicore _and_
unaccelerated?