Getting to a GL based X server

Jon Smirl jonsmirl at gmail.com
Thu May 26 16:54:09 PDT 2005


On 5/26/05, Adam Jackson <ajax at nwnk.net> wrote:
> On Thursday 26 May 2005 17:33, Jon Smirl wrote:
> > On 5/26/05, Adam Jackson <ajax at nwnk.net> wrote:
> > > If you're really arguing that every server, even those running on chips
> > > where we have no hardware 3D support, should be running on a GL engine,
> > > then I'll just stop listening now, because you're delusional.
> >
> > No one is taking away your current server. You are free to continue using
> > it.
> 
> But I'm not free to continue improving it?

It's open source, you're free to print it on toilet paper if you feel
so inclined.

> 
> > I might point out that OpenGL-ES is a limited subset of OpenGL and it
> > has been designed for embedded use from day one. There are several
> > proprietary implementations including one that does not assume
> > hardware acceleration or use floating point and fits in 100K of
> > memory.
> 
> And you assume this will be adequately performant for the desktop because...

Desktop would use the current DRI drivers or proprietary stacks from
NVidia/ATI. There can be many different OpenGL implementations running
under the Xgl server.

> > In the new model OpenGL/EGL is the device driver layer. This lets the
> > Xserver get completely out of the device driver business.
> 
> This is misleading; you have merely shifted the driver problem out of
> programs/Xserver.  The drivers still need to get written.  And the list of
> DRI drivers is far too short.

It has shifted the driver problem onto something that is
standardized and much more widely available than a KAA-based system.

It has also raised the API layer of the driver to a much higher level.
This allows hardware to continue integrating functions. At the rate
things are going, we may have hardware that can execute all of OpenGL
on the GPU in a few years.

It also makes it easier for companies like Nvidia/ATI to share a common
OpenGL code base between Windows and Linux.
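To make "OpenGL/EGL as the device driver layer" concrete, here is a rough
sketch (my illustration, not code from Xegl; error handling and the config
attributes are just an example) of a server bringing up a rendering context
purely through the standard EGL entry points, with no device-specific code
of its own:

```c
#include <EGL/egl.h>
#include <stdio.h>

/* Sketch: everything device-specific hides behind libEGL; the
 * server never touches chip registers or a DDX-style driver. */
int main(void)
{
    EGLDisplay dpy = eglGetDisplay(EGL_DEFAULT_DISPLAY);
    EGLint major, minor;
    if (!eglInitialize(dpy, &major, &minor)) {
        fprintf(stderr, "EGL init failed\n");
        return 1;
    }

    /* Ask the vendor-supplied EGL for any config with a color
     * buffer; the same code runs on a software rasterizer or a
     * fully accelerated GPU stack. */
    EGLint attribs[] = { EGL_RED_SIZE, 5, EGL_GREEN_SIZE, 5,
                         EGL_BLUE_SIZE, 5, EGL_NONE };
    EGLConfig cfg;
    EGLint ncfg;
    eglChooseConfig(dpy, attribs, &cfg, 1, &ncfg);

    EGLContext ctx = eglCreateContext(dpy, cfg, EGL_NO_CONTEXT, NULL);
    printf("EGL %d.%d, context %p\n", major, minor, (void *)ctx);

    eglTerminate(dpy);
    return 0;
}
```

The point is that the server links against whichever libEGL the vendor
ships; swapping drivers means swapping that library, not rewriting the
server.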

> > Committing to this driver model allows us to concentrate our resources
> > instead of trying to build three or more competing models.
> 
> I don't count three.  Do you count three?

XAA, KAA, OpenGL

> 
> > There will definitely be a transition period between the old and new
> > models. The first version of the Xegl server works on Linux
> > framebuffer making it very portable.
> 
> This is a delightful interpretation of the word "portable" I was not
> previously aware of.

The current Xegl only relies on the framebuffer. That's about as low a
common denominator as you can get for graphics hardware. It shouldn't
be too hard to bring Xegl up on Solaris. Framebuffer Xegl is just a
demo; obviously you would use hardware acceleration in a production
system.
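For reference, "relies on the framebuffer" means the Linux fbdev
interface, which is about as minimal a driver contract as a display
device offers: a mode description plus a mappable chunk of video memory.
A sketch of the access pattern (my illustration; assumes /dev/fb0
exists, error handling omitted):

```c
#include <fcntl.h>
#include <linux/fb.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    int fd = open("/dev/fb0", O_RDWR);
    if (fd < 0) { perror("open /dev/fb0"); return 1; }

    struct fb_var_screeninfo var;
    ioctl(fd, FBIOGET_VSCREENINFO, &var);  /* resolution, depth */

    struct fb_fix_screeninfo fix;
    ioctl(fd, FBIOGET_FSCREENINFO, &fix);  /* memory length, stride */

    /* Map the pixels; a software GL rasterizer draws straight here. */
    unsigned char *pixels = mmap(NULL, fix.smem_len,
                                 PROT_READ | PROT_WRITE,
                                 MAP_SHARED, fd, 0);

    printf("%ux%u @ %u bpp, %u bytes\n",
           var.xres, var.yres, var.bits_per_pixel, fix.smem_len);

    munmap(pixels, fix.smem_len);
    close(fd);
    return 0;
}
```

Anything that can fill in those two ioctls and the mmap can host a
software GL stack, which is why the framebuffer demo is so portable.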

> 
> > And you're always free to continue using the existing Xserver.
> 
> You've dodged the question.  Why are you even bringing up GLES in the context
> of Xegl?

Because you started off talking about chips with no acceleration
support. Chips with no acceleration are primarily in the market
targeted by OpenGL-ES.

> 
> ---
> 
> If you're trying to make a performance argument, fine.  I don't think anyone
> is questioning that the 3D pipeline has capabilities that we should be
> exploiting.  If you're trying to make a size argument, you're on shakier
> ground.  Size is really not the issue.  Suitable architecture for the
> hardware is; and the GLES implementations that count (read: the free ones)
> generally don't lend themselves to hardware acceleration.
> 
> But you seem to be making a manpower argument along the lines of "if we don't
> have everybody working on this yesterday then the terrorists win".  And I
> would humbly suggest that the solution there is not to herd cats towards your
> goal of choice, but rather to get more people working on X.

There are Xserver terrorists???? I'd better pull my network plug right now!

> Let's take a concrete example.  Say I want to improve i128 support.  Now, it
> has a 3D engine that's good enough for GL, so it can accelerate Render no
> problem.  My options are:
> 
> a) Spend a week or so on converting it to something KAA-like, then do DRI; or
> b) Spend three months getting DRI working under XAA.

You aren't thinking like a proprietary chip vendor. Chip vendors have
to write two drivers: Windows D3D and Windows OpenGL. Everything else
is optional.

Nvidia already ignores DRI and simply ports their Windows OpenGL
driver to Linux. I'm trying to formalize this process. The new model
should result in us getting more support from these vendors. It will
be closed source but at least we will have fully functional drivers.

Don't cry about it not being open source. They aren't giving us the
chip specs to write an open source driver anyway.

> 
> Which one gets results faster?  Which one provides a graduated path that gets
> me both value now and more value later?  Which one teaches me as I'm going,
> making me a more effective X hacker?
> 
> You're saying I should pick option B, and I'm having trouble seeing how that's
> a win for me or for anyone else.  You get a DRI driver in three months either
> way.
> 
> - ajax

-- 
Jon Smirl
jonsmirl at gmail.com


