Getting to a GL based X server
Adam Jackson
ajax at nwnk.net
Thu May 26 15:28:38 PDT 2005
On Thursday 26 May 2005 17:33, Jon Smirl wrote:
> On 5/26/05, Adam Jackson <ajax at nwnk.net> wrote:
> > If you're really arguing that every server, even those running on chips
> > where we have no hardware 3D support, should be running on a GL engine,
> > then I'll just stop listening now, because you're delusional.
>
> No one is taking away your current server. You are free to continue using
> it.
But I'm not free to continue improving it?
> I might point out that OpenGL-ES is a limited subset of OpenGL and it
> has been designed for embedded use from day one. There are several
> proprietary implementations including one that does not assume
> hardware acceleration or use floating point and fits in 100K of
> memory.
And you assume this will be adequately performant for the desktop because...
> In the new model OpenGL/EGL is the device driver layer. This lets the
> Xserver get completely out of the device driver business.
This is misleading; you have merely shifted the driver problem out of
programs/Xserver. The drivers still need to get written. And the list of
DRI drivers is far too short.
> Committing to this driver model allows us to concentrate our resources
> instead of trying to build three or more competing models.
I don't count three. Do you count three?
> There will definitely be a transition period between the old and new
> models. The first version of the Xegl server works on the Linux
> framebuffer, making it very portable.
This is a delightful interpretation of the word "portable" I was not
previously aware of.
> And you're always free to continue using the existing Xserver.
You've dodged the question. Why are you even bringing up GLES in the context
of Xegl?
---
If you're trying to make a performance argument, fine. I don't think anyone
is questioning that the 3D pipeline has capabilities that we should be
exploiting. If you're trying to make a size argument, you're on shakier
ground. Size is really not the issue. Suitable architecture for the
hardware is; and the GLES implementations that count (read: the free ones)
generally don't lend themselves to hardware acceleration.
But you seem to be making a manpower argument along the lines of "if we don't
have everybody working on this yesterday then the terrorists win". And I
would humbly suggest that the solution there is not to herd cats towards your
goal of choice, but rather to get more people working on X.
Let's take a concrete example. Say I want to improve i128 support. Now, it
has a 3D engine that's good enough for GL, so it can accelerate Render no
problem. My options are:
a) Spend a week or so converting it to something KAA-like (roughly the shape of
the sketch below), then do DRI; or
b) Spend three months getting DRI working under XAA.
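For concreteness, here's a rough sketch of what (a) amounts to. The names below
are made up for illustration, not the actual KAA (or i128) entry points; the
point is just the shape of the work: fill in a small set of prepare/do/done
hooks that program the engine, and let the acceleration core drive them,
falling back to software for anything the hooks don't cover.

/* Hypothetical sketch only: struct and hook names are illustrative,
 * not the real KAA or i128 driver interfaces.  Shown: the rough amount
 * of driver code a KAA-like architecture asks for to accelerate solid
 * fills, with composite (the Render-heavy part) left as a follow-up. */

#include <stdint.h>
#include <stdbool.h>

/* Stand-in for a server-side pixmap. */
typedef struct { int width, height, depth; uint32_t offset; } FakePixmap;

typedef struct {
    /* Solid fills: core fb work and the easy half of Render. */
    bool (*prepare_solid)(FakePixmap *dst, int alu, uint32_t planemask,
                          uint32_t fg);
    void (*solid)(int x1, int y1, int x2, int y2);
    void (*done_solid)(void);

    /* Composite: where the 3D engine earns its keep for Render. */
    bool (*prepare_composite)(int op, FakePixmap *src, FakePixmap *mask,
                              FakePixmap *dst);
    void (*composite)(int src_x, int src_y, int mask_x, int mask_y,
                      int dst_x, int dst_y, int w, int h);
    void (*done_composite)(void);
} FakeAccelHooks;

/* Do-nothing i128-ish hooks, just to show the scale of the task. */
static bool i128_prepare_solid(FakePixmap *dst, int alu, uint32_t planemask,
                               uint32_t fg)
{
    /* Program the blitter: destination pitch/offset, ROP, fill colour. */
    (void)dst; (void)alu; (void)planemask; (void)fg;
    return true;            /* returning false punts to software */
}

static void i128_solid(int x1, int y1, int x2, int y2)
{
    /* Kick one rectangle through the engine. */
    (void)x1; (void)y1; (void)x2; (void)y2;
}

static void i128_done_solid(void) { /* flush / wait for engine idle */ }

static const FakeAccelHooks i128_hooks = {
    .prepare_solid = i128_prepare_solid,
    .solid         = i128_solid,
    .done_solid    = i128_done_solid,
    /* composite hooks left NULL: the core falls back to software */
};

int main(void)
{
    /* Pretend "server": push one solid fill through the hooks. */
    FakePixmap scratch = { 640, 480, 24, 0 };
    if (i128_hooks.prepare_solid(&scratch, 0x3 /* GXcopy */, ~0u, 0xffffff)) {
        i128_hooks.solid(0, 0, 640, 480);
        i128_hooks.done_solid();
    }
    return 0;
}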
Which one gets results faster? Which one provides a graduated path that gets
me both value now and more value later? Which one teaches me as I'm going,
making me a more effective X hacker?
You're saying I should pick option B, and I'm having trouble seeing how that's
a win for me or for anyone else. You get a DRI driver in three months either
way.
- ajax