Linux OpenGL ABI discussion

Ian Romanick idr at
Thu Sep 29 13:05:55 PDT 2005


(I corrected the CC address for the lsb-desktop list.  It was
incorrectly listed as being at, so none of this
thread has made it to the list where the discussion should be.)

Allen Akin wrote:
> On Thu, Sep 29, 2005 at 01:54:00PM -0400, Adam Jackson wrote:
> | The deeper issue here is whether it's actually useful to require some minimum
> | level of functionality even when large swaths of it will be software.  If I
> | don't have cube map support in hardware, do I really want to try it in
> | software?  Is that a useful experience for developers or for users?
>
> For OpenGL at least, history suggests the answer is usually "yes."  The
> argument goes back to the pre-1.0 days, when texture mapping was only
> available on fairly exotic hardware.  The decision was made to require
> it in the standard, and it turned out to be valuable on pure software
> implementations because (1) it was fast enough to be usable for a
> surprisingly large range of apps; (2) people with older hardware still
> had the option to use it, rather than having that option closed off
> up-front by the people defining the standard, and they found uses that
> were worthwhile; (3) development could occur on older hardware for
> deployment on newer hardware; (4) it served as a reference for hardware
> implementations and a debugging tool for apps.
>
> This experience was repeated with a number of other features as OpenGL
> evolved.
>
> If there's no consensus in the ARB about the desirability of a given
> piece of functionality, it tends to be standardized as an extension (or
> very rarely as a subset, like the Imaging Operations).  Extensions are
> optional, so they provide middle ground.  But eventually, if a piece of
> functionality proves valuable enough to achieve consensus, it moves into
> the OpenGL core and software implementations become mandatory.

This reflects OpenGL's goal of leading the hardware.  The idea is that
the most current version of OpenGL defines the features that the next
generation of hardware will support as standard.  In terms of making
functionality available and leading developers, this is a really good
strategy to take.

However, that's not (or at least shouldn't be) our goal.  Our goal is to
define the minimum that is required to be available on our platform.  As
such, that should reflect what actually exists on our platform.  From
talking to people at the various distros, the most common piece of
graphics hardware is the Intel i830 chipset (and derived chips like
i845G, i855GM, etc.).  That hardware is only capable of OpenGL 1.3.

If all applications were well behaved (i.e., allowed users to enable or
disable the use of individual hardware features like DOT3 texture
environment or shadow maps), this wouldn't be a problem.  That is sadly
not the case.

I think there is an alternative middle ground here that will satisfy
most people's concerns.  I propose that we require 1.2 as the minimum
supported version.  I also propose that we provide a standard mechanism
to "demand" that the driver advertise a user-specified version up to
1.5.  For example, a user might run an app like:


When 'a.out' queries the version string, it will get 1.5 even if the
driver has to do software fallbacks for the new 1.5 functionality.

This will prevent the unexpected performance cliff I mentioned in
another e-mail, and it will still provide more modern functionality to
users that need or want it.


More information about the xorg mailing list