[Mesa-dev] Mesa/Gallium overall design

Luca Barbieri luca at luca-barbieri.com
Sun Apr 11 18:53:40 PDT 2010


> include this deprecated GL feature

While it is deprecated by Khronos, I'm not sure it will ever go away.
nVidia explicitly states they have no intention of dropping the
compatibility profile, and even intend to keep it performing optimally.
While I couldn't find any statement from ATI, it seems unlikely they
would drop it, as doing so would risk sending a segment of the market
to the competition.

Since drivers are likely going to continue supporting it, applications
will tend to use it too.

Thus, it's not obvious it will ever go away. I'd guess it never will.

While hard to implement, it's actually a very convenient API for users.
With the OpenGL compatibility subset, it's easy to, say, draw a
Gouraud-shaded triangle.
With DirectX or non-compatibility OpenGL you have to write shaders,
set up CSOs and set up vertex buffers just for that simple task.
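For illustration, this is all the compatibility profile requires for
that triangle (a minimal sketch using immediate mode and per-vertex
colors; window/context setup omitted):

/* Gouraud-shaded triangle via the compatibility profile: no shaders,
 * no CSOs, no vertex buffers. GL_SMOOTH shading is the default, so the
 * per-vertex colors are interpolated across the triangle. */
#include <GL/gl.h>

static void draw_gouraud_triangle(void)
{
   glBegin(GL_TRIANGLES);
   glColor3f(1.0f, 0.0f, 0.0f); glVertex2f(-0.5f, -0.5f);
   glColor3f(0.0f, 1.0f, 0.0f); glVertex2f( 0.5f, -0.5f);
   glColor3f(0.0f, 0.0f, 1.0f); glVertex2f( 0.0f,  0.5f);
   glEnd();
}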

> as part of gallium, for instance --
> much more likely would be to put some effort into optimizing the VBO
> module, or creating a gallium-specific version of that code.

An option could be to add the interface to Gallium, but also provide
an auxiliary module implementing it in terms of the rest of the
interface.
This would basically amount to moving the VBO module into Gallium, and
wouldn't have any adverse effect on its usability.
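As a rough sketch of what I mean (all names here are invented for
illustration, not actual Gallium API):

/* Hypothetical: an immediate-mode hook on the driver context, with an
 * auxiliary-module fallback that routes the vertices through the
 * ordinary vertex-buffer path. Drivers with native support (e.g.
 * hardware push buffers) would fill in the hook themselves. */
#include <stddef.h>

struct imm_vertex { float pos[3]; float color[4]; };

struct drv_context {
   /* NULL unless the driver implements it natively */
   void (*draw_immediate)(struct drv_context *ctx, unsigned prim,
                          const struct imm_vertex *verts, unsigned count);
   void (*draw_vbo)(struct drv_context *ctx, unsigned prim,
                    const void *buf, size_t size, unsigned count);
};

/* Auxiliary module: implement immediate mode in terms of draw_vbo() */
static void util_draw_immediate(struct drv_context *ctx, unsigned prim,
                                const struct imm_vertex *verts,
                                unsigned count)
{
   ctx->draw_vbo(ctx, prim, verts, count * sizeof(*verts), count);
}

static void draw_immediate(struct drv_context *ctx, unsigned prim,
                           const struct imm_vertex *verts, unsigned count)
{
   if (ctx->draw_immediate)
      ctx->draw_immediate(ctx, prim, verts, count);   /* native path */
   else
      util_draw_immediate(ctx, prim, verts, count);   /* shared fallback */
}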

> If you were absolutely committed to making use of this hardware
> feature, one option might be to use the high vantage point of the
> target/ directory, and allow that stack-constructing code to have
> meaningful things to say about how to assemble the components above
> the gallium driver level.  For instance, the nvidia-gl targets could
> swap in some nv-aware VBO module replacement, which was capable of
> talking to hardware and interacting somehow with the nv gallium
> implementation.

This is an option, but I'm not sure it is really ideal.
In general, this is part of a broader problem, one that also involves
fixed-function support.

Several cards have features that are in neither the current Gallium
model nor the DirectX 10/GL3/GL4 one, and the classic Mesa interfaces
are being kept to support them, as opposed to extending Gallium with
them.
This tends to result in a codebase where Gallium is grafted onto the
OpenGL implementation, as opposed to the OpenGL implementation being
built around Gallium.

Of course, this is how Gallium historically began, but keeping the
classic interfaces makes it very hard to pursue refactoring toward
better integration of Mesa and Gallium.

Going forward, a choice can be made between:
1. Dropping support for non-DirectX 10-like features and cards
2. Continuing to make those available via the classic Mesa interfaces,
keeping the "split" mesa + mesa/st codebase, and maybe adding
"driver-specific state tracker extensions" like the one proposed
3. Making Gallium capable of supporting all hardware features and all
programming models (and porting all classic Mesa drivers to it)

Currently, (2) is being chosen, which essentially preserves the
historical status quo.
In my opinion, it would be worth considering switching to (3) instead,
and even (1) might be better than (2).

This would make Gallium the single interface for all 3D hardware and
APIs, making it possible to significantly streamline Mesa around it,
leaving it cleaner and more efficient.
Also, all driver efforts would then be concentrated on Gallium, as
opposed to being split between it and classic Mesa, hopefully
resulting in it improving at a faster rate and actually becoming the
"definitive" solution it is supposed to be, as opposed to its current
"somewhat experimental" status.

Clearly this is a lot of work, and that may well prove an impediment
to doing it, but I think in principle it is something worth
considering.

Some of this was already touched upon in the "Provide dri shared
library building and SDK installation" thread, but only tangentially.

