[Mesa-dev] Mesa/Gallium overall design

Luca Barbieri luca at luca-barbieri.com
Tue Apr 13 04:47:16 PDT 2010


>> As I said, SVGA doesn't count; it's not real hw. It relies on much
>> more stable host drivers, yes, and is a great test platform for
>> running DX conformance, but you cannot use it as a parallel to real
>> hardware.
>
> Why not? It looks like a GPU. It acts like a GPU. (Maybe it even smells
> like a GPU? :) It must be a GPU.
>
> I agree a showcase driver for real hardware would be preferable, but the
> above seems like an unfair dismissal of the svga driver.

The problem is that svga does not address the issue of whether the
performance of ad-hoc proprietary OpenGL drivers (nvidia and fglrx)
can be matched with Gallium, unless svga manages to achieve
performance very close to that of bare hardware.

How fast is svga with OpenGL on a Linux guest versus native OpenGL
with the nVidia proprietary drivers? (On released VMware products, so
that the numbers are public information.)

Clearly, performance is the only real issue there: anything else can
be solved by simply extending Gallium. But if you discover that a
major portion of CPU time is going into translating OpenGL to
Gallium, fixing that might require a complex, massive refactoring of
everything.

Right now much bigger problems (e.g. memory management) generally
make this impossible to tell by profiling on any hardware, but at
some point it will become clear, possibly with disappointing results.

The nv50 driver might be reaching this point, and an attempt was made
to write a classic Mesa driver alongside it so the two could be
compared, but that effort seems to have been abandoned before it
produced such information.
CCing Christoph Bumiller for this.

If you look at Mesa and the Mesa Gallium state tracker from the
perspective of minimizing CPU cycles and cache misses spent in the
drivers, you will likely be struck by the sheer amount of
inefficiency: useless conversions wasting CPU time, and an
unnecessary proliferation of objects (some of them large) in memory,
causing all the obvious allocation and cache-behavior issues.
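To make this concrete, here is a contrived, self-contained sketch
(these names are hypothetical, not actual Mesa code) of the per-draw
translation pattern in question: the same GL state is re-read,
re-converted and re-stored on every draw call, even when nothing has
changed since the previous one.

/* Hypothetical illustration, not actual Mesa/Gallium code. */
struct gl_like_state   { unsigned blend_src, blend_dst; };
struct pipe_like_blend { unsigned src_factor, dst_factor; };

static unsigned translate_factor(unsigned gl_enum)
{
   switch (gl_enum) {
   case 0x0302: return 1;  /* GL_SRC_ALPHA */
   case 0x0303: return 2;  /* GL_ONE_MINUS_SRC_ALPHA */
   default:     return 0;  /* treat everything else as GL_ZERO */
   }
}

/* Called from the draw path: branches, loads and stores spent
 * re-deriving state that has not changed since the last draw. */
static void validate_blend(const struct gl_like_state *gl,
                           struct pipe_like_blend *out)
{
   out->src_factor = translate_factor(gl->blend_src);
   out->dst_factor = translate_factor(gl->blend_dst);
}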

And if you read what nVidia has to say on the topic, at
http://developer.nvidia.com/object/bindless_graphics.html, you'll
realize that the Gallium design does not take such concerns much into
account (except for the idea of using CSOs).
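CSOs are the one part of the design that does address this: the
translation cost is paid once when the state object is created, and
the per-draw hot path merely binds a pre-baked object by pointer. A
minimal sketch of that pattern, with simplified stand-in types rather
than the real pipe_context interface:

#include <stdlib.h>

struct blend_templ { unsigned src_factor, dst_factor; };
struct blend_cso   { unsigned hw_words[2]; };  /* pre-packed hw state */

/* Translation (as in the previous sketch) happens once, here... */
static struct blend_cso *create_blend_state(const struct blend_templ *t)
{
   struct blend_cso *cso = malloc(sizeof(*cso));
   cso->hw_words[0] = t->src_factor;  /* imagine real hw packing here */
   cso->hw_words[1] = t->dst_factor;
   return cso;
}

/* ...so binding on the draw path is just a pointer swap. */
static const struct blend_cso *current_blend;

static void bind_blend_state(const struct blend_cso *cso)
{
   current_blend = cso;
}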

Whether this is relevant or not is unclear, but it is the real
concern IMHO.
It would still be fixable, but fixing it would require a much more
significant willingness to refactor and rewrite things; in
particular, I doubt the Mesa data structures that classic drivers
need could still be supported after such a rewrite.
Unless, of course, one makes the whole issue moot by exposing a
different API than OpenGL, such as DirectX 10, which fits Gallium
much better; but that is an even bigger overall shift in direction.

