[Mesa-dev] [Mesa3d-dev] r300g: hack around issue with doom3 and 0 stride
Keith Whitwell
keith.whitwell at googlemail.com
Sun Apr 11 10:40:07 PDT 2010
On Sun, Apr 11, 2010 at 6:38 PM, Keith Whitwell
<keith.whitwell at googlemail.com> wrote:
> On Sun, Apr 11, 2010 at 9:33 AM, Luca Barbieri <luca at luca-barbieri.com> wrote:
>> Why?
>>
>> At least all nVidia cards directly support this, and it allows code like this:
>>
>> hw_set_vertex_attrib(idx, v)
>> {
>>     write command to set vertex attrib on GPU fifo
>>     write idx on GPU fifo
>>     write v on GPU fifo
>> }
>>
>> glColor(v)
>> {
>>     pipe->set_vertex_attrib(COLOR, v);
>> }
>>
>> Instead of this simple approach, we currently use the "vbo
>> module", which attempts to store all the GL attributes in a vertex
>> buffer, with all kinds of unnecessary complexity like having to resize
>> the buffer in the middle of a primitive because you just used another
>> vertex attribute, having to deal with memory allocation, vertex
>> element CSO hashing and so on.
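To picture the mid-primitive resize being complained about here, a rough sketch (every name in this is made up for illustration; it is not Mesa's actual vbo-module code): when a new attribute first shows up, every vertex already accumulated has to be re-copied at the wider stride.

```c
#include <stdlib.h>
#include <string.h>

/* Hypothetical stand-in for the vbo module's CPU-side buffer. */
struct vbo_sketch {
    float   *verts;           /* interleaved CPU-side vertex data */
    unsigned num_verts;       /* vertices accumulated so far */
    unsigned floats_per_vert; /* current stride, in floats */
    unsigned capacity;        /* allocated vertices */
};

/* Widen the per-vertex stride, re-copying all existing vertices.
 * This is the "resize the buffer in the middle of a primitive"
 * cost: O(num_verts) copy work just because one more attribute
 * got used.  Returns 0 on success, -1 on allocation failure. */
static int vbo_widen_stride(struct vbo_sketch *vbo, unsigned new_floats)
{
    float *nv = calloc((size_t)vbo->capacity * new_floats, sizeof(float));
    if (!nv)
        return -1;
    for (unsigned i = 0; i < vbo->num_verts; i++)
        memcpy(nv + (size_t)i * new_floats,
               vbo->verts + (size_t)i * vbo->floats_per_vert,
               vbo->floats_per_vert * sizeof(float));
    free(vbo->verts);
    vbo->verts = nv;
    vbo->floats_per_vert = new_floats;
    return 0;
}
```

The direct-to-fifo path above avoids all of this because the hardware, not the driver, tracks the current attribute set.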
>>
>> Of course this results in not-so-good performance, which could
>> otherwise be avoided with the approach described above (guess what the
>> binary drivers use).
>>
>> I have no idea whether Radeon or Intel GPUs support this, but it
>> doesn't seem unlikely since it is the basic OpenGL model.
>
> Nvidia's definitely the odd one out here. No other hardware I'm aware
> of has this behaviour -- though perhaps the old SGI workstations also
> worked this way.
>
> I think this falls into the general question of how to make use of
> special features a particular piece of hardware offers without
> raising the interface to a level where it ceases to provide a
> meaningful intermediate abstraction. Right now our answer is that we
> don't try to: rendering goes through the layered interfaces, and the
> hardware feature is ignored.
> There's zero likelihood that we'll suddenly decide to
> include this deprecated GL feature as part of gallium, for instance --
> much more likely would be to put some effort into optimizing the VBO
> module, or creating a gallium-specific version of that code.
>
> If you were absolutely committed to making use of this hardware
> feature, one option might be to use the high vantage point of the
> target/ directory, and allow that stack-constructing code to have
> meaningful things to say about how to assemble the components above
> the gallium driver level. For instance, the nvidia-gl targets could
> swap in some nv-aware VBO module replacement, which was capable of
> talking to hardware and interacting somehow with the nv gallium
> implementation.
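For what it's worth, the target/-level swap could look something like this sketch (every type and function name here is invented; nothing of the sort exists in Mesa today): the stack-constructing code picks a vertex-submission backend once, at assembly time, based on what the hardware can do.

```c
/* Hypothetical backend vtable a target/ entry point might choose
 * between.  Purely illustrative -- these names are not Mesa's. */
struct vbo_backend {
    const char *name;
    void (*set_attrib)(unsigned idx, const float *v);
};

/* Generic path: accumulate attributes into a CPU-side vertex buffer. */
static void generic_set_attrib(unsigned idx, const float *v)
{
    (void)idx; (void)v;   /* would append to the vbo-module buffer */
}

/* nv path: write the attribute straight to the GPU fifo. */
static void nv_set_attrib(unsigned idx, const float *v)
{
    (void)idx; (void)v;   /* would emit fifo commands directly */
}

static const struct vbo_backend vbo_generic   = { "vbo-generic",   generic_set_attrib };
static const struct vbo_backend vbo_nv_direct = { "vbo-nv-direct", nv_set_attrib };

/* The target/ code, knowing which driver it is assembling a stack
 * for, selects the backend once up front. */
static const struct vbo_backend *
target_select_vbo(int hw_has_immediate_attribs)
{
    return hw_has_immediate_attribs ? &vbo_nv_direct : &vbo_generic;
}
```

The layering question Keith raises is exactly whether the nv backend can be written without reaching around the gallium interface into driver internals.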
>
> I'm not sure if that will be a net positive thing for the
> maintainability of the nv drivers, or whether the whole thing would
> collapse in an unmaintainable heap of cross-dependencies and layering
> violations... Personally I'd be more interested in improving the VBO
> code.
>
> Keith
>