[Mesa-dev] Mesa/Gallium overall design

Corbin Simpson mostawesomedude at gmail.com
Mon Apr 12 00:41:36 PDT 2010


On Mon, Apr 12, 2010 at 12:04 AM, Luca Barbieri <luca at luca-barbieri.com> wrote:
> Well, there are a lot of things that Gallium doesn't do well compared
> to other APIs, mostly OpenGL:
> 1. Support for fixed function cards in some way, either:
> 1a. (worse) New Gallium interfaces to pass fixed function pipeline
> states, along with an auxiliary module to turn them into shaders
> 1b. (better) An auxiliary module doing magic with LLVM to fit shaders
> into the fixed function pipeline

No. One of the central design goals of Gallium is to provide a
shaderful pipeline. If you wanna do it with register combiners, you
could try, but frankly we've already talked this over and decided not
to walk that plank.

> 2. Support for setting individual states instead of full state
> objects, for APIs like OpenGL where that works better

State is collated into constant state objects. Are there really apps
(or even serious use cases) where individual pieces of state are
constantly in flux like this?
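
To put it concretely, this is roughly the pattern a state tracker
follows: bake a group of related bits into one object once, then
rebind it cheaply at draw time. Sketch only; the exact struct fields
vary between Gallium revisions.

   #include <string.h>
   #include "pipe/p_context.h"
   #include "pipe/p_defines.h"
   #include "pipe/p_state.h"

   static void *
   make_depth_test_cso(struct pipe_context *pipe)
   {
      struct pipe_depth_stencil_alpha_state dsa;

      /* Collate all the depth-test bits into one description... */
      memset(&dsa, 0, sizeof(dsa));
      dsa.depth.enabled = 1;
      dsa.depth.writemask = 1;
      dsa.depth.func = PIPE_FUNC_LEQUAL;

      /* ...and create the immutable object once.  Every draw after
       * that just calls pipe->bind_depth_stencil_alpha_state(). */
      return pipe->create_depth_stencil_alpha_state(pipe, &dsa);
   }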

> 3. Immediate vertex submission

Already addressed this. Doing it in-driver for the HW that supports it
isn't that tough.

> 4. More powerful and better defined clear interface with scissor/MRT support

I'm not sure how scissors fit in, other than that you probably have to
hack them in on your HW to make them work with clears, but this isn't
really a problem any longer. If you want to involve e.g. MRTs in your
clears, patch util_clear to do it. Also, how is this a GL thing?
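
For the record, the fallback version of a scissored clear isn't deep:
only touch the pixels inside the box. Purely illustrative (linear
32bpp surface assumed); a real driver would push a scissored quad
through the 3D engine instead.

   #include <stdint.h>

   /* Clear only the pixels inside the scissor box [x0,x1) x [y0,y1).
    * Not how util_clear is actually structured; just the idea. */
   static void
   clear_scissored(uint32_t *pixels, unsigned pitch_in_pixels,
                   unsigned x0, unsigned y0, unsigned x1, unsigned y1,
                   uint32_t value)
   {
      unsigned x, y;

      for (y = y0; y < y1; y++)
         for (x = x0; x < x1; x++)
            pixels[y * pitch_in_pixels + x] = value;
   }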

> 5. Perhaps in theory more powerful 2D interfaces (e.g. format
> conversion, stretching, color masks, ROPs, Porter-Duff, etc), emulated
> over 3D by the blitter module, to better implement 2D apis

We've talked several times about a new pipe interface for this stuff. For a
majority of chipsets, the features you listed all require a 3D engine,
but that doesn't preclude a new pipe built on pipe_context. I guess
use cases would be nice before we go down this path; the only consumer
of all these that I can think of is Xorg, and we've already got that
covered.
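
For what it's worth, most of that list is trivial per-pixel math once
a shaderful 3D engine sits underneath it; e.g. Porter-Duff "over" with
premultiplied alpha is one multiply-add per channel:

   /* Porter-Duff "over", premultiplied alpha, per color channel:
    * dst' = src + dst * (1 - src_alpha) */
   static float
   pd_over(float src, float dst, float src_alpha)
   {
      return src + dst * (1.0f - src_alpha);
   }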

> 6. Better handling of shader linkage (I have some work on this)

Is the link-on-render semantic not strong enough? I remember that last
time your grievances were largely aimed at Mesa and GLSL; do we really
need Gallium features for this?

On the other hand, Gallium should be permitted to fail shader
compiles; most APIs permit this in one way or another.
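
Just to spell out what link-on-render means here: at draw time the
driver matches each fragment shader input against the vertex shader
outputs by (semantic name, semantic index) and builds a remap table,
failing the link if an input has no producer. A simplified sketch; the
structs stand in for the real TGSI declarations.

   struct shader_io {
      unsigned semantic_name;    /* e.g. TGSI_SEMANTIC_GENERIC */
      unsigned semantic_index;
   };

   /* Match each FS input to the VS output with the same semantic pair.
    * Returns -1 if an input is unmatched.  Simplified illustration. */
   static int
   link_stages(const struct shader_io *vs_out, unsigned nr_vs_out,
               const struct shader_io *fs_in, unsigned nr_fs_in,
               unsigned *map /* one entry per FS input */)
   {
      unsigned i, j;

      for (i = 0; i < nr_fs_in; i++) {
         for (j = 0; j < nr_vs_out; j++) {
            if (vs_out[j].semantic_name == fs_in[i].semantic_name &&
                vs_out[j].semantic_index == fs_in[i].semantic_index)
               break;
         }
         if (j == nr_vs_out)
            return -1;      /* unmatched input: fail the link */
         map[i] = j;        /* FS input i is fed by VS output j */
      }
      return 0;
   }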

> 7. Some broadly used and *good* way of managing graphics memory (e.g.
> pipebuffer improved and widely adopted)

Um. I'm probably opening a can of worms here, but this has nothing to
do with GL.

> 8. Conditionals/predicates in TGSI (for NV_*_program)

Hm, I could have sworn we have all the useful conditionals. I know
that some instructions were removed, but they were largely useless or
redundant.
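
For reference, the predicate-style selection that NV_*_program wants
is already expressible with plain TGSI CMP, which per component does
roughly this:

   /* Per-component behaviour of the TGSI CMP instruction:
    * dst = (src0 < 0.0) ? src1 : src2 */
   static float
   tgsi_cmp(float src0, float src1, float src2)
   {
      return src0 < 0.0f ? src1 : src2;
   }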

> 9. Half float registers and precision specification in TGSI (for NV_*_program)

I think this should go under a general conformance vs. performance vs.
quality switch.
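
To make the precision side of that concrete: an "H" register keeps a
5-bit exponent and 10 mantissa bits. A quick and dirty float-to-half
conversion (no rounding, denormals flushed) shows exactly what gets
thrown away:

   #include <stdint.h>
   #include <string.h>

   /* Truncating float -> half: 1 sign, 5 exponent, 10 mantissa bits.
    * No rounding, denormals flushed to zero. */
   static uint16_t
   float_to_half(float f)
   {
      uint32_t bits;
      uint16_t sign, mant;
      int32_t exp;

      memcpy(&bits, &f, sizeof bits);
      sign = (bits >> 16) & 0x8000;
      exp  = (int32_t)((bits >> 23) & 0xff) - 127 + 15;
      mant = (bits >> 13) & 0x3ff;

      if (exp <= 0)
         return sign;                  /* underflow: flush to zero */
      if (exp >= 31)
         return sign | 0x7c00;         /* overflow, inf, NaN -> inf */
      return sign | (uint16_t)(exp << 10) | mant;
   }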

> 10. Maybe an interface to explicitly set constants instead of dealing
> with constant buffers (not sure about this, perhaps constant buffers
> are fine everywhere)

We talked about this already. Constant buffers aren't ideal on
transitional hardware, but they work fine.
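
Concretely, the usual way to paper over a poke-one-constant API on top
of constant buffers is a shadow copy plus a dirty flag, re-uploaded at
validate time. Rough sketch, all names made up for illustration:

   #include <string.h>

   /* Hypothetical shadow of one constant buffer: individual
    * glUniform-style updates land here, and the whole thing gets
    * re-uploaded (and bound with set_constant_buffer()) the next time
    * we validate state for a draw. */
   struct const_shadow {
      float data[256][4];     /* vec4 slots */
      unsigned dirty;
   };

   static void
   set_one_constant(struct const_shadow *cb, unsigned slot,
                    const float v[4])
   {
      memcpy(cb->data[slot], v, 4 * sizeof(float));
      cb->dirty = 1;
   }

   static void
   validate_constants(struct const_shadow *cb)
   {
      if (!cb->dirty)
         return;
      /* upload cb->data into the real buffer resource here */
      cb->dirty = 0;
   }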

> Of course there are also the missing features that DirectX 10/11 has, like:
> 1. Mipmap generation

SGIS_generate_mipmap isn't good enough? It's already implemented in
Mesa, so only D3D 10+ trackers would benefit from a Gallium-level
hook, and they already have that implemented on their side.
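
And generating the levels is not much code in the first place: each
level is just a 2x2 box filter of the one above it, which
util_gen_mipmap runs through the 3D engine. Purely illustrative (one
8-bit channel, power-of-two sizes):

   /* Downsample one mip level into the next: 2x2 box filter, single
    * 8-bit channel, power-of-two dimensions assumed. */
   static void
   downsample_level(const unsigned char *src, unsigned src_w,
                    unsigned src_h, unsigned char *dst)
   {
      unsigned dst_w = src_w / 2, dst_h = src_h / 2;
      unsigned x, y;

      for (y = 0; y < dst_h; y++) {
         for (x = 0; x < dst_w; x++) {
            unsigned sum = src[(2*y)     * src_w + 2*x]
                         + src[(2*y)     * src_w + 2*x + 1]
                         + src[(2*y + 1) * src_w + 2*x]
                         + src[(2*y + 1) * src_w + 2*x + 1];
            dst[y * dst_w + x] = (unsigned char)(sum / 4);
         }
      }
   }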

> 2. Stream out/transform feedback and DrawAuto

GL_FEEDBACK modes? I'm not sure if any APIs have them all in a style
that can be unified.

> 3. Support for creating display lists, i.e. having a CSO representing
> an hardware pushbuffer (DX11 has that)

Not sure if we want that, but that's okay; D3D 11 is far in the future
for us. :3

> 4. Compute shaders

We'll talk about this later. Suffice it to say that we more or less
all agree pipe_context isn't good for this.

> 5. Tessellation

Geom shaders are already half-implemented, aren't they? You'd have to
ask Zack about that, but ISTR that he's got them working.

> 6. Multisampling, including alpha-to-coverage and all
> hardware-specific tricks like CSAA

Too much hardware-specific stuff in there. SSAA (straight-up
supersampling) should be possible right now, but I think the current
compromise of a single bit to request *some kind* of multisampled
buffer is fine.
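
Alpha-to-coverage at least isn't very hardware-specific; it boils down
to turning the fragment's alpha into a sample mask, roughly like this
(real hardware also dithers the pattern across pixels):

   /* Enable roughly (alpha * nr_samples) of the low sample-mask bits.
    * Alpha assumed in [0,1], nr_samples <= 16.  Illustration only. */
   static unsigned
   alpha_to_coverage(float alpha, unsigned nr_samples)
   {
      unsigned n = (unsigned)(alpha * (float)nr_samples + 0.5f);

      if (n > nr_samples)
         n = nr_samples;
      return (1u << n) - 1;
   }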

> 7. 2D texture arrays

I have no idea what these are.

> 8. Texture sampling in geometry shaders

Wait, we can do that? Wicked. That'll be fun.

> 9. Indirect instanced drawing (see DX11 DrawInstancedIndirect)

Ugh, moar D3D 11? Let's catch up to D3D 10 first.

> 10. DX11 shader interfaces

These differ significantly from what we've got? I don't know D3D 11 yet.

> 11. Selection of viewports/render target in 2D texture array from the
> geometry shader

I'm seeing a pattern here.

> 12. More TGSI instructions (pointer load/stores, fp64, atomic ops,
> shared memory, etc.)

This isn't happening on the current generations of supported hardware,
and it'll likely be delayed for a bit on newer stuff.

> There are also likely many other that didn't come to mind immediately.
>
>> I wasn't trying to be antagonistic, but Gallium is supposed to be a
>> "common interface" and "abstraction," so features specific to one
>> chipset are invariably going to fall by the wayside. For example, r500
>> Radeons (and maybe r400?) have a filter4 kernel that they could use to
>> implement e.g. fast Lanczos sinc on textures, but the relevant Gallium
>> state was never used or tested and I think it's been nuked.
>
> Well, I don't think Gallium should be the intersection of hardware/API
> features, but rather the union of all hardware/API features.
> Otherwise, obviously, it's going to suck on all cards and for all APIs.

I think "it's going to suck on all cards and for all APIs" is a
constant unchanged by the whims of driver developers and hardware
manufacturers.

> So if Radeon can do Lanczos sampling/rescaling and there is an OpenGL
> extension that exposes it, I think Gallium should be extended to
> allow implementing the OpenGL extension over it, and to allow
> implementing the driver portions.
>
> Note that it is generally possible to do this intelligently, so that
> cards and APIs that lack the feature don't have an undue burden
> imposed on them because of it.
> The usual way is to add a capability bit and/or provide an auxiliary
> module emulating the special functionality on top of a more common
> one.
>
> For instance, for the vertex submission problem, you could just
> introduce a Gallium-level module like the Mesa VBO module, which
> non-nVidia drivers could enable with only a few lines of code (using
> the settings most appropriate for them).
>
> Of course this may not get done due to not being worth the time to
> implement it, but that's a different issue.

No, that's the entire point. If we had the time to implement things,
we wouldn't still be in the GL 2.x era.

> BTW, for instance, I sent a patch to change the Gallium sampler state
> to support nearest-neighbor anisotropic filtering on nVidia cards (by
> removing ANISO as a special filter), and it was merged some time ago,
> so it seems this kind of thing is possible...

I'm gonna point you to a discussion we had several weeks ago about
GLSL linking, in which it was opined that some nVidia hardware lacked
programmable swizzles and routing tables for linking shaders,
requiring shader compilers to be augmented with linking and selection
code to properly match outputs and inputs across shaders. Was a
Gallium-level module implemented to perform the desired shader
modifications, or was it done privately in the driver? Was the Gallium
API changed as a result of the discussion?

There's a balance here between common features and common needs.

-- 
When the facts change, I change my mind. What do you do, sir? ~ Keynes

Corbin Simpson
<MostAwesomeDude at gmail.com>

