[Mesa-dev] [RFC] ARB_gl_spirv and NIR backend for radeonsi
Nicolai Hähnle
nhaehnle at gmail.com
Sun May 21 10:48:23 UTC 2017
Hi all,
I've been looking into ARB_gl_spirv for radeonsi. I don't fancy
re-inventing the ~8k LOC of src/compiler/spirv, and there's already a
perfectly fine SPIR-V -> NIR -> LLVM compiler pipeline in radv, so I
looked into re-using that.
It's not entirely straightforward because radeonsi and radv use
different "ABIs" for their shaders: different prolog/epilog shader
parts, different user SGPR allocations, descriptor loads that work
differently (obviously), and so on.
Still, it's possible to separate the ABI from the meat of the NIR ->
LLVM translation. So here goes...
The Step-by-Step Plan
=====================
1. Add an optional GLSL-to-NIR path (controlled by R600_DEBUG=nir) for
very simple VS-PS pipelines.
2. Add GL_ARB_gl_spirv support to Mesa and test it on simple VS-PS
pipelines.
3. Fill in all the rest:
3a. GL 4.x shader extensions (SSBOs, images, atomics, ...)
3b. Geometry and tessellation shaders
3c. Compute shaders
3d. Tests
I've started with step 1 and got basic GLSL 1.30-level vertex shaders
working via NIR. The code is here:
https://cgit.freedesktop.org/~nh/mesa/log/?h=nir
The basic approach is to introduce `struct ac_shader_abi' to capture the
differences between radeonsi and radv. In the end, the entry point for
NIR -> LLVM translation will simply be:
void ac_nir_translate(struct ac_llvm_context *ac,
                      struct ac_shader_abi *abi,
                      struct nir_shader *nir);
Setting up the LLVM function with its parameters is still considered
part of the driver.
Questions
=========
1. How do we get good test coverage?
------------------------------------
A natural candidate would be to add a SPIR-V execution mode for the
piglit shader_runner. That is, use build scripts to extract shaders from
shader_test files and feed them through glslang to get .spv files, and
then load those from shader_runner if a `-spirv' flag is passed on the
command line.
This immediately runs into the difficulty that GL_ARB_gl_spirv wants SSO
linking semantics, and I'm pretty sure the majority of shader_test files
don't support that -- if only because they don't set a location on the
fragment shader color output.
Some ideas:
1. Add a GL_MESA_spirv_link_by_name extension
2. Have glslang add the locations for us (probably difficult because
glslang seems to be focused on one shader stage at a time.)
3. Hack something together in the shader_test-to-spv build scripts via
regular expressions (and now we have two problems? :-) )
4. Other ideas?
2. What's the Gallium interface?
--------------------------------
Specifically, does it pass SPIR-V or NIR?
I'm leaning towards NIR, because then specialization, mapping of uniform
locations, atomics, etc. can be done entirely in st/mesa.
On the other hand, Pierre Moreau's work passes SPIR-V directly. On the
third hand, it wouldn't be the first time that clover does things
differently.
3. NIR vs. TGSI
---------------
It is *not* a goal for this project to use NIR for normal GLSL shaders.
We'll keep the TGSI backend at least for now. But it makes sense to
think ahead.
A minor disadvantage of NIR is that the GLSL-to-NIR path is not as solid
as the GLSL-to-TGSI path yet, but this shouldn't be too difficult to
overcome.
The major disadvantage of NIR is that it doesn't have serialization.
radeonsi uses the fact that TGSI *is* a serialization format for two things:
- The internal shader cache, which avoids re-compiling the same shader
over and over again when it's linked into different programs. (This part
only needs a strong hash.)
- The (disk) shader cache stores the TGSI so that it's available in case
additional shader variants need to be compiled on the fly.
Some ideas:
1. Add a serialization format for NIR. This is the most straightforward
solution, but it's a lot of work for a comparatively small feature.
1b. Use SPIR-V as a serialization format for NIR. This is more work for
serialization than a custom format due to the ceremony involved in
SPIR-V, but we already have deserialization. Also, it'd implicitly give
us an alternative GLSL-to-SPIR-V compiler, which is kind of neat.
2. Don't store TGSI/NIR in the (disk) shader cache. The reason we have
to do that right now is that radeonsi does multi-threaded compilation,
and so we cannot fall back all the way to GLSL compilation if we
need to compile a new shader variant. However, once we properly
implement ARB_parallel_shader_compile, this issue will go away.
This doesn't address the internal shader cache, though.
3. Have st/mesa recognize when the same shader is linked into multiple
programs and avoid generating duplicate shader CSOs where possible. This
is non-trivial mostly because linking can map shader I/O into different
places, but I imagine that it would cover the majority of the cases
caught by radeonsi's internal shader cache.
4. Something else for the internal shader cache?
5. Use TGSI after all. TGSI really isn't such a bad format. It does have
some warts: pretending that everything is a vec4 means that f64 support
is already annoying and f16 support is going to be annoying too; the way
we do UBOs should probably be rewritten (it cannot support std430
packing, which would be nice to have); and real (non-inline) function
support will be nasty if we ever get there. But TGSI works today, and
it's straightforward.
That said, NIR is nicer in several ways. Not using it just because it
can't do serialization would be sad, not to mention those ~8k LOC of
SPIR-V-to-NIR. We could go SPIR-V-to-NIR-to-TGSI, of course, but that's
not exactly great for compiler performance.
All of this doesn't necessarily block the project of adding
GL_ARB_gl_spirv, but it'd be nice to think a bit ahead.
So, what does everybody think? I'm particularly interested in the
nouveau folks' take on the whole NIR vs. TGSI thing, and any ideas on
how to address the above questions.
Cheers,
Nicolai
--
Learn what the world really is like,
but never forget what it ought to be.