[Mesa-dev] Any plans to add type precision to GLSL IR (glsl_type etc.)?

Ian Romanick idr at freedesktop.org
Wed Oct 13 11:07:43 PDT 2010

Aras Pranckevicius wrote:
> Hi,
> For GLSL optimizer (http://github.com/aras-p/glsl-optimizer) that is
> built on top of Mesa's GLSL2, I need to add native OpenGL ES 2.0
> precision support. Looks like right now Mesa's GLSL can parse
> precision qualifiers just fine (when OpenGL ES 2.0 option is used),
> but it does not do anything with them beyond the AST.
> Are there any plans to add precision qualifiers to glsl_type and
> relevant parts of GLSL IR? (some desktop GPUs internally can operate
> on different precisions, and supporting that could be a performance
> improvement in the future)
> If there are no plans to support precision qualifiers beyond AST
> anytime soon -- are there some caveats I need to know before I try to
> implement it myself? Or some suggestions on how to approach the
> problem?

As far as I'm aware, the only desktop hardware that supports multiple
precisions is NVIDIA's.  Also as far as I'm aware, the driver for NVIDIA
hardware in Mesa doesn't support the low-precision formats.  My guess
is that this is primarily because neither TGSI nor Mesa IR has any way
to communicate precision to the driver.  So, even if we did track
precision information after the AST, none of the drivers would do
anything with it.

That said, I don't think it would be difficult to add precision tracking
to the IR.  My first thought is that a field should be added to
ir_variable to track the declared precision of the variable.  A similar
field should then be added to ir_rvalue to track the precision of the
expression.  I believe the various constructors could do all the work to
determine the resulting precision of, for example, highp+lowp.
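To illustrate the idea, here's a minimal sketch of what that constructor logic might look like.  The enum and the combine_precision() helper are hypothetical (Mesa's IR has no such type today); the rule shown — the result takes the highest precision among the operands, and an unqualified operand inherits the other's precision — is just one plausible choice:

```cpp
#include <algorithm>

/* Hypothetical precision enum; the names mirror the GLSL ES 2.0
 * qualifiers.  Ordered so that a larger value means higher precision.
 */
enum glsl_precision {
   GLSL_PRECISION_NONE = 0,  /* no qualifier declared */
   GLSL_PRECISION_LOW,
   GLSL_PRECISION_MEDIUM,
   GLSL_PRECISION_HIGH
};

/* One plausible rule for the precision of an expression such as
 * highp + lowp: take the highest precision among the operands,
 * falling back to the qualified operand when the other has no
 * qualifier.  An ir_expression constructor could call this to fill
 * in the precision field on the resulting ir_rvalue.
 */
static glsl_precision
combine_precision(glsl_precision a, glsl_precision b)
{
   if (a == GLSL_PRECISION_NONE)
      return b;
   if (b == GLSL_PRECISION_NONE)
      return a;
   return std::max(a, b);
}
```

Under this rule, highp + lowp yields highp, and adding an unqualified temporary to a mediump variable yields mediump.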

The real trick will be coming up with credible test cases.

