[PATCH 02/10] compiler.h: add is_const() as a replacement of __is_constexpr()

Rasmus Villemoes ravi at prevas.dk
Mon Dec 9 09:59:44 UTC 2024


On Sat, Dec 07 2024, Linus Torvalds <torvalds at linux-foundation.org> wrote:

> On Sat, 7 Dec 2024 at 04:24, Vincent Mailhol <vincent.mailhol at gmail.com> wrote:
>>
>> > No good - expands everything twice.
>>
>> And? __is_const_zero() does not evaluate its arguments, so no side effect:
>
> No, the problem is literally the expansion.
>
> Double expansion of these fundamental helpers gets exponential,
> because they are used in various nested ways in other fundamental
> helpers.
>
> That's why we then spent so much effort on trying to clean up the
> min/max macros, because a single line of code would expand to
> literally tens of megabytes of horrific expansions.
>
> And the problem with these things is that you can't make them inline
> functions, so they have to be macros, and then you build up other
> macros using them (like that "clamp()" macro), and it really gets
> horrendous and affects the build time.
>
> And yes, it is very sad. Particularly since a compiler would have a
> really easy time with some nice helper builtins.
>
> Of course, often the compiler *does* have helper builtins, but we
> can't use them, because they aren't *quite* the right thing.
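
(For anyone following along, here is a tiny self-contained sketch of the
kind of blow-up being described. The helper names below are made up for
illustration; they are nothing like the kernel's actual definitions.)

  /*
   * Each helper mentions its argument twice; the kernel's
   * min()/max()/clamp() machinery ends up doing the same once all the
   * type- and constness-checking is folded in, which is where the
   * multi-megabyte expansions come from.  The number of textual copies
   * of the innermost argument squares at every level: 2, 4, 16, 256, ...
   * Run this through "gcc -E" to see the effect.
   */
  #define MENTIONS_TWICE(x)  ((x) + (x))
  #define LEVEL1(x)          MENTIONS_TWICE(MENTIONS_TWICE(x))
  #define LEVEL2(x)          LEVEL1(LEVEL1(x))
  #define LEVEL3(x)          LEVEL2(LEVEL2(x))

  int expansion_demo(int v)
  {
          /* Already 256 copies of "v" after preprocessing. */
          return LEVEL3(v);
  }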

One thing I've been thinking about whenever this comes up is: what if
the compilers gave us (and likewise a __builtin_min):

  __builtin_max(T, e1, e2, ...)
  __builtin_max(e1, e2, ...)

with T being a type and e1, e2, ... expressions? The second form would be
the first with T taken as the result of the usual arithmetic conversions
on the types of the expressions, and the first form would have these
semantics:

(1) If all the expressions are integer constant expressions (ICEs), so
    is the whole thing (a sketch below shows why that matters).

(2) It's a compile-time error if the values of the expressions are not
    guaranteed to fit in T (that also applies in case (1)), but this
    diagnostic should not come from the front end; it should only be
    issued after the optimizations have had a chance to run.

(3) Obviously: Every expression is evaluated exactly once and the result
    is the maximum of those, of type T.
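
To make rule (1) concrete, here is a small self-contained sketch; the
macro and names below are made up for illustration and are not the
kernel's max(). An integer constant expression can go where a value that
is merely known to the optimizer cannot: file-scope array bounds, static
initializers, case labels. The naive ?:-based macro preserves ICE-ness
but mentions (and, for non-constant arguments, evaluates) each argument
twice, which is exactly the tension the builtin would resolve.

  /* Illustrative only - not the kernel's max(). */
  #define NAIVE_MAX(a, b)  ((a) > (b) ? (a) : (b))

  #define BUF_A  64
  #define BUF_B  100

  /* Needs an ICE: array bound at file scope. */
  static char scratch[NAIVE_MAX(BUF_A, BUF_B)];

  /* Needs an ICE: case label. */
  static int classify(int n)
  {
          switch (n) {
          case NAIVE_MAX(BUF_A, BUF_B):
                  return 1;
          default:
                  return 0;
          }
  }

With rule (1), __builtin_max(BUF_A, BUF_B) would be usable in both
places directly, with no double mention of the arguments.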

For (2), I'd expect trivial value-range analysis to allow something like

  int x;

  ...
  if (x < 0)
    bail;
  size_t y = max(x, sizeof(foo));

Of course, specifying exactly which optimizations one can rely on having
been applied is impossible, but we have the same situation with our
current BUILD_BUG_ON() - many of them would trigger spuriously at -O0,
simply because the optimizations they rely on haven't run.
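
For reference, here is a stripped-down sketch of the kind of mechanism
BUILD_BUG_ON() is built on (the names below are made up, this is not the
kernel's actual macro): the error attribute only fires if the call to
the annotated function survives into code generation, so whether a given
check builds depends on how much the optimizer managed to prove and
eliminate.

  /* Optimizer-dependent compile-time assertion, in the spirit of the
   * kernel's BUILD_BUG_ON().  Relies on gcc/clang's error function
   * attribute; names are made up for illustration.
   */
  extern void my_build_bug_failed(void)
          __attribute__((error("MY_BUILD_BUG_ON failed")));

  #define MY_BUILD_BUG_ON(cond)                   \
          do {                                    \
                  if (cond)                       \
                          my_build_bug_failed();  \
          } while (0)

  static inline void check_sizes(void)
  {
          /* Fine when the optimizer proves the condition false and
           * removes the call; with optimizations off, the call may
           * survive and the build fails even though the assertion holds.
           */
          MY_BUILD_BUG_ON(sizeof(long) < sizeof(int));
  }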

Then we could just have _one_ simple '#define max __builtin_max', which
would work at file scope, would automatically give us max3() etc.
(because I'd imagine it would not be much harder for the compiler to
provide the variadic version once it already has code to compute the max
of two), and none of the preprocessor issues would apply.
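
The file-scope point deserves spelling out: the statement-expression
trick one reaches for today to get single evaluation is simply not
allowed outside a function, and its result is never an integer constant
expression. A small sketch (illustrative macro below, not the kernel's
max()):

  /* Typical single-evaluation max using a GNU statement expression.
   * Illustrative only - not the kernel's definition.
   */
  #define STMT_MAX(a, b) ({                       \
          __typeof__(a) _a = (a);                 \
          __typeof__(b) _b = (b);                 \
          _a > _b ? _a : _b;                      \
  })

  /*
   * static char buf[STMT_MAX(64, 100)];
   *
   * does not compile - gcc rejects braced groups outside a function -
   * so at file scope one falls back to writing the comparison out by
   * hand.  A __builtin_max() with rule (1) would make the commented-out
   * line just work.
   */
  char buf[64 > 100 ? 64 : 100];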

Dear Santa: Pretty please?

Rasmus

Footnotes:

This is of course very kernel-centric. A compiler developer
doing this would probably have to think about "what if floating-point
types are in the mix". I wouldn't mind if that was just disallowed, but
I can see how that might be a bit odd. I don't think it's hard to amend
the rules for that case: rule (2) could probably be used as-is, and (3)
could say "if any of the expressions is NaN, so is the whole thing" (and
if one cares which NaN, take the first among the expressions); infinite
values don't need special treatment wrt. min/max.

With my math hat on, I'd want the zero-expression variant
__builtin_max(int) to evaluate to INT_MIN (because that's the neutral
element for the binary max of two ints) and similarly for other types,
but it's probably better to just require at least two expressions.

