[Mesa-dev] [PATCH 01/17] mesa: consolidate definitions of max texture image units

Marek Olšák maraeo at gmail.com
Wed May 8 03:22:09 PDT 2013


On Thu, May 2, 2013 at 7:12 PM, Eric Anholt <eric at anholt.net> wrote:
> Marek Olšák <maraeo at gmail.com> writes:
>
>> Shaders are unified on most hardware (= same limits in all stages).
>> No idea what the assertion was good for.
>> ---
>>  src/mesa/main/config.h                 |    6 ++----
>>  src/mesa/main/context.c                |    6 ++----
>>  src/mesa/state_tracker/st_extensions.c |    2 +-
>>  3 files changed, 5 insertions(+), 9 deletions(-)
>>
>> diff --git a/src/mesa/main/config.h b/src/mesa/main/config.h
>> index 33b5ab0..76863bf 100644
>> --- a/src/mesa/main/config.h
>> +++ b/src/mesa/main/config.h
>
>> -   assert(MAX_COMBINED_TEXTURE_IMAGE_UNITS <= 8 * sizeof(GLbitfield));
>> -
>
> I think this assert is asserting that _EnabledUnits of gl_texture_attrib
> can fit the size of the Unit[] array, which seems important.

With Gallium drivers, _EnabledUnits is only used for the
fixed-function vertex program, where the number of texture coord units
is limited to 8. The fixed-function vertex program doesn't even use
the bits of _EnabledUnits; it only checks whether the variable is
non-zero, so it could just as well be a boolean.

Swrast also uses _EnabledUnits, though even there the variable seems
to be used for fixed-function texturing only.
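
To make that concrete, here is a minimal, self-contained sketch of the
situation (not Mesa's actual code; the struct, field and macro names
are illustrative stand-ins for gl_texture_attrib::_EnabledUnits and
the fixed-function limit): with at most 8 fixed-function texture coord
units, one bit per unit fits comfortably in a 32-bit GLbitfield, and
the fixed-function path only needs a non-zero check.

#include <assert.h>
#include <stdbool.h>
#include <stdio.h>

typedef unsigned int GLbitfield;      /* 32 bits on the platforms we care about */

#define MAX_TEXTURE_COORD_UNITS 8     /* fixed-function limit discussed above */

struct texture_state {
   GLbitfield enabled_units;          /* stand-in for _EnabledUnits */
};

/* The fixed-function vertex path only needs "is any unit enabled?" */
static bool any_unit_enabled(const struct texture_state *tex)
{
   return tex->enabled_units != 0;    /* individual bits are never inspected here */
}

int main(void)
{
   /* What the removed assert guaranteed: one bit per unit fits the
    * bitfield. With only 8 fixed-function coord units this holds
    * trivially (8 <= 32). */
   assert(MAX_TEXTURE_COORD_UNITS <= 8 * sizeof(GLbitfield));

   struct texture_state tex = { .enabled_units = 1u << 3 };  /* unit 3 enabled */
   printf("fixed-function texturing active: %s\n",
          any_unit_enabled(&tex) ? "yes" : "no");
   return 0;
}

Compiled and run, this just reports that fixed-function texturing is
active; the point is that nothing in that path ever needs more than
the low 8 bits of the variable.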

Does that explanation satisfy you that _EnabledUnits doesn't need
more than 32 bits, or am I missing something else?

Marek

