[Mesa-dev] [PATCH-RFC] i965: do not advertise MESA_FORMAT_Z_UNORM16 support

Chia-I Wu olvaffe at gmail.com
Wed Feb 19 21:02:34 PST 2014


On Thu, Feb 20, 2014 at 7:03 AM, Kenneth Graunke <kenneth at whitecape.org> wrote:
> On 02/19/2014 02:27 PM, Ian Romanick wrote:
>> On 02/19/2014 12:08 PM, Kenneth Graunke wrote:
>>> On 02/18/2014 09:48 PM, Chia-I Wu wrote:
>>>> Since 73bc6061f5c3b6a3bb7a8114bb2e1ab77d23cfdb, Z16 support has
>>>> not been advertised for OpenGL ES contexts because of its terrible
>>>> performance.  It is still enabled for desktop GL because it was
>>>> believed that GL 3.0+ requires Z16.
>>>>
>>>> It turns out that only GL 3.0 requires Z16, and that the requirement
>>>> was corrected in later GL versions.  In light of that, and per Ian's
>>>> suggestion, stop advertising Z16 support by default, and add a drirc
>>>> option, gl30_sized_format_rules, so that users can override the
>>>> default.
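
As a minimal sketch of the policy the commit message describes, assuming
the drirc option is a plain boolean: Z16 stays off by default and is only
advertised again when a desktop GL user sets gl30_sized_format_rules.
The struct and helper names below are illustrative stand-ins, not the
actual i965 code.

    #include <stdbool.h>
    #include <stdio.h>

    /* Hypothetical stand-in for the drirc option cache; the real driver
     * would query its driOptionCache instead. */
    struct dri_options {
       bool gl30_sized_format_rules;
    };

    /* Decide whether to advertise MESA_FORMAT_Z_UNORM16, following the
     * commit message: off by default, user-overridable for desktop GL. */
    static bool
    advertise_z16(bool is_desktop_gl, const struct dri_options *opts)
    {
       return is_desktop_gl && opts->gl30_sized_format_rules;
    }

    int
    main(void)
    {
       struct dri_options opts = { .gl30_sized_format_rules = false };
       printf("default: %d\n", advertise_z16(true, &opts));

       opts.gl30_sized_format_rules = true;
       printf("with drirc override: %d\n", advertise_z16(true, &opts));
       return 0;
    }
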
>>
>>> I actually don't think that GL 3.0 requires Z16, either.
>>
>>> In glspec30.20080923.pdf, page 180, it says: "[...] memory
>>> allocation per texture component is assigned by the GL to match the
>>> allocations listed in tables 3.16-3.18 as closely as possible.
>>> [...]
>>
>>> Required Texture Formats [...] In addition, implementations are
>>> required to support the following sized internal formats.
>>> Requesting one of these internal formats for any texture type will
>>> allocate exactly the internal component sizes and types shown for
>>> that format in tables 3.16-3.17:"
>>
>>> Notably, however, GL_DEPTH_COMPONENT16 does /not/ appear in table
>>> 3.16 or table 3.17.  It appears in table 3.18, where the "exact"
>>> rule doesn't apply, and thus we fall back to the "as closely as
>>> possible" rule.
>>
>>> The confusing part is that the ordering of the tables in the PDF
>>> is:
>>
>>> Table 3.16 (pages 182-184)
>>> Table 3.18 (bottom of page 184 to top of 185)
>>> Table 3.17 (page 185)
>>
>>> I'm guessing that people saw table 3.16, then saw the one after
>>> with DEPTH_COMPONENT* formats, and assumed it was 3.17.  But it's
>>> not.
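
To make the practical consequence of that reading concrete: under the
"as closely as possible" rule, an application that asks for
DEPTH_COMPONENT16 is not guaranteed 16 bits, and it can observe what it
actually got.  A small standalone sketch, using GLFW only to obtain a
context (GLFW is an assumption of this sketch, not something the thread
uses):

    /* Request a DEPTH_COMPONENT16 texture and print how many depth
     * bits were actually allocated. */
    #define GLFW_INCLUDE_GLEXT
    #include <GLFW/glfw3.h>
    #include <stdio.h>

    int
    main(void)
    {
       if (!glfwInit())
          return 1;

       /* An invisible window is enough to get a GL context. */
       glfwWindowHint(GLFW_VISIBLE, GLFW_FALSE);
       GLFWwindow *win = glfwCreateWindow(64, 64, "z16-check", NULL, NULL);
       if (!win) {
          glfwTerminate();
          return 1;
       }
       glfwMakeContextCurrent(win);

       GLuint tex;
       glGenTextures(1, &tex);
       glBindTexture(GL_TEXTURE_2D, tex);
       glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT16, 64, 64, 0,
                    GL_DEPTH_COMPONENT, GL_UNSIGNED_SHORT, NULL);

       /* Under the "as closely as possible" rule this may legitimately
        * report more than 16. */
       GLint bits = 0;
       glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_DEPTH_SIZE,
                                &bits);
       printf("GL_DEPTH_COMPONENT16 allocated with %d depth bits\n", bits);

       glfwDestroyWindow(win);
       glfwTerminate();
       return 0;
    }

On a driver without Z16 this would typically report something like 24
bits, which remains conformant under the wording quoted above.
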
>>
>> Yay LaTeX!  Thank you for putting things in random order because it
>> fit better. :(
>>
>>> I think we should just drop Z16 support entirely, and I think we
>>> should remove the requirement from the Piglit test.
>>
>> If the test is wrong, and it sounds like it is, then I'm definitely in
>> favor of changing it.
>>
>> The reason to have Z16 is low-bandwidth GPUs in resource-constrained
>> environments.  If an app specifically asks for Z16, then there's a
>> non-zero (though possibly infinitesimal) probability that it's doing
>> so for a reason.  For at least some platforms, isn't there "just" a
>> work-around to implement to fix the performance issue?  Doesn't the
>> performance issue only affect some platforms to begin with?
>>
>> Maybe just change the check to
>>
>>    ctx->TextureFormatSupported[MESA_FORMAT_Z_UNORM16] =
>>       ! platform has z16 performance issues;
>
> Currently, all platforms have Z16 performance issues.  On Haswell and
> later, we could potentially implement the PMA stall optimization, which
> I believe would reduce(?) the problem.  I'm not sure it would eliminate
> it, though.
>
> I think the best course of action is:
> 1. Fix the Piglit test to not require precise depth formats.
> 2. Disable Z16 on all generations.
> 3. Add a "to do" item for implementing the HSW+ PMA stall optimization.
> 4. Add a "to do" item for re-evaluating Z16 on HSW+ once that's done.

I've sent a fix for the piglit test.  What is the "PMA stall
optimization"?  I could not find any reference to it in the public
docs.


>
> --Ken
>



-- 
olv at LunarG.com

