[Mesa-dev] [PATCH 30/30] mesa/st: require linear interpolation for ARB_texture_float

Mathias Fröhlich Mathias.Froehlich at gmx.net
Thu Nov 22 12:08:40 UTC 2018


Hi,

On Monday, 19 November 2018 20:17:38 CET Roland Scheidegger wrote:
> FWIW this looks like a rather similar incident to me to what happened when mesa began to verify the max vertex stride (which needs to be 2048 with GL 4.4, whereas r600 can only do 2047). I argued there that it's a much better idea to lie about the GL version rather than about the specific vertex stride bit, but I was rather unsuccessful, and not everybody apparently shares this view...

If I had fully tracked the mailing list, Roland, you would have gotten at least
my +1 for lying about the GL version instead of about the max stride.
The application would still have had the chance to query the limiting
value - maybe with an off-by-one surprise compared to the spec, but with a
value that it can actually rely on. Now that we lie about the stride, the
application has no way anymore to query what the real limit is. The
application may simply not work correctly even though everything it can
query is within limits.
For this max stride value, the only excuse is that such huge strides for
vertex attributes are highly unlikely in practice, so the problem is very
unlikely to show up.
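
To make that concrete, here is a minimal sketch (mine, not from any real
application) of how an application is expected to consume such a limit.
It assumes a GL loader header, e.g. libepoxy, that exposes the GL 4.4
enum GL_MAX_VERTEX_ATTRIB_STRIDE:

#include <epoxy/gl.h>

/* Returns nonzero if a vertex buffer with the given stride can be used
 * according to what the driver advertises. */
static int stride_is_supported(GLsizei stride)
{
        GLint max_stride = 0;
        glGetIntegerv(GL_MAX_VERTEX_ATTRIB_STRIDE, &max_stride);

        /* With an honest limit (2047 on r600) the application can repack
         * its vertex data or pick a fallback path.  With a driver that
         * reports 2048 but cannot actually handle it, this check passes
         * and the draw silently misbehaves. */
        return stride <= max_stride;
}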

IMO, the application's point of view, or the application author's point of
view, is the one that should drive decisions. First of all because this driver
library has the one really major purpose of serving exactly those applications
with a reliable and predictable 3D API. But what is predictable when an
application cannot rely on the extensions and constants it queries?

My two cents, as somebody who mostly writes such applications.

best
Mathias


> 
> ________________________________________
> From: mesa-dev <mesa-dev-bounces at lists.freedesktop.org> on behalf of Ilia Mirkin <imirkin at alum.mit.edu>
> Sent: Monday, November 19, 2018 5:37:58 PM
> To: Erik Faye-Lund
> Cc: ML Mesa-dev; Timothy Arceri; Emil Velikov
> Subject: Re: [Mesa-dev] [PATCH 30/30] mesa/st: require linear interpolation for ARB_texture_float
> 
> On Mon, Nov 19, 2018 at 11:30 AM Erik Faye-Lund
> <erik.faye-lund at collabora.com> wrote:
> >
> > On Mon, 2018-11-19 at 11:13 -0500, Ilia Mirkin wrote:
> > > On Mon, Nov 19, 2018 at 10:40 AM Erik Faye-Lund
> > > <erik.faye-lund at collabora.com> wrote:
> > > > On Mon, 2018-11-19 at 10:02 -0500, Ilia Mirkin wrote:
> > > > > Unfortunately this will drop GL 3.0 from Adreno A3xx. I think we'd
> > > > > rather fake linear interpolation with F32 textures, which are never
> > > > > used, than lose GL 3.0 there...
> > > >
> > > > Right...
> > > >
> > > > I guess this means that this GPU never really did support OpenGL 3.0,
> > > > and this will make some applications misbehave. There are definitely
> > > > applications out there that will run into surprisingly bad problems
> > > > when features like these are not supported.
> > > >
> > > > For instance, if an application tries to take a local gradient by
> > > > sampling a texture twice with a tiny epsilon (a common trick in
> > > > tangent-free normal mapping), it will essentially get garbage, which
> > > > can lead to close to useless rendering.
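
A minimal CPU-side sketch (mine, not from the mail; the names are made up)
of the gradient trick described above - finite differencing two samples a
tiny epsilon apart - and why it needs linear filtering. "tex" stands in
for a one-dimensional FP32 texture:

#include <math.h>

/* Nearest-neighbour lookup: effectively what you get when the driver
 * cannot filter FP32 textures linearly. */
static float sample_nearest(const float *tex, int size, float u)
{
        int i = (int)roundf(u * (float)(size - 1));
        if (i < 0) i = 0;
        if (i > size - 1) i = size - 1;
        return tex[i];
}

/* Linear filtering between the two neighbouring texels. */
static float sample_linear(const float *tex, int size, float u)
{
        float x = u * (float)(size - 1);
        int i0 = (int)floorf(x);
        if (i0 < 0) i0 = 0;
        if (i0 > size - 1) i0 = size - 1;
        int i1 = (i0 + 1 > size - 1) ? size - 1 : i0 + 1;
        float f = x - (float)i0;
        return tex[i0] * (1.0f - f) + tex[i1] * f;
}

/* Finite-difference gradient.  With sample_linear this approximates the
 * local slope; with sample_nearest both samples usually hit the same
 * texel, so the result collapses to 0 or spikes at texel boundaries -
 * exactly the kind of garbage normals described above. */
static float gradient(const float *tex, int size, float u, float eps,
                      float (*sample)(const float *, int, float))
{
        return (sample(tex, size, u + eps) - sample(tex, size, u)) / eps;
}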
> > > >
> > > > I've worked on applications that would have had problems like these
> > > > if drivers reported the wrong version, but that could work correctly
> > > > when they report the right version.
> > > >
> > > > Either way, I don't believe faking like that belongs in core Mesa.
> > > > So if the Freedreno developers really want this kind of behavior,
> > > > perhaps something like this could be a better move?
> > > >
> > > > ---8<---
> > > > diff --git a/src/gallium/drivers/freedreno/freedreno_screen.c b/src/gallium/drivers/freedreno/freedreno_screen.c
> > > > index 88d91a91234..de811371f05 100644
> > > > --- a/src/gallium/drivers/freedreno/freedreno_screen.c
> > > > +++ b/src/gallium/drivers/freedreno/freedreno_screen.c
> > > > @@ -260,6 +260,11 @@ fd_screen_get_param(struct pipe_screen *pscreen, enum pipe_cap param)
> > > >                 return 0;
> > > >
> > > >         case PIPE_CAP_TEXTURE_FLOAT_LINEAR:
> > > > +               /* HACK: A330 doesn't support linear interpolation of FP32 textures, but
> > > > +                * to keep OpenGL 3.0 support, we lie about it here.
> > > > +                */
> > > > +               return is_a3xx(screen) || is_a4xx(screen) || is_a5xx(screen) || is_a6xx(screen);
> > > > +
> > > >         case PIPE_CAP_CUBE_MAP_ARRAY:
> > > >         case PIPE_CAP_SAMPLER_VIEW_TARGET:
> > > >         case PIPE_CAP_TEXTURE_QUERY_LOD:
> > > > ---8<---
> > > >
> > > > Alternatively, they could ask users to override the GL version for
> > > > applications that need GL 3.0 but don't have problems with the lack
> > > > of FP32 interpolation...
> > >
> > > GL 3.0 brings SO much stuff in though, and GL 3.1 brings core
> > > profiles.
> > >
> > > Your proposed solution will also expose the OES_bla ext, which we
> > > definitely don't want to do. I'd instead keep it loose. The hardware
> > > that doesn't support this stuff is generally targeted at ES. However,
> > > it's convenient to have desktop GL both for test coverage (piglit) and
> > > for regular use.
> > >
> > > Tons of desktop stuff doesn't work on Adreno, starting with different
> > > cull modes for front and back. Setting the polygon mode for quads to
> > > lines shows you the internal line. Edge mode isn't supported. Probably
> > > 10000 other things.
> > >
> > > But it's still very useful to have GL 3.x advertised.
> >
> > As I tried to point out, that's only useful from one point of view.
> > From an application developer's point of view, it's *worse* to expose
> > GL 3.0 when it's not really supported. There's no way for applications
> > to tell if filtering will work or not. When the correct version is
> > reported, the application can provide a fallback path for the features
> > it needs, or fall back to lower-quality rendering.
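
For illustration, a minimal sketch (mine, with stubbed-out application
functions) of such a version-driven fallback, assuming libepoxy's
epoxy_gl_version(), which returns e.g. 30 for a GL 3.0 context:

#include <stdio.h>
#include <epoxy/gl.h>

/* Hypothetical application paths, stubbed out for the sketch. */
static void use_float_texture_path(void)   { puts("FP32 normal mapping"); }
static void use_low_quality_fallback(void) { puts("low-quality fallback"); }

/* Pick the rendering path based on the version the driver reports. */
static void choose_normal_mapping_path(void)
{
        if (epoxy_gl_version() >= 30)
                use_float_texture_path();
        else
                use_low_quality_fallback();
}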
> >
> > When you're outside the spec, you kinda have to pick your poison. But I
> > don't think a single driver wanting to fake the support should affect
> > all other drivers regardless.
> 
> You're looking at this as some hypothetical driver which supports a
> random smattering of extension enables, and trying to make mesa
> resilient against such an adversarial opponent.
> 
> But that's not what's going on here. Features come in packs. I think
> that a3xx on adreno is the only hardware affected by this change in
> practice.
> 
> >
> > And the other legacy GL features that Adreno misses are IMO completely
> > different, exactly because those don't force other drivers to lie about
> > their feature set. So, I agree that it's not ideal, but there's not
> > really anything to do about those missing features.
> >
> > But if you want to keep the behavior the same, perhaps you could setenv
> > MESA_GL_VERSION_OVERRIDE when creating the screen for A3xx?
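
A rough sketch (mine, untested; the helper name is made up) of that idea,
assuming fd_screen_create() in freedreno_screen.c is a reasonable hook:

#include <stdlib.h>

/* Intended to be called for a3xx from fd_screen_create() (assumption),
 * before st/mesa derives the GL version from the pipe caps.  The final 0
 * keeps an explicit user-provided override intact. */
static void a3xx_default_gl_version_override(void)
{
        setenv("MESA_GL_VERSION_OVERRIDE", "3.0", 0);
}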
> 
> But that should be the default behavior - the desktop support is
> imperfect there. Nobody really cares. Why make users jump through
> hoops?
> 
>   -ilia
> _______________________________________________
> mesa-dev mailing list
> mesa-dev at lists.freedesktop.org
> https://lists.freedesktop.org/mailman/listinfo/mesa-dev
> 
