[Nouveau] Nouveau Digest, Vol 131, Issue 3

Ilia Mirkin imirkin at alum.mit.edu
Fri Mar 2 22:29:04 UTC 2018


On Fri, Mar 2, 2018 at 5:16 PM, Mario Kleiner
<mario.kleiner.de at gmail.com> wrote:
> On 03/01/2018 07:21 PM, nouveau-request at lists.freedesktop.org wrote:
>>
>> Date: Thu, 1 Mar 2018 08:15:55 -0500
>> From: Ilia Mirkin <imirkin at alum.mit.edu>
>> To: Mario Kleiner <mario.kleiner.de at gmail.com>
>> Cc: nouveau <nouveau at lists.freedesktop.org>
>> Subject: Re: [Nouveau] [PATCH] Fix colormap handling at screen depth 30.
>>
>> NVLoadPalette is pretty hard-coded to 256. I haven't looked at what
>> all xf86HandleColormaps does, but it seems pretty suspicious. Also
>
>
> It's also pretty dead :). NVLoadPalette is never actually used, because
> nouveau hooks up the .gamma_set function in xf86CrtcFuncsRec, so
> xf86HandleColormaps ignores the NVLoadPalette pointer. IOW, dead code that
> can be removed. I'll send a follow-up patch once this one is in. We have
> similar dead code in intel-ddx and modesetting-ddx which only serves to
> confuse the reader.
>
>> note that the kernel currently only exposes a 256-sized LUT to
>> userspace, even for 10bpc modes.
>>
>
> Yes, but that doesn't matter. In xbgr2101010 mode, the GPU seems to properly
> interpolate between the 256 (or 257) hw LUT slots, as far as my measurements
> go. The X server maintains separate color palettes, per-X-screen xf86vidmode
> gamma LUTs and per-CRTC RandR gamma LUTs, and merges them together to
> produce the final 256-slot hw LUT for the kernel, up/downsampling if needed.

OK, so even if you're passing 1024 to xf86HandleColormaps, gamma_set
still only gets called with a 256-entry LUT? If so, that works nicely
here, but it's not intuitive :)
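
(For anyone following along: a rough sketch of how those two paths relate.
This is not nouveau's actual code -- the demo_* names, the 256/8 sizes and
the flags are purely illustrative -- but once .gamma_set is filled in on
xf86CrtcFuncsRec, the server drives the hardware LUT through that hook and
the LoadPalette callback handed to xf86HandleColormaps() never fires, which
is why NVLoadPalette is dead code.)

/* Sketch only, not nouveau's actual code.  demo_* names are placeholders. */
#include "xf86.h"
#include "xf86Crtc.h"
#include "xf86cmap.h"

static void
demo_gamma_set(xf86CrtcPtr crtc, CARD16 *red, CARD16 *green, CARD16 *blue,
               int size)
{
    /* "size" is however many LUT entries the server hands over (256 in the
     * case discussed above); the DDX programs the hardware LUT from these. */
}

static void
demo_load_palette(ScrnInfoPtr pScrn, int numColors, int *indices,
                  LOCO *colors, VisualPtr pVisual)
{
    /* Never reached once .gamma_set is hooked up -- this is the role
     * NVLoadPalette plays today, i.e. dead code. */
}

static const xf86CrtcFuncsRec demo_crtc_funcs = {
    .gamma_set = demo_gamma_set,
    /* ... the other CRTC hooks ... */
};

static Bool
demo_colormap_init(ScreenPtr pScreen)
{
    /* The sizes passed here mainly dictate how large the server's own
     * intermediate palette/gamma tables are; the flags are illustrative. */
    return xf86HandleColormaps(pScreen, 256, 8, demo_load_palette, NULL,
                               CMAP_PALETTED_TRUECOLOR |
                               CMAP_RELOAD_ON_MODE_SWITCH);
}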

> So adapting the values for xf86HandleColormaps() is about properly sizing
> those internal palettes and LUTs to avoid out-of-bounds segfaults or loss
> of precision somewhere in the whole multi-step remapping procedure, because
> one of the server-internal tables becomes a bottleneck with too few slots.
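
(To make that up/downsampling step concrete: a minimal, purely illustrative
resampler -- not the server's actual code, and resample_ramp is a made-up
name -- that maps a gamma ramp of one size onto another by linear
interpolation, e.g. a 1024-entry client ramp onto a 256-slot hardware LUT.)

/* Illustrative only; assumes in_size >= 2 and out_size >= 2. */
#include <stdint.h>
#include <stddef.h>

static void
resample_ramp(const uint16_t *in, size_t in_size,
              uint16_t *out, size_t out_size)
{
    for (size_t i = 0; i < out_size; i++) {
        /* Map output slot i onto a fractional position in the input ramp,
         * then blend the two neighbouring input entries. */
        double pos  = (double)i * (double)(in_size - 1) / (double)(out_size - 1);
        size_t lo   = (size_t)pos;
        size_t hi   = (lo + 1 < in_size) ? lo + 1 : lo;
        double frac = pos - (double)lo;

        out[i] = (uint16_t)((1.0 - frac) * in[lo] + frac * in[hi] + 0.5);
    }
}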
>
> This variant is the one that avoids crashes and also the visual artifacts
> that I, at least, observed on Tesla GPUs at depth 30.
>
> One weird thing I still observed, though, is that in depth 30 xbgr2101010
> scanout mode nouveau used dithering when I tried to output a linear
> intensity ramp, even though I disabled dithering via the xrandr property.
> But that is an unrelated problem.

It's sending 8bpc data out to the screen, unless you're using a DP
monitor (and you'd probably need a Kepler GPU for that anyway).
Although setting dither to off should still kill the dithering...
probably some experimentation required.
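
(For reference, that knob is a RandR output property; with the nouveau DDX
it should be something along the lines of

    xrandr --output DP-1 --set "dithering mode" off

where the output name, and possibly the property name and values, depend on
the connector and driver version. Whether the hardware actually honours it
at depth 30 is exactly the open question above.)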

I'm pretty sure I could tell that it was dithering for me on Kepler.
When I added support for 10bpc dither, the dither effect went away
(and it looked no different than the 8bpc gradient). I didn't try
explicitly disabling dithering -- I'll try that tonight and see what
happens (except I've got a Fermi plugged in now).

  -ilia

