[Nouveau] Nouveau Digest, Vol 131, Issue 3

Mario Kleiner mario.kleiner.de at gmail.com
Mon Mar 5 06:17:12 UTC 2018


On 03/03/2018 12:59 AM, Ilia Mirkin wrote:
> On Fri, Mar 2, 2018 at 6:46 PM, Mario Kleiner
> <mario.kleiner.de at gmail.com> wrote:
>> On 03/02/2018 11:29 PM, Ilia Mirkin wrote:
>>> OK, so even if you're passing 1024 to xf86HandleColormaps, gamma_set
>>> still only gets called with a 256-entry LUT? If so, that works nicely
>>> here, but is not intuitive :)
>>
>> Yes. Lots of remapping in the server; I get dizzy every time I look at
>> it, and forget almost immediately how stuff fits together when I don't
>> look at it. Anyway, the final downsampling from the 1024-entry ramp to
>> the 256-entry hw LUT happens in xf86RandR12CrtcComputeGamma(), see
>>
>> https://cgit.freedesktop.org/xorg/xserver/commit/?id=b5f9fcd50a999a00128c0cc3f6e7d1f66182c9d5
>>
>> for the latest. I'll propose that one to get cherry-picked into the
>> server-1.19 branch as well.
> 
> Hrmph. That means we should try to adjust the gamma_set helper to do
> the sampling when it receives a 1024-sized LUT, if people use older
> X servers (seems likely). Handling just that one case should hopefully
> be straightforward.

As far as I can see, we never receive anything but a 256-slot LUT via
gamma_set. The server initializes xf86Crtc's gamma_size to 256 at
startup, and none of the DDXes ever override that with actual info from
the kernel.

What happens on older servers without that patch, if color depth 30 is
selected, is simply that gamma table updates become no-ops, so the
server keeps running with the identity gamma table set up at startup.
Not perfect, but also not that bad, given that most people probably run
their setups with a gamma of 1.0 anyway. At the default depth 24,
things work as usual.
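
If it ever turns out that a 1024-entry ramp can reach the driver's
gamma_set hook on some server version, the resampling itself would be
trivial to add there. Rough, untested sketch; the function name and
calling convention are made up, not what the nouveau hook actually
looks like:

#include <stdint.h>

/* Hypothetical helper: collapse an in_size-entry ramp (e.g. the 1024
 * entries a depth 30 server hands out) down to the hw_size-entry (256)
 * hardware LUT by picking evenly spaced source entries. */
static void
downsample_lut(const uint16_t *in_r, const uint16_t *in_g,
               const uint16_t *in_b, int in_size,
               uint16_t *out_r, uint16_t *out_g, uint16_t *out_b,
               int hw_size)
{
    int i, j;

    for (i = 0; i < hw_size; i++) {
        /* Map hw slot i to an evenly spaced source slot. */
        j = i * (in_size - 1) / (hw_size - 1);
        out_r[i] = in_r[j];
        out_g[i] = in_g[j];
        out_b[i] = in_b[j];
    }
}

The idea is just to do in the driver what the server-side downsampling
in xf86RandR12CrtcComputeGamma() would otherwise do.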

> 
>>> It's sending 8bpc data out to the screen, unless you're using a DP
>>> monitor (and you'd probably need a Kepler GPU for that anyway).
>>
>>
>> I think a long time ago I tested 10 bpc output over VGA with the
>> proprietary driver on a GeForce 8800, and the current README for the
>> NVIDIA blob says it can do 10 bpc over VGA and DisplayPort, but only
>> dithering over DVI and HDMI.
> 
> I think that 10bpc HDMI support came with HDMI 1.3. Older devices (esp.
> Tesla era) won't support that. Intuitively, it seems like it should Just
> Work (tm) over VGA. Not sure which DP version first supported 10bpc+.
> 
>> I think I read somewhere that at least Pascal could do some deep color
>> output over HDMI as well, which makes sense for HDMI-based HDR-10 support.
> 
> I believe Kepler+ devices (and perhaps GF119 as well) should support
> higher bpc over HDMI. However there's no support for that in nouveau
> right now. I happen to have a monitor (TV) that advertises 12bpc
> support, so I may play with it.
> 
> Here's how dithering is controlled:
> 
> https://github.com/envytools/envytools/blob/master/rnndb/display/nv_evo.xml#L435
> 
> (and also exposed in the public display class docs at
> http://download.nvidia.com/open-gpu-doc/Display-Class-Methods/2/ )
> 
> In theory turning off dithering should also flip off that ENABLE bit,
> but perhaps we messed something up. Or HDMI is extra-special and
> doesn't respect that bit and always has it enabled.
> 
>    -ilia
> 
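
Fwiw, for anyone who wants to see what the kernel currently selects for
dithering on a given output: the connector properties nouveau exposes
("dithering mode" / "dithering depth", iirc) can be dumped with a few
lines of libdrm. Rough, untested sketch, assuming the nouveau device is
/dev/dri/card0:

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <xf86drm.h>
#include <xf86drmMode.h>

int main(void)
{
    int fd = open("/dev/dri/card0", O_RDWR);
    drmModeRes *res = fd >= 0 ? drmModeGetResources(fd) : NULL;

    for (int c = 0; res && c < res->count_connectors; c++) {
        drmModeConnector *conn = drmModeGetConnector(fd, res->connectors[c]);
        if (!conn)
            continue;

        /* Print every connector property with "dither" in its name. */
        for (int i = 0; i < conn->count_props; i++) {
            drmModePropertyRes *prop = drmModeGetProperty(fd, conn->props[i]);
            if (prop && strstr(prop->name, "dither"))
                printf("connector %u: %s = %llu\n", conn->connector_id,
                       prop->name,
                       (unsigned long long)conn->prop_values[i]);
            drmModeFreeProperty(prop);
        }
        drmModeFreeConnector(conn);
    }
    drmModeFreeResources(res);
    return 0;
}

Forcing a specific mode, e.g. via the corresponding output property,
should then show whether the ENABLE bit really gets dropped when
dithering is switched off.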

