[Nouveau] nouveau 30bpp / deep color status

Ville Syrjälä ville.syrjala at linux.intel.com
Wed Feb 7 17:01:55 UTC 2018


On Wed, Feb 07, 2018 at 06:28:42PM +0200, Ville Syrjälä wrote:
> On Sun, Feb 04, 2018 at 06:50:45PM -0500, Ilia Mirkin wrote:
> > In case anyone's curious about 30bpp framebuffer support, here's the
> > current status:
> > 
> > Kernel:
> > 
> > Ben and I have switched the code to using a 256-based LUT for Kepler+,
> > and I've also written a patch to cause the addfb ioctl to use the
> > proper format. You can pick this up at:
> > 
> > https://github.com/skeggsb/linux/commits/linux-4.16 (note the branch!)
> > https://patchwork.freedesktop.org/patch/202322/
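
A rough sketch of what the addfb side of that amounts to (illustrative
only, not the actual patch from the link above): the legacy ADDFB ioctl
passes bpp/depth rather than a fourcc, and a depth-30 request on nouveau
needs to resolve to the XBGR component order the hardware scans out.

#include <drm_fourcc.h>	/* from libdrm */
#include <stdint.h>

/* Hypothetical driver-side lookup for the legacy ADDFB (bpp/depth) path. */
static uint32_t pick_addfb_format(uint32_t depth)
{
	switch (depth) {
	case 24:
		return DRM_FORMAT_XRGB8888;
	case 30:
		/* pre-Kepler scanout only handles the XBGR ordering */
		return DRM_FORMAT_XBGR2101010;
	default:
		return 0;	/* unsupported */
	}
}
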
> > 
> > With these two, you should be able to use "X -depth 30" again on any
> > G80+ GPU to bring up a screen (as you could in kernel 4.9 and
> > earlier). However, this still has some deficiencies, some of which I've
> > addressed:
> > 
> > xf86-video-nouveau:
> > 
> > DRI3 was broken, and Xv was broken. Patches available at:
> > 
> > https://github.com/imirkin/xf86-video-nouveau/commits/master
> > 
> > mesa:
> > 
> > The NVIDIA hardware (pre-Kepler) can only do XBGR scanout. Further, the
> > nouveau KMS doesn't add XRGB scanout for Kepler+ (although it could).
> > Mesa was only enabled for XRGB, so I've piped XBGR through all the
> > same places:
> > 
> > https://github.com/imirkin/mesa/commits/30bpp
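
For reference, the only difference between the two 10bpc layouts is where
red and blue land in the 32-bit word; a quick sketch of the packing (per
the drm_fourcc.h definitions, values 0..1023 per channel, X bits ignored):

#include <stdint.h>

static uint32_t pack_xrgb2101010(uint32_t r, uint32_t g, uint32_t b)
{
	return (r << 20) | (g << 10) | b;	/* DRM_FORMAT_XRGB2101010 */
}

static uint32_t pack_xbgr2101010(uint32_t r, uint32_t g, uint32_t b)
{
	return (b << 20) | (g << 10) | r;	/* DRM_FORMAT_XBGR2101010 */
}
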
> > 
> > libdrm:
> > 
> > For testing, I added a modetest gradient pattern split horizontally.
> > Top half is 10bpc, bottom half is 8bpc. This is useful for seeing
> > whether you're really getting 10bpc, or if things are getting
> > truncated along the way. Definitely hacky, but ... I wasn't intending
> > to upstream it anyway:
> > 
> > https://github.com/imirkin/drm/commit/9b8776f58448b5745675c3a7f5eb2735e3989441
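
The pattern itself is easy to reproduce if anyone wants to re-create it
rather than pull that commit; roughly (assuming an XBGR2101010
framebuffer, not the actual modetest code):

#include <stdint.h>

/* Top half ramps at full 10-bit precision, bottom half is truncated to
 * 8-bit steps first, so visibly smoother banding in the top half means
 * real 10bpc made it all the way to the screen. */
static void fill_split_gradient(uint32_t *fb, int width, int height,
				int stride_px)
{
	for (int y = 0; y < height; y++) {
		for (int x = 0; x < width; x++) {
			uint32_t v = x * 1023 / (width - 1);
			if (y >= height / 2)
				v &= ~3u;	/* drop to 8bpc steps */
			/* grey pixel, XBGR2101010 component order */
			fb[y * stride_px + x] = (v << 20) | (v << 10) | v;
		}
	}
}
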
> > 
> > -------------------------------------
> > 
> > Results with the patches (tested on a GK208B and a "deep color" TV over HDMI):
> >  - modetest with a 10bpc gradient shows up smoother than an 8bpc
> > gradient. However, it's still dithered to 8bpc, not "real" 10bpc.
> >  - things generally work in X -- dri2 and dri3, xv, and obviously
> > regular X rendering / acceleration
> >  - lots of X software can't handle 30bpp modes (mplayer hates it for
> > xv and x11 rendering, aterm bails on shading the root pixmap, probably
> > others)
> > 
> > I'm also told that with DP, it should actually send the higher-bpc
> > data over the wire. With HDMI, we're still stuck at 24bpp for now
> > (although the hardware can do 36bpp as well). This is why my gradient
> > result above was still dithered.
> > 
> > Things to do - mostly nouveau specific, but probably some general
> > infra needed too:
> >  - Figure out how to properly expose the 1024-sized LUT
> 
> We have the properties in the kernel. Not sure if X11 could expose it
> to clients somehow, or whether we would just have to interpolate the
> missing bits in the ddx?
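
If it does end up being interpolation in the ddx, the obvious approach is
just linear interpolation of the 256-entry ramp up to the 1024 hardware
entries; a sketch (names are illustrative, not actual xf86-video-nouveau
code):

#include <stdint.h>

/* Expand a 256-entry gamma ramp (16-bit values, as X hands them to the
 * driver) to a 1024-entry hardware LUT by linear interpolation. */
static void expand_lut_256_to_1024(const uint16_t in[256], uint16_t out[1024])
{
	for (int i = 0; i < 1024; i++) {
		int idx  = i >> 2;	/* 0..255 */
		int frac = i & 3;	/* position between two input entries */
		int32_t a = in[idx];
		int32_t b = in[idx < 255 ? idx + 1 : 255];
		out[i] = (uint16_t)(a + (b - a) * frac / 4);
	}
}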

Oh, and I think we're going to have to come up with a fancier uapi for
this stuff, because in the future the input points may not be evenly
spaced (for HDR). Also, the hardware may provide various different modes
for the gamma LUTs with different tradeoffs, so we may even want to
enumerate the different modes and let userspace pick the one that best
suits its needs.
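
To make that concrete, something along these lines (purely hypothetical,
nothing like this exists in the kernel today): each mode would advertise
its size and precision, and entries would carry an explicit input point
instead of assuming even spacing.

#include <stdint.h>

struct gamma_lut_mode {		/* one per hardware-supported LUT mode */
	uint32_t size;		/* number of entries, e.g. 256 or 1024 */
	uint32_t precision;	/* bits per component the hardware keeps */
	uint32_t flags;		/* e.g. "input points are explicit" */
};

struct gamma_lut_entry {
	uint16_t input;		/* explicit input coordinate, for HDR-style
				 * non-even spacing */
	uint16_t red, green, blue;
};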

-- 
Ville Syrjälä
Intel OTC

