Deep Color support
John Kåre Alsaker
john.kare.alsaker at gmail.com
Sun Apr 27 04:57:45 PDT 2014
On Sun, Apr 27, 2014 at 12:30 PM, Wolfgang Draxinger
<wdraxinger.maillist at draxit.de> wrote:
> On Sun, 27 Apr 2014 12:11:39 +0200
> John Kåre Alsaker
> <john.kare.alsaker at gmail.com> wrote:
>
>> I implemented support for ABGR16161616 framebuffers in mesa/wl_drm.
>> My patch has bit-rotted a bit now, but it gives you an idea about
>> what to do:
>> https://github.com/Zoxc/mesa/commit/73f39f1366287bab02c993cb3537980e89b3cdca
>>
>> My motivation for this was to have clients render with linear gamma.
>
> Well, unless there's a nonlinear scanout LUT, so far with current
> technology and software the values in the backing store go to the
> display linearly.
>
> Not that this was particularly sane; the sane thing would be to
> associate a color profile with the backing store and have the graphics
> system apply a color transform close to scanout, a task that is
> trivial to implement in a compositor.
I have patches which will do the linear to sRGB gamma conversion for
the gl-renderer in weston, which could be trivially extended to use a
3D LUT for a full color conversion. They will also do the sRGB to
linear gamma conversion for clients not rendering with linear gamma,
possibly using an EGL extension I wrote to do this in hardware.
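For reference, the conversions in question are just the standard sRGB
transfer functions (IEC 61966-2-1). A minimal C sketch for a single
normalized channel (my patches do the equivalent in the fragment
shader, so this is only illustrative):

#include <math.h>

/* Encode: linear light -> sRGB value, as the compositor would apply
 * before handing a linear-gamma buffer to an sRGB display. */
static float
linear_to_srgb(float c)
{
        if (c <= 0.0031308f)
                return 12.92f * c;
        return 1.055f * powf(c, 1.0f / 2.4f) - 0.055f;
}

/* Decode: sRGB value -> linear light, as needed for buffers from
 * clients that do not render with linear gamma before blending them
 * linearly. */
static float
srgb_to_linear(float c)
{
        if (c <= 0.04045f)
                return c / 12.92f;
        return powf((c + 0.055f) / 1.055f, 2.4f);
}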
I'd also like to see support for EGL 1.5 (which includes sRGB
framebuffers) in clients so they
can render in linear gamma without the cost of higher bit depths.
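Roughly, a client using EGL 1.5 would ask for an sRGB-encoded window
surface and keep rendering in linear gamma into an ordinary 8-bit
config. A sketch, assuming the native window and config come from the
usual wl_egl_window/eglChooseConfig setup and omitting error handling:

#include <EGL/egl.h>  /* needs EGL 1.5 headers for EGLAttrib etc. */

/* Create a window surface whose default framebuffer is sRGB-encoded,
 * so the hardware performs the linear -> sRGB encode on write. */
static EGLSurface
create_srgb_surface(EGLDisplay dpy, EGLConfig config, void *native_win)
{
        static const EGLAttrib attribs[] = {
                EGL_GL_COLORSPACE, EGL_GL_COLORSPACE_SRGB,
                EGL_NONE
        };

        /* With desktop GL the client also has to glEnable
         * GL_FRAMEBUFFER_SRGB; with GLES, writes to an sRGB surface
         * are encoded automatically. */
        return eglCreatePlatformWindowSurface(dpy, config, native_win,
                                              attribs);
}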
>
> I recently double-checked with an oscilloscope that all the GPU/driver
> combinations I own (Intel/intel, NVidia/nvidia, NVidia/nouveau,
> AMD/fglrx, AMD/radeon) are well behaved, and they are. But of
> course you can see the staircase with only 8 bits per channel.
>
> One of the scope traces ended up in the StackOverflow answer
> http://stackoverflow.com/a/23030225/524368
Since VGA is a thing of the past, I'm more interested in what the
display does. I wonder if there's a tool to measure this with my
Spyder3, even though it's a moot point after calibration.
>
>> One thing to note is that EGL clients usually will pick the highest
>> color depth by default.
>
> Which is a very sane rationale.
>
>> We'd likely want to prevent this somehow, even though it goes against
>> the EGL spec.
>
> Why? Today we're no longer constrained by lack of graphics memory. The
> only valid argument would be the performance hit caused by increased
> memory bandwidth load. However, you're normally not limited by
> framebuffer fill bandwidth, but by texture fetch bandwidth (and modern
> GPUs' memory controllers are full-duplex capable). And in
> render-to-texture situations you normally explicitly select the desired
> internal format for the textures, as the task at hand demands it.
We are still limited by both graphics memory and bandwidth.
A fullscreen 64 bpp 4K client will use about 128 MB for its buffers.
On a high-DPI variant of my monitor that would be 225 MB. It would not
take many clients to fill my 1 GB of VRAM.
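To spell out the arithmetic: this assumes two buffers per fullscreen
surface, a 3840x2160 "4K" mode, and a 5120x2880 high-DPI variant of a
2560x1440 monitor, which is what the figures above correspond to.

#include <stdio.h>

/* Back-of-the-envelope memory use of a double-buffered fullscreen
 * surface at the given size and bytes per pixel. */
static void
print_surface_mem(const char *name, int width, int height, int bytes_pp)
{
        double mib = 2.0 * width * height * bytes_pp / (1024.0 * 1024.0);
        printf("%s: %.0f MiB\n", name, mib);
}

int
main(void)
{
        print_surface_mem("4K at 64 bpp", 3840, 2160, 8);         /* ~127 MiB */
        print_surface_mem("5120x2880 at 64 bpp", 5120, 2880, 8);  /* ~225 MiB */
        return 0;
}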
Do note that with Wayland, what clients really do is render to a texture,
which then gets sampled by the compositor. So even if the GPU could hide
the framebuffer writes, sampling performance would likely be worse.
Another point is that clients will usually supply 32 bpp data, so the
higher bit depth will probably be wasted.
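In the meantime, a client that wants to avoid the deep configs pretty
much has to filter the eglChooseConfig results itself, since the config
sorting rules prefer larger color buffers. A rough sketch:

#include <EGL/egl.h>

/* Pick a config with exactly 8 bits per channel instead of the deepest
 * one that eglChooseConfig sorts first. */
static EGLBoolean
choose_8bit_config(EGLDisplay dpy, EGLConfig *out)
{
        static const EGLint attribs[] = {
                EGL_SURFACE_TYPE, EGL_WINDOW_BIT,
                EGL_RENDERABLE_TYPE, EGL_OPENGL_ES2_BIT,
                EGL_RED_SIZE, 8,
                EGL_GREEN_SIZE, 8,
                EGL_BLUE_SIZE, 8,
                EGL_NONE
        };
        EGLConfig configs[64];
        EGLint i, count = 0;

        if (!eglChooseConfig(dpy, attribs, configs, 64, &count))
                return EGL_FALSE;

        /* The sorting rules put larger color buffers first, so a
         * 16-bit-per-channel config can be returned even though 8 was
         * requested; take the first exact 8-bit match instead. */
        for (i = 0; i < count; i++) {
                EGLint r, g, b;

                eglGetConfigAttrib(dpy, configs[i], EGL_RED_SIZE, &r);
                eglGetConfigAttrib(dpy, configs[i], EGL_GREEN_SIZE, &g);
                eglGetConfigAttrib(dpy, configs[i], EGL_BLUE_SIZE, &b);
                if (r == 8 && g == 8 && b == 8) {
                        *out = configs[i];
                        return EGL_TRUE;
                }
        }

        return EGL_FALSE;
}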
>
>
> Regards,
>
> Wolfgang