[Pixman] Planar YUV support

Benjamin Otte otte at redhat.com
Thu May 13 07:02:43 PDT 2010

(I accidentally sent this privately to Soeren, here's a forward to the
list.)
On Wed, 2010-05-12 at 22:04 +0200, Soeren Sandmann wrote:
> Hi Benjamin,
> Your patches add YCbCr support at the first stage in the sense that it
> converts each Y sample to a8r8g8b8 using the nearest neighbor Cb and
> Cr samples.
You did look at the current (planar) branch, and not accidentally at the
(old) YUV branch, right? I'm a bit confused because you reference the
PIXMAN_COLOR_SPACE enum from the new branch but refer to a design from
the old branch.

> The new pipeline then looks like this:
>       * Widen to 8 bit components
>       * Extend 
>       * Interpolate between samples according to filter
>       * Transform
>       * Convert to RGB coding
>       * Resample
>       * Combine
>       * Store

In the planar branch, the colorspace conversion is done before
combining, in general_composite_rect() to be exact. So while the branch
still does the interpolation of subsampled images too early, it seems to
otherwise fit your description of how things should look quite well.

> But the PIXMAN_COLOR_SPACE_ARGB_UNMULTIPLIED doesn't fit in here
> because premultiplication is something that has to happen _before_
> interpolation, whereas color decoding needs to happen after. This
> suggests to me that those two things don't belong in the same enum. I
> do think support for unpremultiplied formats is worthwhile, but it
> seems orthogonal to YCrCb support.
I added unmultiplied support for one simple reason really: YCbCr with
alpha channel is unmultiplied. So it seemed rather trivial to support
unmultiplied ARGB, too.

I'm also not sure where interpolation or resampling involves a
non-linear operation that would produce wrong values for unmultiplied
color spaces, but for those few cases, it seems worthwhile to use a
different function that handles them correctly, no?

> In practical terms, the above means YCrCb processing will have to go
> through the bits_image_fetch_transformed() path and that the
> fetch_scanline() and fetch_pixel() function pointers should be
> considered *sample* fetchers that simply widen and complete the
> samples wrt. their existing color coding, but doesn't try to do
> interpolation or color conversion. The x and y coordinates passed to
> them must then always be integers and refer to samples. If you pass
> (1, 1) to one of those fetchers, the result will be the second sample
> on the second row of the component in question.

The current fetch_raw implementations for subsampled formats do the (I
guess) most common operation of fetching the NEAREST sample, so you can
use them fine for integer translations.

That said, my idea was to have a replacement for
bits_image_fetch_transformed() that takes care of subsampled formats,
but I was never quite sure how best to implement it, which is why I
didn't do it. I like your idea of a fetch_component function, though I'm
not quite sure where to do the image => component conversion for the
x/y coordinates.

> Maybe your idea of eliminating the get_pixel() altogether and just use
> fetch_scanline() with a length of one could make this simpler.
Considering there was no measurable performance impact, I've thought of
it as a great idea since day one ;)