[Pixman] [PATCH] test: Add cover-test

Ben Avison bavison at riscosopen.org
Wed Aug 26 13:26:57 PDT 2015


On Thu, 04 Jun 2015 14:00:30 +0100, Pekka Paalanen <ppaalanen at gmail.com> wrote:

> On Tue, 26 May 2015 23:58:30 +0100
> Ben Avison <bavison at riscosopen.org> wrote:
>
>> +#define SQRT_0POINT5_0POINT32_FIXED (0xB504F334u)
>
> Ok, I checked. Mathematically it is equivalent, but maybe a slightly
> more obvious name would be INV_SQRT_2_...?

Fair enough, renamed (will repost when I've resolved everything else
below).

>> +#define MAX_INC (SQRT_0POINT5_0POINT32_FIXED >> (31 - 16 - LOG2_MAX_FACTOR))
>
> This evaluates to 11.314 decimal I think. It does match 2^3.5 with an
> error in the order of 3e-6, is what Octave tells me. log2(abs(error)) =
> -18.321, so if I'm thinking right, it means around 18 fractional bits
> being correct, which means pixman_fixed_t should be accurate to the
> last bit? I suppose it should be obvious from the integer math, but I
> just checked with Octave.

It's basically just the largest random number that can be generated by
random_scale_factor(), i.e. what you get if you start with the largest
possible value and shift it right by the smallest number of places. I
hope that makes it clearer where it comes from - I'll update the comment.

> MAX_INC is a pixman_fixed_t value, right? Could use a comment for that,
> or making it a static const variable.

Yes. I've a feeling some compilers might force a load and runtime
calculation of expressions if you use static const, so I'll stick a cast
in the definition (being careful because pixman_fixed_t is signed but we
need to use an unsigned shift to generate the number).
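
For illustration, something along these lines (constant name as in the
current patch; the rename discussed above is still to come):

    /* Sketch of the updated definition: the largest value
     * random_scale_factor() can return, i.e. the 0.32 constant shifted
     * right (as an unsigned value) by the minimum number of places, then
     * cast to the signed pixman_fixed_t type. */
    #define MAX_INC ((pixman_fixed_t) \
        (SQRT_0POINT5_0POINT32_FIXED >> (31 - 16 - LOG2_MAX_FACTOR)))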

> I think it would be valuable to explain these high level goals in a
> comment at the top, even when they are in the commit message.

I've duplicated the commit message in a comment.

>> +static pixman_fixed_t
>> +random_scale_factor(void)
[...]
> Out of curiosity and because it's hard to reason about these things, I
> did an experiment, and generated 10 million samples with this function.
> The resulting distribution in log2 space is here:
>
> https://people.collabora.com/~pq/pixman/random_scale_factor.png
>
> I think it is close to uniform enough, right? And covers the expected
> range.

Nice diagram. For these purposes, I'd say it was close enough to uniform;
no particular value is more than 2x more or less likely than any other.
Without the shift step in the random number generator, what you'd have
instead would be a ramp with one end being something like 128x more
likely than the other. You'd also get effects such as the probability of
a plot being an enlargement in both X and Y coming out at only 1 in 64,
even though that's quite a likely outcome in real life; this way it's
1 in 4.
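
To sketch the idea (not the exact patch code - MIN_SHIFT and MAX_SHIFT
here are just placeholder names for whatever bounds give the desired
2^-LOG2_MAX_FACTOR .. 2^LOG2_MAX_FACTOR range):

    /* Mantissa uniform in [0.5, 1) of the 0.32 range, then a uniformly
     * chosen right shift; the result is roughly uniform in log2 space
     * rather than in linear space. */
    uint32_t mantissa = prng_rand () | 0x80000000u;
    int      shift    = MIN_SHIFT +
                        prng_rand_n (MAX_SHIFT - MIN_SHIFT + 1);
    pixman_fixed_t scale = (pixman_fixed_t) (mantissa >> shift);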

>> +static pixman_fixed_t
>> +calc_translate (int            dst_size,
>> +                int            src_size,
>> +                pixman_fixed_t scale,
>> +                pixman_bool_t  low_align,
>> +                pixman_bool_t  bilinear)
>> +{
>> +    pixman_fixed_t ref_src, ref_dst, scaled_dst;
>> +
>> +    if (low_align)
>> +    {
>> +        ref_src = bilinear ? pixman_fixed_1 / 2 : pixman_fixed_e;
>
> Why the bilinear case is not pixman_fixed_1 / 2 + pixman_fixed_e?

At this point we're determining the translations required so that the
first destination pixel, when transformed into source space, has exactly
the lowest value it can have without straying into the repeat zones.

With bilinear scaling, a coordinate of pixman_fixed_1 / 2 has pixel value
determined only by source pixel 0. A coordinate of pixman_fixed_1 * 3 / 2
has pixel value determined only by source pixel 1. Any value in between
has to be assumed to contain elements from both pixels 0 and 1.

With nearest scaling, any coordinate between pixman_fixed_e and
pixman_fixed_1 inclusive is determined by source pixel 0. Yes, this is
slightly counter-intuitive, but it's the way Pixman has always done it.
See for example in bits_image_fetch_nearest_affine() in
pixman-fast-path.c where we have the lines

	x0 = pixman_fixed_to_int (x - pixman_fixed_e);
	y0 = pixman_fixed_to_int (y - pixman_fixed_e);

There is no equivalent offset by pixman_fixed_e for bilinear scaling -
see for example fast_bilinear_cover_iter_init(), also in
pixman-fast-path.c:

      info->x = v.vector[0] - pixman_fixed_1 / 2;
      info->y = v.vector[1] - pixman_fixed_1 / 2;

All the other implementations follow suit (they have to, or they wouldn't
pass the "make check" suite).
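
So, purely as an illustration, for a source-space coordinate x (16.16
fixed-point) the pixels sampled work out as:

    pixman_fixed_t x = pixman_fixed_1 / 2;  /* example coordinate */

    int nearest_pixel  = pixman_fixed_to_int (x - pixman_fixed_e);
    int bilinear_left  = pixman_fixed_to_int (x - pixman_fixed_1 / 2);
    int bilinear_right = bilinear_left + 1;

    /* Nearest: any x in (0, 1] gives pixel 0.  Bilinear: x = 0.5 gives
     * pixel 0 with full weight, x = 1.5 gives pixel 1 with full weight,
     * and anything in between blends the two neighbours. */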

>> +    scaled_dst = ((uint64_t) ref_dst * scale + pixman_fixed_1 / 2) / pixman_fixed_1;
>
> ref_dst is 16.16 fp, scale is 16.16 fp, so the product is 32.32 fp, but
> adding pixman_fixed_1/2 to that actually means +0.5/65536, not +0.5.
> And the final division then just scales it back to .16 fp.
>
> Basically it's adding 0.5 * pixman_fixed_e... for rounding to nearest
> representable value?

Yes, the transform coefficients, source and destination coordinates are
all 16.16 fixed point, and round-to-nearest is how Pixman handles this in
all sorts of places, for example in pixman_transform_point_31_16_3d().
It's worth noting that since we increment the destination coordinate by
0x10000 for each output pixel, the change in the transformed destination
coordinate is expressable exactly as a 16.16 fixed-point number, and so
we don't have to worry about the rounding being in a different direction
from pixel to pixel.
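
Written as a helper (a sketch only, assuming non-negative operands as
used here):

    /* Multiply two 16.16 fixed-point values and round the 32.32 product
     * to the nearest representable 16.16 value: pixman_fixed_1 / 2 is
     * half a 16.16 ULP in the product's 32.32 representation. */
    static pixman_fixed_t
    fixed_mul_round (pixman_fixed_t a, pixman_fixed_t b)
    {
        return (pixman_fixed_t)
            (((uint64_t) a * b + pixman_fixed_1 / 2) / pixman_fixed_1);
    }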

>> +    /* What happens when we are close to the edge of the first interpolation step? */
>> +    if (prng_rand_n (2))
>> +        offset += (pixman_fixed_1 >> BILINEAR_INTERPOLATION_BITS) - 16;
>
> I had to draw this on paper before I finally got it. Right. Assuming I
> guessed the meaning of INTERPOLATION_BITS right (number of most
> significant fractional bits used in computations).

Yes. It's as though you zero any less significant fractional bits before
calculating the weighting between source pixels (or the other way of
looking at it is that there are at most 2^BILINEAR_INTERPOLATION_BITS
steps between neighbouring pixels). If you are accumulating the fixed-
point coordinates from one pixel to the next, you do need to keep track
of the least significant bits though.
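
Roughly speaking (a sketch of the idea rather than any particular fast
path, with ux being the source coordinate already offset by
pixman_fixed_1 / 2):

    /* Only the top BILINEAR_INTERPOLATION_BITS fractional bits take
     * part in the weighting, so there are at most
     * 2^BILINEAR_INTERPOLATION_BITS interpolation steps between
     * neighbouring source pixels. */
    int dist = (ux >> (16 - BILINEAR_INTERPOLATION_BITS)) &
               ((1 << BILINEAR_INTERPOLATION_BITS) - 1);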

>> +static void
>> +check_transform (pixman_image_t     *dst_img,
>> +                 pixman_image_t     *src_img,
>> +                 pixman_transform_t *transform,
>> +                 pixman_bool_t       bilinear)
>> +{
[...]
>> +    if (bilinear)
>> +    {
>> +        assert (v1.vector[0] >= pixman_fixed_1 / 2);
>> +        assert (v1.vector[1] >= pixman_fixed_1 / 2);
>
> Does this not need to be pixman_fixed_1 / 2 + pixman_fixed_e?

Same thing as before, no there isn't an offset by pixman_fixed_e in the
bilinear case.

>> +    else
>> +    {
>> +        assert (v1.vector[0] >= pixman_fixed_e);
>> +        assert (v1.vector[1] >= pixman_fixed_e);
>
> Ensure src x,y are in the top-left corner pixel area. Exactly 0 would
> hit the previous pixel outside of the image, so that's why need to have
> greater or equal to eps. Did I get that right?

Yes - due to the way nearest scaling is defined, coordinate 0.0 is
actually off the left (or top) of the source image.

>> +#ifdef USE_OPENMP
>> +#pragma omp threadprivate (src_imgs)
>> +#pragma omp threadprivate (mask_bits_img)
>> +#pragma omp threadprivate (fence_images_created)
>> +#endif
>
> This is the only reference to OpenMP here, but I see that
> fuzzer_test_main() is the one having the omp parallel section, which
> then calls here.

Correct. I was getting some very odd failures until I worked that out. :-)

> Now that I am looking at image_endian_swap() implementation in
> utils.c...
>
> <offtopic>
> - Does endianess really affect the order of bits in a byte in Pixman
>   formats? How can that be meaningful? And it's only done for 1 and 4
>   bits pp formats?
>
> egads, the definition must really change meaning with endianess:
> http://cgit.freedesktop.org/pixman/tree/pixman/pixman-access.c#n49
> </offtopic>

I think that's mainly just because Pixman doesn't define any 2bpp pixel
formats!

It actually makes a lot of sense, if you're generally operating in
little-endian mode, to pack pixels into bytes in little-endian order as
well, otherwise picking pixels out of words becomes rather tricky (and
being forced to do all pixel manipulation using byte quantities is
inherently inefficient).
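
For instance (sketch only), pulling the i-th 4bpp pixel out of a byte
differs between the two packings:

    uint8_t le_pixel = (byte >> ((i & 1) * 4)) & 0xf;        /* LSB-first */
    uint8_t be_pixel = (byte >> ((1 - (i & 1)) * 4)) & 0xf;  /* MSB-first */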

I remember a project from the 1990s where the software I was working on
was designed for video hardware that only worked fully little-endian.
When we wanted to port it to another system that only offered
mixed-endian (big-endian pixels within bytes, little-endian bytes within
words) it was a major headache! Presumably that layout is a legacy of
early IBM PCs bolting a big-endian graphics card onto a little-endian
processor, as I think it was the standard way Windows bitmaps always
worked. Classic Macs, by contrast, were I believe fully big-endian
(big-endian pixels within bytes, big-endian bytes within words), which is
at least self-consistent.

> - Looks like it processes the whole stride, not width * bytespp, which
>   means this is going to explode on fenced images in the padding,
>   right?

Yes, looks like it is. Good spot. That probably does belong in a separate
patch; are you volunteering?

> The mask image is likely a different size (4096 wide, not 1024 like
> src), so the transformation for mask is quite different from src.
> That's just fine, right?

I can't see that being a problem. Many of the existing Pixman tests have
the size, scaling factors and/or filtering mode for masks set differently
from source images, even though I'm a bit sceptical about how often that
would happen in the real world.

> Oh! Because image sizes depend on the value returned by getpagesize(),
> doesn't this mean this test can only pass on 4096 B page size?
> Should this test be skipped on other systems? Maybe with a plea to add
> the crc from that system for their page size in main()?
>
> (Return 77 from main() is the signal for SKIP.)

<thinks> You're right, there is a problem there because if
fence_image_create_bits() makes the image a size other than what we're
expecting, most of the pixels set by fence_image_randmemset() will have
different values.

4K is a very common page size at the present time, so it's tempting to
bail out if it differs. The alternative I suppose would be to
prng_randmemset() into a temporary non-fenced image of size MIN_SRC_WIDTH
x SRC_HEIGHT and then blit it twice, once left-aligned and once right-
aligned, into the fenced image. I think that should work...
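
Something along these lines, perhaps (sketch only - MIN_SRC_WIDTH,
SRC_HEIGHT, fenced_img and fenced_width are placeholders, and I've
assumed an a8r8g8b8 source for the example):

    /* Fill a fixed-size temporary image with pseudo-random data... */
    pixman_image_t *tmp = pixman_image_create_bits (PIXMAN_a8r8g8b8,
                                                    MIN_SRC_WIDTH,
                                                    SRC_HEIGHT, NULL, 0);
    prng_randmemset (pixman_image_get_data (tmp),
                     pixman_image_get_stride (tmp) * SRC_HEIGHT, 0);

    /* ...then blit it to both ends of the fenced image, so that the
     * pixels actually exercised by the test don't depend on the value
     * returned by getpagesize(). */
    pixman_image_composite32 (PIXMAN_OP_SRC, tmp, NULL, fenced_img,
                              0, 0, 0, 0, 0, 0,
                              MIN_SRC_WIDTH, SRC_HEIGHT);
    pixman_image_composite32 (PIXMAN_OP_SRC, tmp, NULL, fenced_img,
                              0, 0, 0, 0,
                              fenced_width - MIN_SRC_WIDTH, 0,
                              MIN_SRC_WIDTH, SRC_HEIGHT);
    pixman_image_unref (tmp);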

>> +    if (verbose)
>> +    {
>> +        printf ("op=%s\n", operator_name (op));
>> +        printf ("src_fmt=%s, dst_fmt=%s, mask_fmt=%s\n",
>> +                format_name (src_fmt), format_name (dst_fmt),
>> +                format_name (mask_fmt));
>
> Printing the image widths might be good, it would show if they ever
> change.

Well, the destination width is definitely a constant, and we always plot
the full width of the destination image, which for a given scale factor
is therefore a constant number of source pixels (which will be either
left or right aligned within the image). If the same pixels are always
blitted to the left and right of the source image, as I suggested above,
then the actual width of the fenced images *should* be inconsequential.

>> +int
>> +main (int argc, const char *argv[])
>> +{
>> +    exact = getenv ("EXACT") != NULL;
>
> Would be nice to print a message when EXACT is defined, so it's clear
> it is there.

Fair enough.

>> +
>> +    return fuzzer_test_main ("cover", 2000000,
>> +                             exact ? 0xAEB36C31 : 0xF972EE43,
>> +                             test_cover, argc, argv);

Note to self: we probably want alternate checksums, at least for when
BILINEAR_INTERPOLATION_BITS == 4.

>> * We need to test alignment with both the extreme minimum and extreme
>>    maximum source coordinates permitted. Given that the scale factors
>>    can't be assumed to be nice numbers, I think we need to test them
>>    independently, using the translation offsets in the transformation
>>    matrix to ensure they align as desired.
>
> To do?

No - that's covered by supporting both left- and right-aligning (and top-
and bottom-) in calc_translate(). Translation is done entirely using the
transformation matrix because the src_x, src_y, mask_x and mask_y
parameters to pixman_image_composite() only allow for integer positions
in source/mask image coordinates and we need 16.16 fixed-point precision.
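
In other words the translation ends up in the third column of the
matrix, something like this (sketch only; scale_x, scale_y, tx and ty
stand in for whatever random_scale_factor() and calc_translate()
produced):

    pixman_transform_t t;

    pixman_transform_init_scale (&t, scale_x, scale_y);
    t.matrix[0][2] = tx;   /* 16.16 fixed-point translation in x */
    t.matrix[1][2] = ty;   /* 16.16 fixed-point translation in y */
    pixman_image_set_transform (src_img, &t);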

>> * Possibly test "near miss" cases as well, though these would probably
>>    want to be weighted so that positions at or closer to the limit are
>>    checked most.
>
> I think it does this with the fuzz-factors (random_offset).

Yes, that's what the fuzz factors were for.

Ben

