[Pixman] [PATCH] test: Add new fuzz tester targeting solid images

Ben Avison bavison at riscosopen.org
Thu May 7 11:31:36 PDT 2015


On Wed, 06 May 2015 13:25:43 +0100, Pekka Paalanen <ppaalanen at gmail.com> wrote:
> this is written based on blitters-test.c, right? That would have been
> interesting to mention, so new reviewers like me can compare with
> existing code you mirrored.

Well, it's a new addition to the family of fuzz testers - the others are
  affine-test
  blitters-test
  composite-traps-test
  glyph-test
  matrix-test
  region-contains-test
  rotate-test
  scaling-test

I didn't consciously set out to borrow from blitters-test in particular,
but out of that set it tests the widest range of Porter-Duff operators,
and (jointly with glyph-test) the widest range of pixel formats, so
looking back, that's obviously the one I lifted the operator and format
arrays from.

Aside from the immediate bug that prompted the creation of solid-test, I
wanted to increase the proportion of solid source and mask images over
those tested elsewhere, because they represent a sizeable proportion of
the fast paths and therefore a larger "attack surface" for bugs than the
existing tests cover. Although blitters-test already does a fair amount
of testing in this area, the proportions are low (1/8 for source images)
and it
doesn't exercise pixman_image_create_solid_fill images at all. The only
two fuzz testers that do use pixman_image_create_solid_fill are
composite-traps-test and glyph-test: they don't use the full range of
Porter-Duff operators, and they don't use pixman_image_create_solid_fill
for mask images at all.
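
To make the distinction concrete, here's a minimal sketch (not taken from
the patch; the colour values are arbitrary) of how the two kinds of solid
source are constructed:

  #include <pixman.h>

  static void
  make_solid_sources (pixman_image_t **solid,
                      pixman_image_t **one_by_one,
                      uint32_t        *pixel)
  {
      /* A "true" solid image: no pixel storage, just a colour. */
      pixman_color_t red = { 0xffff, 0x0000, 0x0000, 0xffff };
      *solid = pixman_image_create_solid_fill (&red);

      /* A 1x1 bits image with normal repeat also acts as a solid
       * source, but pixman has to deduce that from the pixel contents
       * - and the contents can be changed later.  The pixel buffer
       * must outlive the image, hence the caller supplies it. */
      *pixel = 0xffff0000;   /* a8r8g8b8: opaque red */
      *one_by_one = pixman_image_create_bits (PIXMAN_a8r8g8b8, 1, 1,
                                              pixel, sizeof *pixel);
      pixman_image_set_repeat (*one_by_one, PIXMAN_REPEAT_NORMAL);
  }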

I think a number of your comments arise from the expectation that the
test was only meant to cover what happens when you change the contents of
a 1x1 bits image - hopefully knowing that my aim is to improve testing of
both types of solid image in general helps explain things.

> This test hits the case of fully transparent/opaque -> arbitrary color
> change, but very rarely -> fully transparent/opaque, never starting
> with an arbitrary color. I understand the two latter cases are not
> really testable: setting the fast path flags for an arbitrary color
> when you could use fully transparent/opaque is not detectable via
> correctness tests. Are there any cases where more symmetric test case
> selection would be useful? If not, nevermind.

My thinking was that as long as we ensured that information about a solid
colour wasn't being cached, then we'd guard against my bug or any similar
ones being reintroduced. The two cases which are most likely to be
treated differently from the others, depending upon the operator, are
fully-transparent and fully-opaque, so as long as each of those featured
reasonably frequently on at least one side of the before/after divide, I
think that should detect any caching.
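
As a rough sketch of that before/after idea (illustrative only - the
operator, format, sizes and pixel values are placeholders, and the real
test of course also verifies the resulting destination; this just shows
the write-between-composites pattern):

  static void
  composite_before_and_after (pixman_op_t     op,
                              pixman_image_t *dst,
                              uint32_t        initial,
                              uint32_t        final)
  {
      /* 1x1 bits source, starting at one of the extreme pixel values. */
      uint32_t pixel = initial;
      pixman_image_t *src =
          pixman_image_create_bits (PIXMAN_a8r8g8b8, 1, 1,
                                    &pixel, sizeof pixel);
      pixman_image_set_repeat (src, PIXMAN_REPEAT_NORMAL);

      /* First composite: pixman may choose a fast path based on what it
       * finds in the (effectively solid) source. */
      pixman_image_composite32 (op, src, NULL, dst,
                                0, 0, 0, 0, 0, 0, 16, 16);

      /* Change the pixel in place and composite again; if anything
       * derived from the first colour had been cached, the second
       * result would no longer match. */
      pixel = final;
      pixman_image_composite32 (op, src, NULL, dst,
                                0, 0, 0, 0, 0, 0, 16, 16);

      pixman_image_unref (src);
  }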

> Here's how I understand the code.
>
> Source image:
> - 50% chance for multi-pixel bits image
> - 25% chance for 1x1 bits image
> - 25% chance for solid image
>
> Mask image:
> - 50% chance to not be used
> - 12.5% chance for multi-pixel bits image without CA
> - 6.25% chance for 1x1 bits image without CA
> - 6.25% chance for solid image without CA
> - 12.5% chance for multi-pixel bits image with CA
> - 6.25% chance for 1x1 bits image with CA
> - 6.25% chance for solid image with CA
>
> Destination image:
> - always multi-pixel
>
> Both source and mask, when they are 1x1 bits images, have:
> - 50% chance to start fully opaque white
> - 50% chance to start fully transparent

Subtle distinction here: because the 1x1 image may use any pixel format,
the choices are between 0 and 2^bpp-1. Depending upon the format, there
may not be any pixel value which corresponds to transparent, and the
maximum value may not be opaque white (because the image may use a
palette).
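
In code terms the initial value is chosen along these lines (a sketch,
assuming the test suite's prng_rand_n() helper; the function name is
mine):

  static uint32_t
  initial_pixel_value (pixman_format_code_t fmt)
  {
      uint32_t bpp = PIXMAN_FORMAT_BPP (fmt);
      /* All ones in the low 'bpp' bits, avoiding an undefined shift
       * by 32 for the 32 bpp formats. */
      uint32_t max = bpp >= 32 ? 0xffffffffu : (1u << bpp) - 1;
      /* Either extreme; what that means visually depends upon the
       * format (and upon its palette, if it has one). */
      return prng_rand_n (2) ? max : 0;
  }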

> and then switch to arbitrary random color for the actual test.
>
> The problems this test attempts to reveal happen only with 1x1 bits
> images, so shouldn't the chances for 1x1 bits images be higher?

Well, 34% of the tests feature at least one 1x1 bits image. If you also
count the solid images - that is, if you measure the number of tests which
would select fast paths featuring at least one solid component - that
figure rises to 63%. That includes 13% where both the source and mask are
solid, which is silly (it'd make more sense for the caller to multiply the
constant colours together and use an operation with a solid source and no
mask image). So by increasing the ratios you'd cut the number of tests
with multi-pixel source and mask (which are redundant with blitters-test),
but you'd also raise the number of tests of unrealistic combinations.
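
(For the record, the arithmetic behind those figures, using the
probabilities from your breakdown - just a quick sanity check, not part
of the test:)

  #include <stdio.h>

  int
  main (void)
  {
      double src_1x1   = 0.25,       src_solid  = 0.25 + 0.25;
      double mask_1x1  = 2 * 0.0625, mask_solid = 4 * 0.0625;

      printf ("at least one 1x1 bits image:  %.1f%%\n",
              100.0 * (1.0 - (1.0 - src_1x1) * (1.0 - mask_1x1)));
      printf ("at least one solid component: %.1f%%\n",
              100.0 * (1.0 - (1.0 - src_solid) * (1.0 - mask_solid)));
      printf ("source and mask both solid:   %.1f%%\n",
              100.0 * src_solid * mask_solid);
      return 0;
  }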

There's a danger that a more sophisticated scheme for choosing
combinations of images and operators, one that better reflected the share
of the different fast paths, would just obfuscate the code, and the same
coverage could be achieved more simply by increasing the number of test
iterations - so I didn't feel it was worth spending too much brainpower
on it.

Perhaps you'd be happier with a three-way split between op_n_bits,
op_n_bits_bits and op_bits_n_bits as in the version I'm just about to
post? That guarantees 100% of tests involve a solid image, and 67% of
tests involve at least one 1x1 bits image.

> Multi-pixel source or mask images are never written after the initial
> use, so aren't those useless in this test?

If you only use single-pixel images, then you're limiting the cases
you're testing to a subset of fast paths such as over_n_8888. If a fast
path for (say) add_8888_n_8888 was caching information about the solid
image, we'd want to pick that up too. The chance of anyone writing code
that caches information about the pixel values in a multi-pixel image, on
the other hand, feels remote, so I suspect the CPU cycles would be better
spent on doing more iterations than on scrambling the pixel array.

>> +static pixman_image_t *
>> +create_image (const pixman_format_code_t *allowed_formats,
>> +              uint32_t                   *buffer,
>> +              pixman_format_code_t       *used_fmt)
>
> The 'buffer' argument here is kind of strange, because the caller does
> not know what the image size or format will be. But, it does work here,
> with the assumption that the size will be at most WIDTH * HEIGHT *
> uint32_t and the caller knows to allocate with that.

Pixman typically treats the pixel buffer as uint32_t * internally
irrespective of the colour depth - see struct bits_image, for example. It
makes some sense considering that there is a requirement that the row
stride is always an integer number of 32-bit words.

Since we have specified the list of possible pixel formats, we know that
none of them exceeds 32 bpp, so that's OK. By using buffers of size
WIDTH * HEIGHT * sizeof(uint32_t), we save ourselves the overhead of a
pair of heap operations per image per iteration.
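
In other words, something along these lines (a sketch of the scheme
rather than the patch itself; the WIDTH and HEIGHT values are
placeholders):

  #include <pixman.h>

  #define WIDTH  32
  #define HEIGHT 32

  /* One uint32_t per pixel is enough for every allowed format (none
   * exceeds 32 bpp), so the same statically-allocated buffer can be
   * reused for every iteration with no malloc/free. */
  static uint32_t dst_bits[WIDTH * HEIGHT];

  static pixman_image_t *
  create_dst_image (pixman_format_code_t fmt)
  {
      /* Worst-case stride: WIDTH 32-bit words per row, which also
       * satisfies the rule that the stride must be a whole number of
       * 32-bit words. */
      return pixman_image_create_bits (fmt, WIDTH, HEIGHT, dst_bits,
                                       WIDTH * sizeof (uint32_t));
  }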

Ben

