[Pixman] [PATCH] test: Add cover-test
siarhei.siamashka at gmail.com
Thu Sep 3 09:14:05 PDT 2015
On Thu, 03 Sep 2015 13:59:07 +0100
"Ben Avison" <bavison at riscosopen.org> wrote:
> On Thu, 03 Sep 2015 11:13:25 +0100, Pekka Paalanen <ppaalanen at gmail.com> wrote:
> > Unless, you go and change the implementation meaning of Pixman's cover
> > flags which, reading your other reply, is what you are doing. I'm
> > finally starting to see it.
> Glad the penny has dropped! The issue has been muddied somewhat by the
> 8*epsilon border. Perhaps I should have reposted the series as soon as it
> became clear what the history was in that respect.
Yes, sure. I'm also waiting for an updated patch set. Because dealing
with the 8*epsilon border separately is a rather major change,
affecting the way the set is split into patches and the wording
of commit messages. We need a new patch set before starting a new
review round. Sorry for not communicating this explicitly enough.
> (from your other post on this thread)
> > This is *not* for protecting against out-of-bounds access, but this is
> > allowing for the extreme cases where a cover path is still
> > *theoretically* possible. (Ignoring what the COVER flags actually mean
> > in Pixman implementation.)
> Yes. Pixman as a whole should be able to handle any coordinates without
> bounds violations. The COVER_CLIP flags just determine the threshold
> coordinates at which it chooses to use a cover path rather than the
> appropriate repeat path (PAD, NONE, NORMAL or REFLECT); while you would
> get the correct mathematical result even if you always use the repeat path,
> the cover path is very likely to be faster because it will be implemented
> in a way that assumes it doesn't need to do bounds checking (because the
> COVER_CLIP calculation has effectively done the bounds check for it).
For BILINEAR scaling, the NONE and PAD repeats generally do not need
bounds checking either. So they should be as fast as COVER, except for
having a bit higher setup overhead.
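To illustrate why, here is a tiny Python sketch (illustrative only, not
pixman code): a PAD repeat can be implemented by clamping the source
coordinate to the image edges, so no fetch ever goes out of bounds and
the inner loop needs no per-pixel bounds branching.

```python
# Hypothetical sketch of PAD repeat as coordinate clamping (names are
# illustrative, not pixman's). Because every index is clamped into the
# valid range, the fetch itself is unconditional -- which is why PAD
# (and similarly NONE) can in principle run about as fast as a cover
# path, paying only some extra setup cost.

def pad_clamp(x, width):
    """Clamp a source pixel index into [0, width - 1] (PAD repeat)."""
    return max(0, min(x, width - 1))

src = [10, 20, 30, 40]
# Indices left of the image resolve to the first pixel, right of it to
# the last pixel.
print(src[pad_clamp(-3, len(src))])  # -> 10
print(src[pad_clamp(7, len(src))])   # -> 40
```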
And in the special edge case where the bilinear weight coefficients
are zero for the pixels fetched from outside the source image
(your new interpretation of the COVER flag), we can also safely reduce
NORMAL and REFLECT repeats to NONE or PAD.
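To make the zero-weight edge case concrete, here is a small Python
sketch (illustrative only, not pixman code) of one-dimensional bilinear
interpolation: when the sample position lands exactly on the last
pixel, the out-of-bounds neighbour gets weight zero, so whatever value
would have been fetched there cannot affect the result.

```python
# Hypothetical sketch: bilinear interpolation along one axis. When the
# fractional part of the sample position is 0, only the left pixel
# contributes; the substituted out-of-bounds value (999 here) is
# multiplied by zero and drops out.

def bilinear_1d(pixels, x):
    """Interpolate between floor(x) and floor(x) + 1."""
    x0 = int(x)        # left pixel index
    frac = x - x0      # weight of the right pixel
    left = pixels[x0]
    # The right neighbour may lie outside the image; use an arbitrary
    # stand-in value to show it is ignored when frac == 0.
    right = pixels[x0 + 1] if x0 + 1 < len(pixels) else 999
    return left * (1 - frac) + right * frac

src = [10, 20, 30, 40]
print(bilinear_1d(src, 1.5))   # halfway between 20 and 30 -> 25.0
print(bilinear_1d(src, 3.0))   # exactly on the last pixel -> 40.0
```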
> Previously, the flag definition was overly cautious, resulting in some
> cases which could have been handled by cover paths instead being sent to
> repeat paths. I first encountered this myself with lowlevel-blt-bench,
> which routinely used repeat paths for its bilinear tests
Oh, this was actually intentional. The bilinear scaling benchmark in
the lowlevel-blt-bench program deliberately stresses the use case
where a few pixels are fetched from just outside the source image.
This is quite typical for bilinear scaling, and it is how the
benchmark is designed to work.
If this whole ordeal with the new COVER flag interpretation was just
a way to make it run fast in lowlevel-blt-bench, then the new
interpretation may not make much practical sense. And why even
stop there? If we can prove that the scaling factor used by
lowlevel-blt-bench in fact behaves exactly like an identity transform
for certain composite operations, then it probably can be optimized
as well.
Well, maybe we need some improvements for the scaling benchmarks in
lowlevel-blt-bench. We could measure performance numbers for the
NONE, PAD, NORMAL, REFLECT and COVER cases and report them all. The
scaling factor needs to be adjusted so that we hit each of these
particular code paths. Once we have the performance numbers for the
different repeat cases, we can easily compare them. Like I said, NONE
and PAD performance should normally be roughly the same as COVER. Not
all the fast paths and iterators can handle NONE and PAD repeats
efficiently yet (SSSE3 is a good example), but we can and should
eventually get there.
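As a sketch of how the scaling factor determines which path gets hit,
assuming pixman's usual half-pixel-centre sampling model (the helper
name and the classification are made up for illustration, not taken
from lowlevel-blt-bench):

```python
# Hypothetical helper: map the first and last destination pixel centres
# back into source space and check whether every bilinear sample stays
# inside the source image. Under the relaxed interpretation discussed
# in this thread, a sample landing exactly on the last pixel is fine,
# because the out-of-bounds neighbour gets zero weight; so the cover
# condition is simply first >= 0 and last <= src_w - 1.

def classify_scaling(src_w, dst_w):
    scale = src_w / dst_w
    first = (0 + 0.5) * scale - 0.5          # first sample position
    last = (dst_w - 1 + 0.5) * scale - 0.5   # last sample position
    if first >= 0 and last <= src_w - 1:
        return "COVER"
    return "REPEAT"

print(classify_scaling(1000, 500))   # 2x downscale: samples 0.5..998.5
print(classify_scaling(500, 1000))   # 2x upscale: first sample is -0.25
```

Adjusting src_w/dst_w until the classification flips is essentially
what tuning the benchmark's scaling factor to hit a particular code
path amounts to.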
> but cover paths for its nearest tests. However, we have to be careful
> when adjusting the flag calculations, as the cases where bounds violations
> occur may only be 1*epsilon wide, and so would have a high chance of not
> being detected by the previous test suite in a reasonable length of time
> - hence the need for a new test program.
For scaling, we previously had the 'scaling-test', 'affine-test' and
'scaling-crash-test' programs, which rely on the public pixman API,
plus the 'scaling-helpers-test' program as a unit test for the
internal 'bilinear_pad_repeat_get_scanline_bounds()' function, which
is also very important.
In the case of NEAREST scaling, potential reads outside of the
source image can usually be spotted easily, because the offending
pixel value contributes directly to the destination picture. This is
not completely reliable, but still reasonably effective in practice.
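Here is a toy Python illustration (not pixman code) of why NEAREST
out-of-bounds reads tend to be visible: a fetched pixel contributes
its full value to the destination, so a canary value placed just past
the end of the source row shows up verbatim in the output if an
off-by-one read occurs.

```python
# Hypothetical sketch: a bounds-check-free fetch, mimicking a fast path
# that trusts its caller. With NEAREST filtering the fetched value is
# copied as-is, so a stray read exposes the canary directly.

CANARY = 0xDEADBEEF

def nearest_fetch(padded_row, x):
    # No bounds check; relies on the caller having validated x.
    return padded_row[x]

row = [1, 2, 3, 4] + [CANARY]  # 4 real pixels plus one canary
print(nearest_fetch(row, 3))   # last valid pixel -> 4
print(nearest_fetch(row, 4))   # off-by-one read exposes the canary
```

With BILINEAR filtering, by contrast, a stray pixel is multiplied by a
small (possibly zero) weight, so the error can be arbitrarily subtle,
which is what makes a dedicated test like 'cover-test' valuable.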
But in the case of BILINEAR scaling, the new 'cover-test' is indeed
very useful and represents a nice addition to the test suite. Thanks
for doing this work.