[Pixman] [PATCH/RFC] Use OpenMP for bilinear scaled fast paths

Siarhei Siamashka siarhei.siamashka at gmail.com
Tue Jun 26 22:00:13 PDT 2012


On Wed, Jun 27, 2012 at 4:53 AM, Søren Sandmann <sandmann at cs.au.dk> wrote:
> Søren Sandmann <sandmann at cs.au.dk> writes:
>
>> The main concern from me is making sure that it doesn't cause issues in
>> the X server, which is known to do wacky things with signals and
>> possibly threads. But the answer to that is to just put it in and get it
>> tested.
>
> In some limited testing of this patch, I found that:
>
> - It did indeed cause crashes in the input system with the X server that
>  was in Fedora 14. I think these are known bugs that have been fixed in
>  newer X servers. (Should we care whether we trigger bugs in older X
>  servers?)
>
> - With the X server in Fedora 17 it does not cause crashes.
>
> - When I go to
>
>    http://ie.microsoft.com/testdrive/Performance/FishIETank/
>
>  the X server will max out 3.5 cores and firefox will use the remaining
>  half core, but judging from looking at the fish and the page's FPS
>  meter, the performance isn't actually better.
>
>  Profiling shows that 50% to 75% of the time is spent in a function in
>  libgomp.so called something like gomp_wait_for_barrier().

A quick search for gomp_wait_for_barrier references on the Internet
suggests that OMP_WAIT_POLICY [1] might not be set to PASSIVE, so the
threads which finish their job before the others just keep spinning.
I'm also forcing static scheduling via the "schedule" clause, which may
contribute to this problem (I thought that dynamic scheduling might be
a bad idea and cause higher overhead for smaller images). And there is
an "if" clause in the omp pragma, which can be used to avoid
multi-threaded processing in the cases where it performs poorly (very
small images); see the sketch below. This stuff may need a lot of
tuning to ensure that OpenMP is always a gain and never a loss.

[1] http://gcc.gnu.org/onlinedocs/libgomp/Environment-Variables.html
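
Just to illustrate where those clauses go (not the actual patch; the
loop body and the MIN_PARALLEL_HEIGHT threshold are made-up names for
the sake of the example), a minimal sketch looks roughly like this:

/* Build with: gcc -fopenmp ...
 * Sketch only: per-pixel work and threshold are placeholders. */
#include <stdint.h>

#define MIN_PARALLEL_HEIGHT 16  /* assumed cut-off, would need tuning */

static void
process_image (uint32_t *bits, int stride, int width, int height)
{
    int y;

    /* schedule(static) splits the scanlines between threads up front
     * (low overhead, but idle threads may spin if OMP_WAIT_POLICY is
     * not PASSIVE); the "if" clause keeps very small images on a
     * single thread where the parallelization overhead is not worth it. */
#pragma omp parallel for schedule(static) if (height >= MIN_PARALLEL_HEIGHT)
    for (y = 0; y < height; y++)
    {
        uint32_t *row = bits + y * stride;
        int       x;

        for (x = 0; x < width; x++)
            row[x] = 0xff000000 | (uint32_t) (x + y); /* placeholder work */
    }
}

On top of that, OMP_WAIT_POLICY=PASSIVE can be set in the environment
so that idle threads sleep at the barrier instead of spinning.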

-- 
Best regards,
Siarhei Siamashka

