Better gradient handling (migration/fallbacks)
Clemens Eisserer
linuxhippy at gmail.com
Thu Nov 13 05:30:26 PST 2008
Hi,
I've experienced some performance problems with gradients when working
on the xrender/java2d backend.
A typical problematic case is when the mask and destination pictures are in
VRAM and a gradient is used as the source.
As far as I understand, this causes mask and dst to be moved out into
sysmem, the composition is done by pixman, and at the next accelerated
operation the whole thing is moved back.
In profiles I saw that about 35% of total cycles were spent in
moveIn/moveOut and 5% in gradient generation itself, for a rather
boring UI like the following:
http://picasaweb.google.com/linuxhippy/LinuxhippySBlog?authkey=tXfo8RSnq4s#5224085419010972994
What I did to work around the problem was to use a temporary pixmap,
copy the gradient to the pixmap, and use that pixmap later for
composition.
This means only moveIns remain, which improved performance a lot,
about 3-4x for the UI workload mentioned above.
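
To make this concrete, here is a minimal sketch of the idea at the
plain XRender client level (the real change lives inside the
Java2D/XRender backend; the helper name and the w/h parameters are
mine):

#include <X11/Xlib.h>
#include <X11/extensions/Xrender.h>

/* Render a gradient Picture into a temporary ARGB32 pixmap once and
 * return a Picture wrapping that pixmap, to be used as the source of
 * later composites instead of the gradient itself. */
static Picture
gradient_to_pixmap(Display *dpy, Drawable parent, Picture gradient,
                   unsigned int w, unsigned int h)
{
    XRenderPictFormat *fmt =
        XRenderFindStandardFormat(dpy, PictStandardARGB32);
    Pixmap pm = XCreatePixmap(dpy, parent, w, h, 32);
    Picture tmp = XRenderCreatePicture(dpy, pm, fmt, 0, NULL);

    /* This composite still hits the software path, but afterwards
     * only the temporary pixmap needs a moveIn; the mask and the
     * real destination stay in VRAM. */
    XRenderComposite(dpy, PictOpSrc, gradient, None, tmp,
                     0, 0, 0, 0, 0, 0, w, h);

    /* The Picture keeps the server-side storage alive. */
    XFreePixmap(dpy, pm);
    return tmp;
}

Later composites then pass the returned Picture as src in place of
the gradient Picture.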
This seems to be an acceptable workaround, but it places an unnecessary
burden on UMA architectures like Intel+GEM, so whether to do this by
default should be up to the driver.
Would it be possible to pass gradients down to the driver, to allow
the driver to decide what to do with the gradient, or even provide
acceleration for it?
How complex would it be to provide the necessary hooks?
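
To make the question concrete, here is a purely hypothetical sketch of
what such hooks could look like; every type and name below is made up,
and nothing like this exists in EXA today:

/* Hypothetical gradient description handed down to the driver. */
typedef struct {
    double x1, y1, x2, y2;          /* gradient axis, p1 -> p2   */
    int nstops;
    const double *stop_offsets;     /* nstops offsets in [0, 1]  */
    const unsigned int *stop_argb;  /* nstops ARGB32 stop colors */
} HypotheticalGradient;

/* Hypothetical driver hooks. */
typedef struct {
    /* Return nonzero if the hardware can use this gradient as a
     * composite source; zero keeps the current software fallback
     * (or a pixmap-based workaround like the one above). */
    int (*CheckGradient)(const HypotheticalGradient *grad);

    /* Set the gradient up as the source for following composites,
     * e.g. as a 1D texture or a shader program. */
    int (*PrepareGradient)(const HypotheticalGradient *grad);
} HypotheticalGradientHooks;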
As far as I know, two-stop gradients can often be accelerated with some
texture-mapping tricks, and everything more complex could still be
done with shaders.
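
For the two-stop case the trick, as I understand it, is to map every
destination pixel to a 1D texture coordinate along the gradient axis
and let clamped linear texture filtering do the color interpolation.
A sketch of the coordinate math (the function name is mine):

/* Map a destination pixel (x, y) to a texture coordinate t for a
 * two-stop linear gradient from p1 to p2: t is the scalar projection
 * of (p - p1) onto the gradient axis, so t = 0 at p1 and t = 1 at p2.
 * Sampled against a two-texel texture with linear filtering and
 * clamp-to-edge addressing, this interpolates the two stop colors
 * without any shader (in practice t still has to be remapped by half
 * a texel, e.g. t * 0.5f + 0.25f for two texels, so the stops land
 * on the texel centers). */
static float
linear_gradient_coord(float x, float y,
                      float p1x, float p1y, float p2x, float p2y)
{
    float dx = p2x - p1x;
    float dy = p2y - p1y;
    return ((x - p1x) * dx + (y - p1y) * dy) / (dx * dx + dy * dy);
}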
I am no xorg/exa expert, so maybe I simply misunderstand things and am
drawing the wrong conclusions.
Thanks, Clemens