linking performance (was: Re: OK to get rid of scaddins?)
Michael Meeks
michael.meeks at suse.com
Tue Feb 14 04:22:07 PST 2012
Hi Michael,
On Tue, 2012-02-14 at 13:07 +0100, Michael Stahl wrote:
> the problem is more likely that in tail_build we first compile all the
> object files, and only once they have all been built are they linked
> into libraries/executables.
Perhaps; could it also be that we like to compile with gcc in some
eight-way parallel fashion, but when it comes to linking we -really-
don't want to bog the machine down in that way? I wonder if we could
explicitly limit the parallelism of linking somehow - we should
probably do the same for the Java compilation, which is often quite
memory intensive and doesn't cope well with umpteen instances running
in parallel (at least on my machine).
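For what it's worth, a purely illustrative sketch of one way to do
that, assuming a GNU make based build and flock(1) on the build machine
- the lock path, variable names and pattern rule below are invented for
the example, not what gbuild actually uses:

    # Hypothetical fragment: object files still compile with full -jN
    # parallelism, but every link command queues on a single lock file,
    # so at most one (memory-hungry) link runs at any one time.
    LINK_LOCK := /tmp/lo-link.lock

    lib%.so: $(OBJECTS)
            flock $(LINK_LOCK) $(CXX) -shared -o $@ $^ $(LDFLAGS)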
> that probably results in sub-optimal use of disk cache :)
Right; but parallelism creating a problem - a working set that is
avoidably larger than the machine's memory - sounds like the more
likely culprit (perhaps)?
> > Either way, it sucks to hinder ourselves from creating a more efficient
> > library structure because of unnecessary performance problems in the
> > toolchain ;-)
>
> well, perhaps there are really two different target audiences: product
> builds need fast start-up, while development builds need fast re-build
> cycles...
Completely :-) trading one off against the other sucks - hence the
merged-libs option, which we can use for product builds, and the
non-merged variant with all these little, bitty libraries, which we can
keep for development builds (I guess). At least until we have working
incremental linking - it's always encouraging to see things like
http://gcc.gnu.org/wiki/GoldIncrementalLinking - but I wonder how the
Mozilla guys cope with this problem.
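And on incremental linking - just as a rough sketch, assuming gold is
the linker actually being invoked and bearing in mind that its
incremental mode is still experimental per that wiki page; the
ENABLE_INCREMENTAL_LINK switch here is invented for illustration:

    # Hypothetical: developer builds ask gold for incremental updates,
    # product builds keep doing a normal full link.
    ifeq ($(ENABLE_INCREMENTAL_LINK),TRUE)
    LDFLAGS += -Wl,--incremental
    endif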
All the best,
Michael.
--
michael.meeks at suse.com <><, Pseudo Engineer, itinerant idiot