Significant reduction in memory usage by the compiler.
o Estimated sizes of growable lists to avoid waste.
o Changed basic block predecessor structure from a growable bitmap
to a growable list.
o Conditionalized code which produced disassembly strings.
o Avoided generating some dataflow-related structures when compiling
in dataflow-disabled mode.
o Added memory usage statistics.
o Eliminated floating point usage as a barrier to disabling expensive
dataflow analysis for very large init routines.
o Because iterating through sparse bit maps is much less of a concern now,
removed the earlier hack that remembered runs of leading and trailing
zero bits.
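The growable-list presizing mentioned above can be sketched roughly as below. This is an illustrative example, not the compiler's actual API: the type and function names (GrowableList, ListInit, ListAppend) are assumptions. The idea is that callers pass an estimated element count so typical compilations allocate once, instead of starting tiny and repeatedly doubling with realloc-and-copy waste.

```c
#include <stdlib.h>

/* Hypothetical growable list of ints, presized from a caller estimate. */
typedef struct {
    int* elems;
    size_t num_used;
    size_t capacity;
} GrowableList;

static void ListInit(GrowableList* list, size_t estimated_size) {
    if (estimated_size == 0) estimated_size = 4;  /* floor on allocation */
    list->elems = malloc(estimated_size * sizeof(int));
    list->num_used = 0;
    list->capacity = estimated_size;
}

static void ListAppend(GrowableList* list, int value) {
    if (list->num_used == list->capacity) {
        /* Growth path - rarely taken when the size estimate was good. */
        list->capacity *= 2;
        list->elems = realloc(list->elems, list->capacity * sizeof(int));
    }
    list->elems[list->num_used++] = value;
}
```

A good estimate (e.g. number of basic blocks, or incoming edges) turns the common case into a single allocation sized to fit.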
Also, some general tuning.
o Minor tweaks to register utilities.
o Speed up the assembly loop
o Rewrite of the bit vector iterator
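A bit vector iterator rewrite of the kind described can be sketched as follows. This is a minimal sketch under assumed names (BitVectorIterator, IteratorInit, IteratorNext), not the actual compiler code: it skips all-zero words entirely and uses count-trailing-zeros to jump straight to the next set bit, rather than testing bits one at a time.

```c
#include <stdint.h>

/* Illustrative iterator over the set bits of a word-array bit vector. */
typedef struct {
    const uint32_t* storage;
    uint32_t num_words;
    uint32_t word_index;
    uint32_t current_word;  /* not-yet-consumed set bits of current word */
} BitVectorIterator;

static void IteratorInit(BitVectorIterator* it, const uint32_t* storage,
                         uint32_t num_words) {
    it->storage = storage;
    it->num_words = num_words;
    it->word_index = 0;
    it->current_word = (num_words != 0) ? storage[0] : 0;
}

/* Returns the index of the next set bit, or -1 when exhausted. */
static int IteratorNext(BitVectorIterator* it) {
    while (it->current_word == 0) {  /* skip runs of all-zero words */
        if (++it->word_index >= it->num_words) return -1;
        it->current_word = it->storage[it->word_index];
    }
    int bit = __builtin_ctz(it->current_word);     /* lowest set bit */
    it->current_word &= it->current_word - 1;      /* clear that bit */
    return (int)(it->word_index * 32 + bit);
}
```

With zero words skipped a word at a time, sparse maps iterate in time proportional to the number of set bits plus words scanned, which is why the leading/trailing-run hack above became unnecessary.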
Our previous worst-case method originally consumed 360 megabytes; earlier
changes had whittled that down to 113 megabytes. It now consumes 12 megabytes
(which so far appears to be close to the highest compiler heap usage of
anything we compile).
Post-wipe cold boot time is now less than 7 minutes.
Installation time for our application test cases also shows a large
gain - typically a 25% to 40% speedup.
Single-threaded host compilation of core.jar is down to under 3.0s, and
boot.oat builds in 17.2s. Next up: multi-threaded compilation.