
# How to Use

For automated use (e.g. in CI), use `incremental_build.py`. See its help with `incremental_build.py --help`. Note that metrics collection relies on the `printproto` and `jq` tools being on `$PATH`.
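That tool dependency can be checked up front. The following is a small sketch, not part of the existing scripts, that fails fast if `printproto` or `jq` is missing from `$PATH`:

```python
import shutil
import sys

# Sketch only: verify the external tools that metrics collection relies on
# are available before starting a (potentially long) CI run.
missing = [tool for tool in ("printproto", "jq") if shutil.which(tool) is None]
if missing:
    sys.exit(f"missing required tools on $PATH: {', '.join(missing)}")
```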

The most basic invocation, e.g. `./incremental_build.py libc`, is logically equivalent to

1. running `m --skip-soong-tests libc`, then
2. parsing the `$OUTDIR/soong_metrics` and `$OUTDIR/bp2build_metrics.pb` files, and
3. adding the timing-related metrics from those files into `out/timing_logs/metrics.csv` (a rough sketch of this flow follows).
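Conceptually, those three steps look roughly like the sketch below. This is an illustration only, not the actual implementation in `incremental_build.py`: the `m` invocation and the metrics file names come from the list above, while `parse_pb`, the helper names, and the CSV layout are stand-ins.

```python
import csv
import subprocess
from pathlib import Path

OUT_DIR = Path("out")  # assumes $OUTDIR is the default "out" directory


def run_build(*targets: str) -> None:
    # Step 1: run the build exactly as a developer would.
    subprocess.run(["m", "--skip-soong-tests", *targets], check=True)


def parse_pb(path: Path) -> dict[str, float]:
    # Placeholder: the real scripts shell out to printproto and jq to pull
    # timing events out of these protobuf files.
    return {}


def read_timings() -> dict[str, float]:
    # Step 2: parse $OUTDIR/soong_metrics and $OUTDIR/bp2build_metrics.pb.
    timings: dict[str, float] = {}
    for pb in (OUT_DIR / "soong_metrics", OUT_DIR / "bp2build_metrics.pb"):
        timings.update(parse_pb(pb))
    return timings


def append_row(cuj: str, timings: dict[str, float]) -> None:
    # Step 3: append the timing-related metrics as one row of metrics.csv.
    csv_file = OUT_DIR / "timing_logs" / "metrics.csv"
    csv_file.parent.mkdir(parents=True, exist_ok=True)
    with csv_file.open("a", newline="") as f:
        csv.writer(f).writerow([cuj, *timings.values()])
```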

There are a number of CUJs set up in `cuj_catalog.py`, and they are run sequentially, such that each row in `metrics.csv` holds the timings of the various "events" during one incremental build.

You may also add rows to `metrics.csv` after a manual run, using the `perf_metrics.py` script. This is particularly useful when you don't want to modify `cuj_catalog.py` for one-off tests.
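A one-off flow might look like the following. The argument passed to `perf_metrics.py` here is an assumption, not its documented interface, so check `perf_metrics.py --help` (or the source) for the real arguments.

```python
import subprocess

# Manual build, run however you normally would.
subprocess.run(["m", "--skip-soong-tests", "libc"], check=True)

# Hypothetical invocation: harvest the metrics files from that build into
# metrics.csv; the "description" argument is an assumption for illustration.
subprocess.run(["./perf_metrics.py", "manual libc rebuild"], check=True)
```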

Currently:

1. run a build (conceptually, `m droid`)
2. use `printproto` to parse the metrics-related `.pb` files
3. use `jq` to filter the data
4. collate the data into a CSV file
5. go to step 1 until the various CUJs are exhausted (sketched below)
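Put together, the loop over CUJs looks roughly like this. It reuses the `run_build`/`read_timings`/`append_row` stand-ins from the earlier sketch, and the CUJ entries shown are made-up examples, not the actual contents of `cuj_catalog.py`.

```python
from pathlib import Path

# run_build, read_timings and append_row are the stand-ins from the sketch above.
# Made-up CUJ entries: each maps a label to a callable that mutates the source
# tree to set up the incremental change for that CUJ.
CUJS = {
    "clean build": lambda: None,
    "touch a libc source file": lambda: Path("bionic/libc/bionic/malloc_common.cpp").touch(),
}

for name, prepare in CUJS.items():
    prepare()                   # set up the incremental change
    run_build("droid")          # 1. run a build
    timings = read_timings()    # 2-3. printproto + jq happen in here
    append_row(name, timings)   # 4. collate the data into the csv file
                                # 5. the loop continues until CUJs are exhausted
```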

For CI, we should:

1. run a build with some identifiable tag (it is not yet clear what tagging mechanisms are available); one possible shape is sketched after this list
2. go to step 1 until the various CUJs are exhausted
3. rely on plx to collate the data from all builds and provide a filtering mechanism based on the tag from step 1
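As a sketch of what step 1 could look like (purely an assumption, since the tagging mechanism is undecided), the tag could be read from an environment variable and prepended to every row, so that plx can later filter on it:

```python
import os

# Hypothetical: BUILD_TAG is not an existing mechanism, just one way a CI job
# could label the rows it produces so plx can filter on them later.
TAG = os.environ.get("BUILD_TAG", "untagged")


def tag_row(row: list[str]) -> list[str]:
    # Prepend the tag to a metrics.csv row before it is written.
    return [TAG, *row]
```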