Introduce pot_log2, which checks that its argument is a power of two and then returns its log2.
This makes it possible to convey in code that a value is a power of two, so its log2 is exact.
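The helper itself lives in size_util.h; the snippet below is only a minimal sketch of the idea, using a standard assert rather than ruy's own check macros, and the helper names is_pot and floor_log2 are illustrative assumptions, not necessarily the names used in the tree.

```cpp
#include <cassert>

// Sketch only: ruy's real implementation lives in size_util.h and uses its
// own check macros; names other than pot_log2 are illustrative.
inline bool is_pot(int value) {
  // A power of two is positive and has exactly one bit set.
  return value > 0 && (value & (value - 1)) == 0;
}

inline int floor_log2(int value) {
  // Position of the highest set bit.
  int log = 0;
  while (value >>= 1) ++log;
  return log;
}

inline int pot_log2(int value) {
  // Asserting power-of-two-ness documents the caller's assumption and
  // guarantees that the returned log2 is exact, with no rounding.
  assert(is_pot(value));
  return floor_log2(value);
}
```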

PiperOrigin-RevId: 270107047
3 files changed
tree: a86f2ccffe7b9b7a61d32de568e2a329ad8a4dfc
  1. allocator.cc
  2. allocator.h
  3. allocator_test.cc
  4. benchmark.cc
  5. block_map.cc
  6. block_map.h
  7. blocking_counter.cc
  8. blocking_counter.h
  9. BUILD
  10. build_defs.bzl
  11. check_macros.h
  12. common.h
  13. context.cc
  14. context.h
  15. context_test.cc
  16. detect_arm.cc
  17. detect_arm.h
  18. detect_x86.cc
  19. detect_x86.h
  20. dispatch.h
  21. example.cc
  22. example_advanced.cc
  23. have_built_path_for.h
  24. have_built_path_for_avx2.cc
  25. have_built_path_for_avx512.cc
  26. internal_matrix.h
  27. kernel.h
  28. kernel_arm.h
  29. kernel_arm32.cc
  30. kernel_arm64.cc
  31. kernel_avx2.cc
  32. kernel_avx512.cc
  33. kernel_common.h
  34. kernel_x86.h
  35. matrix.h
  36. opt_set.h
  37. pack.h
  38. pack_arm.cc
  39. pack_arm.h
  40. pack_avx2.cc
  41. pack_avx512.cc
  42. pack_common.h
  43. pack_x86.h
  44. path.h
  45. platform.h
  46. pmu.cc
  47. pmu.h
  48. prepack.h
  49. README.md
  50. ruy.h
  51. ruy_advanced.h
  52. ruy_test.bzl
  53. ruy_test_ext.bzl
  54. side_pair.h
  55. size_util.h
  56. size_util_test.cc
  57. spec.h
  58. test.h
  59. test_fast.cc
  60. test_slow.cc
  61. test_special_specs.cc
  62. thread_pool.cc
  63. thread_pool.h
  64. time.h
  65. trace.cc
  66. trace.h
  67. trmul.cc
  68. trmul.h
  69. trmul_params.h
  70. tune.cc
  71. tune.h
  72. tune_test.cc
  73. tune_tool.cc
  74. wait.cc
  75. wait.h
  76. wait_test.cc
README.md

ruy is not BLAS

ruy is a matrix multiplication library. Its focus is to cover the matrix multiplication needs of TensorFlow Lite.

ruy supports both floating-point (like Eigen) and quantized (like gemmlowp).

Status

ruy is very new, immature code. It has quite good test coverage, but the code is in flux, lacks comments, needs more cleanup, and there are no design docs at the moment.

Over the next few weeks [April 2019], we hope to improve on all of that and to integrate ruy into TensorFlow Lite, at first as a non-default path for ARM A64 only.

Efficiency

ruy is designed to achieve maximal performance not just on very large matrices, as is the focus of many established libraries, but on the actual sizes and shapes of matrices that matter most in current TensorFlow Lite applications. This often means quite small sizes, e.g. 100x100 or even 50x50, and all sorts of rectangular shapes.

ruy is currently only optimized for ARM A64; other architectures have only slow reference code at the moment.

ruy is currently optimized only for the following combination of storage orders: LHS = row-major, RHS = column-major, destination = column-major. All other combinations of storage orders fall back to slow reference code at the moment.
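For reference, here is a minimal sketch of what a call using that fast combination of storage orders could look like, loosely modeled on example.cc in this tree. The names and signatures below (ruy::Matrix, ruy::MakeSimpleLayout, ruy::BasicSpec, ruy::Mul, ruy::kAllPaths) and the include path are assumptions to be checked against ruy.h and example.cc, not an authoritative usage guide.

```cpp
// Sketch only, modeled on example.cc; check ruy.h for the authoritative API.
// The include path may differ depending on where ruy lives in your tree.
#include <iostream>
#include "ruy.h"

int main() {
  // 2x2 matrices in the fast combination of storage orders:
  // row-major LHS, column-major RHS, column-major destination.
  const float lhs_data[] = {1, 2, 3, 4};
  const float rhs_data[] = {1, 2, 3, 4};
  float dst_data[4];

  ruy::Matrix<float> lhs;
  ruy::MakeSimpleLayout(2, 2, ruy::Order::kRowMajor, &lhs.layout);
  lhs.data = lhs_data;

  ruy::Matrix<float> rhs;
  ruy::MakeSimpleLayout(2, 2, ruy::Order::kColMajor, &rhs.layout);
  rhs.data = rhs_data;

  ruy::Matrix<float> dst;
  ruy::MakeSimpleLayout(2, 2, ruy::Order::kColMajor, &dst.layout);
  dst.data = dst_data;

  ruy::Context context;
  ruy::BasicSpec<float, float> spec;  // plain matmul, no fused bias or clamp
  ruy::Mul<ruy::kAllPaths>(lhs, rhs, spec, &context, &dst);

  std::cout << "dst[0] = " << dst_data[0] << "\n";
  return 0;
}
```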