Use lambdas to shorten Kernel8bitAvx512's source code and to split the resulting non-opt binary code into smaller functions. This makes no difference in opt builds, but in non-opt builds it reduces this function's stack frame from 60 KB down to 24 KB, which avoids stack overflows in some toolchains.

PiperOrigin-RevId: 322406964
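For illustration, here is a minimal sketch of the general technique (the function and names below are hypothetical, not the actual Kernel8bitAvx512 code):

```cpp
// Hypothetical sketch: moving chunks of a large kernel body into
// immediately-invoked lambdas. In non-opt builds, each lambda compiles
// to its own function and is called rather than inlined, so its large
// locals live in the lambda's own stack frame and are released when it
// returns, instead of all accumulating in the enclosing function's frame.
void BigKernel(const float* lhs, const float* rhs, float* dst) {
  [&] {
    float scratch[16 * 1024 / sizeof(float)];  // lives only in this lambda's frame
    // ... first chunk of the kernel's work, using `scratch` ...
    (void)scratch;
  }();
  [&] {
    float scratch[16 * 1024 / sizeof(float)];  // reuses stack space released above
    // ... second chunk of the kernel's work ...
    (void)scratch;
  }();
  (void)lhs;
  (void)rhs;
  (void)dst;
}
```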
README.md

The ruy matrix multiplication library

This is not an officially supported Google product.

ruy is a matrix multiplication library. Its focus is to cover the matrix multiplication needs of neural network inference engines. Its initial user has been TensorFlow Lite, where it is used by default on the ARM CPU architecture.

ruy supports both floating-point and 8-bit-integer-quantized matrices.
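For illustration, a minimal floating-point multiplication through ruy's public API might look like the following sketch (modeled on ruy's example code; exact signatures can vary between ruy versions, and the dimensions and data here are purely illustrative):

```cpp
#include "ruy/ruy.h"

void FloatMulSketch() {
  const int rows = 2, depth = 2, cols = 2;
  const float lhs_data[] = {1, 2, 3, 4};  // row-major LHS
  const float rhs_data[] = {1, 0, 0, 1};  // column-major RHS
  float dst_data[rows * cols];            // column-major destination

  ruy::Context context;

  ruy::Matrix<float> lhs;
  ruy::MakeSimpleLayout(rows, depth, ruy::Order::kRowMajor, lhs.mutable_layout());
  lhs.set_data(lhs_data);

  ruy::Matrix<float> rhs;
  ruy::MakeSimpleLayout(depth, cols, ruy::Order::kColMajor, rhs.mutable_layout());
  rhs.set_data(rhs_data);

  ruy::Matrix<float> dst;
  ruy::MakeSimpleLayout(rows, cols, ruy::Order::kColMajor, dst.mutable_layout());
  dst.set_data(dst_data);

  // Plain matmul with no fused bias or clamping.
  ruy::MulParams<float, float> mul_params;
  ruy::Mul(lhs, rhs, mul_params, &context, &dst);
}
```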

Efficiency

ruy is designed to achieve maximal performance not just on very large matrix sizes, which are the focus of many established libraries, but on the actual sizes and shapes of matrices that matter most in current TensorFlow Lite applications. This often means quite small sizes, e.g. 100x100 or even 50x50, and all sorts of rectangular shapes.

ruy is currently only optimized for the ARM architectures (both 64-bit and 32-bit code). Optimization for the Intel x86 architecture is in progress.

ruy is currently optimized only for the following combination of storage orders: LHS = row-major, RHS = column-major, destination = column-major. All other combinations of storage orders fall back to slow reference code at the moment.
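The 8-bit-quantized path uses the same entry point with int8 matrices plus requantization parameters, and the sketch below also uses the fast-path storage orders just described (row-major LHS, column-major RHS and destination). This is a sketch under the assumption that MulParams exposes set_multiplier_fixedpoint / set_multiplier_exponent as in recent ruy versions; older versions spelled this API differently, and the multiplier values below are placeholders.

```cpp
#include <cstdint>
#include "ruy/ruy.h"

void QuantizedMulSketch() {
  const int rows = 2, depth = 2, cols = 2;
  const std::int8_t lhs_data[] = {1, 2, 3, 4};  // row-major LHS (fast path)
  const std::int8_t rhs_data[] = {1, 0, 0, 1};  // column-major RHS (fast path)
  std::int8_t dst_data[rows * cols];            // column-major dst (fast path)

  ruy::Context context;

  ruy::Matrix<std::int8_t> lhs;
  ruy::MakeSimpleLayout(rows, depth, ruy::Order::kRowMajor, lhs.mutable_layout());
  lhs.set_data(lhs_data);

  ruy::Matrix<std::int8_t> rhs;
  ruy::MakeSimpleLayout(depth, cols, ruy::Order::kColMajor, rhs.mutable_layout());
  rhs.set_data(rhs_data);

  ruy::Matrix<std::int8_t> dst;
  ruy::MakeSimpleLayout(rows, cols, ruy::Order::kColMajor, dst.mutable_layout());
  dst.set_data(dst_data);

  // int32 accumulators are scaled by multiplier_fixedpoint * 2^multiplier_exponent
  // before being written back as int8. The placeholder values below give an
  // overall scale of roughly 1.0: (2^30 / 2^31) * 2^1.
  ruy::MulParams<std::int32_t, std::int8_t> mul_params;
  mul_params.set_multiplier_fixedpoint(1 << 30);
  mul_params.set_multiplier_exponent(1);

  ruy::Mul(lhs, rhs, mul_params, &context, &dst);
}
```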

Documentation

Some documentation will eventually be available in the doc/ directory; see doc/README.md.