This is not an officially supported Google product.
ruy is a matrix multiplication library. Its focus is to cover the matrix multiplication needs of neural network inference engines. Its initial user has been TensorFlow Lite, where it is used by default on the ARM CPU architecture.
ruy supports both floating-point and 8-bit integer-quantized matrices.
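For the quantized path, the core arithmetic in this style of library accumulates products of zero-point-adjusted 8-bit values into 32-bit integers. A hedged sketch follows; the zero-point convention shown is the common TensorFlow Lite-style scheme, not necessarily ruy's exact internal form:

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Reference 8-bit quantized matmul with int32 accumulation:
//   acc[i][j] = sum_k (lhs[i][k] - lhs_zero_point) * (rhs[k][j] - rhs_zero_point)
// Mirrors the common TFLite-style quantization scheme; an illustrative
// sketch, not ruy's internal implementation.
std::vector<int32_t> QuantizedMatMul(const std::vector<uint8_t>& lhs,
                                     const std::vector<uint8_t>& rhs,
                                     int32_t lhs_zero_point,
                                     int32_t rhs_zero_point,
                                     std::size_t rows, std::size_t depth,
                                     std::size_t cols) {
  std::vector<int32_t> acc(rows * cols, 0);
  for (std::size_t i = 0; i < rows; ++i) {
    for (std::size_t j = 0; j < cols; ++j) {
      int32_t sum = 0;
      for (std::size_t k = 0; k < depth; ++k) {
        sum += (static_cast<int32_t>(lhs[i * depth + k]) - lhs_zero_point) *
               (static_cast<int32_t>(rhs[k * cols + j]) - rhs_zero_point);
      }
      acc[i * cols + j] = sum;
    }
  }
  return acc;
}
```

In a real inference engine, the int32 accumulators are then rescaled back to 8 bits with a per-tensor or per-channel multiplier; that step is omitted here.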
ruy is designed to achieve high performance not only on the very large matrix sizes that many established libraries focus on, but on whatever sizes and shapes of matrices are most critical in current TensorFlow Lite applications. This often means quite small sizes, e.g. 100x100 or even 50x50, and all sorts of rectangular shapes. It's not as fast as completely specialized code for each shape, but it aims to offer a good compromise of speed across all shapes and a small binary size.
Some documentation will eventually be available in the doc/ directory; see doc/README.md.