This is not an officially supported Google product.
ruy is a matrix multiplication library. Its focus is to cover the matrix multiplication needs of neural network inference engines. Its initial user has been TensorFlow Lite, where it is used by default on the ARM CPU architecture.
ruy supports both floating-point and 8-bit-integer-quantized matrices.
ruy is designed to achieve high performance not just on very large sizes, which are the focus of many established libraries, but on the actual sizes and shapes of matrices that matter most in current TensorFlow Lite applications. These are often quite small, e.g. 100x100 or even 50x50, and come in all sorts of rectangular shapes. ruy is not as fast as completely specialized code for each shape, but it aims to offer a good compromise of speed across all shapes and a small binary size.
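As a rough illustration of how a caller drives the library, here is a minimal sketch of a single float matrix multiplication using ruy's public entry points (`ruy::Matrix`, `ruy::MakeSimpleLayout`, `ruy::MulParams`, `ruy::Mul` from `ruy/ruy.h`). The 2x2 matrices, their values, and the choice of storage orders are arbitrary example data, not something prescribed by this README.

```cpp
#include <iostream>

#include "ruy/ruy.h"

int main() {
  // Arbitrary 2x2 example data: row-major LHS, column-major RHS.
  const float lhs_data[] = {1, 2, 3, 4};
  const float rhs_data[] = {1, 2, 3, 4};
  float dst_data[4];

  ruy::Context context;

  // Describe the left-hand-side, right-hand-side and destination matrices.
  ruy::Matrix<float> lhs;
  ruy::MakeSimpleLayout(2, 2, ruy::Order::kRowMajor, lhs.mutable_layout());
  lhs.set_data(lhs_data);

  ruy::Matrix<float> rhs;
  ruy::MakeSimpleLayout(2, 2, ruy::Order::kColMajor, rhs.mutable_layout());
  rhs.set_data(rhs_data);

  ruy::Matrix<float> dst;
  ruy::MakeSimpleLayout(2, 2, ruy::Order::kColMajor, dst.mutable_layout());
  dst.set_data(dst_data);

  // Default MulParams: a plain multiplication with no fused bias or clamping.
  ruy::MulParams<float, float> mul_params;
  ruy::Mul(lhs, rhs, mul_params, &context, &dst);

  std::cout << "dst[0] = " << dst_data[0] << "\n";
  return 0;
}
```

Quantized multiplications follow the same pattern with integer matrix types and the corresponding `MulParams`; the `ruy::Context` object holds per-thread resources and can be reused across calls.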
Some documentation will eventually be available in the doc/ directory; see doc/README.md.