commit | 8c4088801143b4a1d35879521d928a5e44afee70 |
---|---|
author | Treehugger Robot <treehugger-gerrit@google.com>, Mon Apr 20 20:06:15 2020 +0000 |
committer | Gerrit Code Review <noreply-gerritcodereview@google.com>, Mon Apr 20 20:06:15 2020 +0000 |
tree | 695001df9f2d01cfc3b7f89e0ac4770b25227f0e |
parent | 49fa680a7ed3606abc618421896b931f8f5c0204 |
parent | 400e404d10e47f3cc56e4bcd49142ae7f947e23f |
Merge "Update Android.bp following XNNPACK rebase"
XNNPACK is a highly optimized library of floating-point neural network inference operators for ARM, WebAssembly, and x86 platforms. XNNPACK is not intended for direct use by deep learning practitioners and researchers; instead it provides low-level performance primitives for accelerating high-level machine learning frameworks, such as TensorFlow Lite, TensorFlow.js, PyTorch, and MediaPipe.
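To illustrate how a high-level framework sits on top of XNNPACK, the sketch below routes TensorFlow Lite inference through the XNNPACK delegate. This is a minimal sketch rather than part of this repository: it assumes the TensorFlow Lite headers and the `TfLiteXNNPackDelegate*` entry points from `tensorflow/lite/delegates/xnnpack/xnnpack_delegate.h`, and the model filename is a hypothetical placeholder.

```cpp
// Minimal sketch: run a .tflite model with XNNPACK as the backend via the
// TensorFlow Lite XNNPACK delegate (assumed setup; model path is hypothetical).
#include <memory>

#include "tensorflow/lite/delegates/xnnpack/xnnpack_delegate.h"
#include "tensorflow/lite/interpreter.h"
#include "tensorflow/lite/kernels/register.h"
#include "tensorflow/lite/model.h"

int main() {
  // Load the model (path is a placeholder).
  auto model = tflite::FlatBufferModel::BuildFromFile("mobilenet_v2_1.0.tflite");
  if (!model) return 1;

  tflite::ops::builtin::BuiltinOpResolver resolver;
  std::unique_ptr<tflite::Interpreter> interpreter;
  tflite::InterpreterBuilder(*model, resolver)(&interpreter);
  if (!interpreter) return 1;

  // Create the XNNPACK delegate; num_threads mirrors the "as many threads as
  // there are big cores" setting used in the multi-threaded benchmarks below.
  TfLiteXNNPackDelegateOptions options = TfLiteXNNPackDelegateOptionsDefault();
  options.num_threads = 4;
  TfLiteDelegate* xnnpack = TfLiteXNNPackDelegateCreate(&options);

  // Hand supported subgraphs to XNNPACK, then run inference as usual.
  if (interpreter->ModifyGraphWithDelegate(xnnpack) != kTfLiteOk) return 1;
  if (interpreter->AllocateTensors() != kTfLiteOk) return 1;
  // ... fill interpreter->typed_input_tensor<float>(0) with input data ...
  if (interpreter->Invoke() != kTfLiteOk) return 1;

  // Destroy the interpreter before deleting the delegate it uses.
  interpreter.reset();
  TfLiteXNNPackDelegateDelete(xnnpack);
  return 0;
}
```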
XNNPACK implements a broad set of neural network operators, including convolutions, pooling, and elementwise operations.
All operators in XNNPACK support NHWC layout, but additionally allow a custom stride along the Channel dimension. Thus, operators can consume a subset of channels in the input tensor and produce a subset of channels in the output tensor, providing zero-cost Channel Split and Channel Concatenation operations.
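To make the channel-stride idea concrete, here is a small self-contained sketch (plain C++ indexing, not the XNNPACK API) of how an NHWC buffer whose channel stride exceeds a view's channel count lets two consumers read disjoint channel ranges of the same memory without any copy, i.e. a zero-cost Channel Split:

```cpp
#include <cstddef>
#include <iostream>
#include <vector>

// A read-only "view" of an NHWC tensor: the channel stride may exceed the
// number of channels visible through the view, so several views can alias
// disjoint channel ranges of one underlying buffer.
struct NhwcView {
  const float* data;      // points at the first visible channel
  size_t n, h, w, c;      // logical extents of the view
  size_t channel_stride;  // distance between adjacent pixels, in elements

  // Element (in, ih, iw, ic) of the view inside the underlying buffer.
  float at(size_t in, size_t ih, size_t iw, size_t ic) const {
    return data[((in * h + ih) * w + iw) * channel_stride + ic];
  }
};

int main() {
  const size_t N = 1, H = 2, W = 2, C = 8;
  std::vector<float> buffer(N * H * W * C);
  for (size_t i = 0; i < buffer.size(); ++i) buffer[i] = float(i);

  // "Split" the 8 channels into 3 + 5 without copying: both views keep the
  // full channel stride C but expose different channel ranges.
  NhwcView lo{buffer.data(), N, H, W, 3, C};      // channels [0, 3)
  NhwcView hi{buffer.data() + 3, N, H, W, 5, C};  // channels [3, 8)

  std::cout << lo.at(0, 1, 1, 2) << " " << hi.at(0, 1, 1, 0) << "\n";
  // Writing into such views of a wider output buffer implements Channel
  // Concatenation the same way, again with no data movement.
  return 0;
}
```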
The table below presents single-threaded performance of the XNNPACK library on three generations of MobileNet models and three generations of Pixel phones.
Model | Pixel, ms | Pixel 2, ms | Pixel 3a, ms |
---|---|---|---|
MobileNet v1 1.0X | 82 | 86 | 88 |
MobileNet v2 1.0X | 49 | 53 | 55 |
MobileNet v3 Large | 39 | 42 | 44 |
MobileNet v3 Small | 12 | 14 | 14 |
The following table presents multi-threaded performance of the XNNPACK library (using as many threads as there are big cores) on three generations of MobileNet models and three generations of Pixel phones.
Model | Pixel, ms | Pixel 2, ms | Pixel 3a, ms |
---|---|---|---|
MobileNet v1 1.0X | 43 | 27 | 46 |
MobileNet v2 1.0X | 26 | 18 | 28 |
MobileNet v3 Large | 22 | 16 | 24 |
MobileNet v3 Small | 7 | 6 | 8 |
Benchmarked on March 27, 2020 with `end2end_bench --benchmark_min_time=5` on an Android/ARM64 build with Android NDK r21 (`bazel build -c opt --config android_arm64 :end2end_bench`) and neural network models with randomized weights and inputs.
The table below presents multi-threaded performance of the XNNPACK library on three generations of MobileNet models and three generations of Raspberry Pi boards.
Model | RPi 2 (BCM2836), ms | RPi 3+ (BCM2837B0), ms | RPi 4 (BCM2711), ms |
---|---|---|---|
MobileNet v1 1.0X | 341 | 115 | 75 |
MobileNet v2 1.0X | 197 | 79 | 44 |
MobileNet v3 Large | 165 | 67 | 41 |
MobileNet v3 Small | 53 | 23 | 14 |
Benchmarked on February 12, 2020 with `end2end-bench --benchmark_min_time=5` on a Raspbian Buster build with CMake (`./scripts/build-local.sh`) and neural network models with randomized weights and inputs.
XNNPACK is based on the QNNPACK library. Unlike QNNPACK, XNNPACK focuses entirely on floating-point operators, and its API is no longer compatible with QNNPACK.