Android 12.0.0 release 20
[LSC] Add LOCAL_LICENSE_KINDS to external/armnn

Added SPDX-license-identifier-BSD SPDX-license-identifier-BSL-1.0
    SPDX-license-identifier-MIT SPDX-license-identifier-PSF-2.0
    legacy_unencumbered
to:
  Android.bp
  Android.mk

Bug: 68860345
Bug: 151177513
Bug: 151953481

Test: m all

Exempt-From-Owner-Approval: janitorial work
Change-Id: Ibb2a6e64b4701be7ebeb48852765ba78fd5a9b2b
1 file changed
tree: 5561ca57639650f9893e926183c235676ced22c8
  cmake/
  delegate/
  docker/
  docs/
  include/
  profiling/
  python/
  samples/
  scripts/
  src/
  tests/
  third-party/
  Android.bp
  Android.mk
  BuildGuideAndroidNDK.md
  BuildGuideCrossCompilation.md
  CMakeLists.txt
  ContributorGuide.md
  InstallationViaAptRepository.md
  LICENSE
  OWNERS
  README.md
  SECURITY.md
README.md

Arm NN

Arm NN is a key component of the machine learning platform, which is part of the Linaro Machine Intelligence Initiative. For more information on the machine learning platform and Arm NN, see https://mlplatform.org/; further Arm NN information is available from https://developer.arm.com/products/processors/machine-learning/arm-nn

There is a getting started guide here using TensorFlow: https://developer.arm.com/solutions/machine-learning-on-arm/developer-material/how-to-guides/configuring-the-arm-nn-sdk-build-environment-for-tensorflow

There is a getting started guide here using TensorFlow Lite: https://developer.arm.com/solutions/machine-learning-on-arm/developer-material/how-to-guides/configuring-the-arm-nn-sdk-build-environment-for-tensorflow-lite

There is a getting started guide here using Caffe: https://developer.arm.com/solutions/machine-learning-on-arm/developer-material/how-to-guides/configure-the-arm-nn-sdk-build-environment-for-caffe

There is a getting started guide here using ONNX: https://developer.arm.com/solutions/machine-learning-on-arm/developer-material/how-to-guides/configuring-the-arm-nn-sdk-build-environment-for-onnx

There is a guide for backend development: Backend development guide

There is a guide for installation of ArmNN, the TensorFlow Lite Parser and PyArmnn via our Apt Repository: Installation via Apt Repository

There is a getting started guide for our ArmNN TfLite Delegate: Build the TfLite Delegate natively

API Documentation is available at https://github.com/ARM-software/armnn/wiki/Documentation.

The Doxygen files used to generate the Arm NN documentation can be found at armnn/docs/. Following generation, the xhtml files can be found at armnn/documentation/.

Build Instructions

Arm tests the build system of Arm NN with several build environments, including the Android NDK (see BuildGuideAndroidNDK.md) and cross compilation (see BuildGuideCrossCompilation.md).

Arm NN is written using portable C++14 and the build system uses CMake, therefore it is possible to build for a wide variety of target platforms, from a wide variety of host environments.

The armnn/tests directory contains tests used during Arm NN development. Many of them depend on third-party IP, model protobufs and image files not distributed with Arm NN. The dependencies of some of the tests are available freely on the Internet, for those who wish to experiment.

The ‘armnn/samples’ directory contains SimpleSample.cpp, a very basic example of the ArmNN SDK API in use, and DynamicSample.cpp, a very basic example of using the ArmNN SDK API with the standalone sample dynamic backend.
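
As a taste of what SimpleSample.cpp demonstrates, here is a minimal sketch of building and running a trivial network directly through the Arm NN C++ API. It is illustrative only, not a copy of the shipped sample; exact headers, layer names and runtime details may differ between Arm NN releases.

```cpp
// Illustrative sketch only: a trivial "add a number to itself" network built
// with the Arm NN C++ API. Exact API details may vary between releases.
#include <armnn/ArmNN.hpp>

#include <iostream>
#include <vector>

int main()
{
    // Describe the graph: one input feeding both operands of an addition.
    armnn::INetworkPtr network = armnn::INetwork::Create();

    armnn::IConnectableLayer* input  = network->AddInputLayer(0);
    armnn::IConnectableLayer* add    = network->AddAdditionLayer();
    armnn::IConnectableLayer* output = network->AddOutputLayer(0);

    input->GetOutputSlot(0).Connect(add->GetInputSlot(0));
    input->GetOutputSlot(0).Connect(add->GetInputSlot(1));
    add->GetOutputSlot(0).Connect(output->GetInputSlot(0));

    const armnn::TensorInfo tensorInfo(armnn::TensorShape({1}), armnn::DataType::Float32);
    input->GetOutputSlot(0).SetTensorInfo(tensorInfo);
    add->GetOutputSlot(0).SetTensorInfo(tensorInfo);

    // Optimize for the reference CPU backend and load into the runtime.
    armnn::IRuntime::CreationOptions options;
    armnn::IRuntimePtr runtime = armnn::IRuntime::Create(options);
    armnn::IOptimizedNetworkPtr optNet =
        armnn::Optimize(*network, {armnn::Compute::CpuRef}, runtime->GetDeviceSpec());

    armnn::NetworkId networkId;
    runtime->LoadNetwork(networkId, std::move(optNet));

    // Run a single inference: 2.0f + 2.0f. (Newer releases may additionally
    // require the input TensorInfo to be marked as constant.)
    std::vector<float> inputData{2.0f};
    std::vector<float> outputData(1);

    armnn::InputTensors inputTensors{
        {0, armnn::ConstTensor(runtime->GetInputTensorInfo(networkId, 0), inputData.data())}};
    armnn::OutputTensors outputTensors{
        {0, armnn::Tensor(runtime->GetOutputTensorInfo(networkId, 0), outputData.data())}};

    runtime->EnqueueWorkload(networkId, inputTensors, outputTensors);

    std::cout << "2 + 2 = " << outputData[0] << std::endl;
    return 0;
}
```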

The ‘ExecuteNetwork’ program, in armnn/tests/ExecuteNetwork, has no additional dependencies beyond those required by Arm NN and the model parsers. It takes any model and any input tensor, and simply prints out the output tensor. Run it with no arguments to see command-line help.

The ‘ArmnnConverter’ program, in armnn/src/armnnConverter, has no additional dependencies beyond those required by Arm NN and the model parsers. It takes a model in TensorFlow format and produces a serialized model in Arm NN format. Run it with no arguments to see command-line help. Note that this program can only convert models for which all operations are supported by the serialization tool src/armnnSerializer.

The ‘ArmnnQuantizer’ program, in armnn/src/armnnQuantizer, has no additional dependencies beyond those required by Arm NN and the model parsers. It takes a 32-bit float network and converts it into a quantized asymmetric 8-bit or quantized symmetric 16-bit network. Static quantization is supported by default, but dynamic quantization can be enabled if a CSV file of raw input tensors is specified. Run it with no arguments to see command-line help.
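
For intuition, the sketch below shows the standard affine (asymmetric) 8-bit quantization scheme that such a conversion is based on. It is a generic illustration, not a description of ArmnnQuantizer's internal range tracking; the helper names are invented for this example.

```cpp
// Generic affine (asymmetric) 8-bit quantization, shown for intuition only;
// ArmnnQuantizer's actual range tracking and rounding details may differ.
#include <algorithm>
#include <cmath>
#include <cstdint>
#include <iostream>

struct QuantizationParams
{
    float   scale;
    int32_t zeroPoint;
};

// Map the observed float range [min, max] onto the integer range [0, 255].
QuantizationParams ComputeQAsymmU8Params(float min, float max)
{
    min = std::min(min, 0.0f); // the representable range must contain zero
    max = std::max(max, 0.0f);
    const float scale = (max - min) / 255.0f;
    const int32_t zeroPoint = static_cast<int32_t>(std::round(-min / scale));
    return { scale, zeroPoint };
}

uint8_t Quantize(float value, const QuantizationParams& qp)
{
    const int32_t q = static_cast<int32_t>(std::round(value / qp.scale)) + qp.zeroPoint;
    return static_cast<uint8_t>(std::max(0, std::min(255, q)));
}

float Dequantize(uint8_t value, const QuantizationParams& qp)
{
    return (static_cast<int32_t>(value) - qp.zeroPoint) * qp.scale;
}

int main()
{
    const QuantizationParams qp = ComputeQAsymmU8Params(-1.0f, 3.0f);
    std::cout << "scale = " << qp.scale << ", zero point = " << qp.zeroPoint << "\n";

    const uint8_t q = Quantize(0.5f, qp);
    std::cout << "0.5f quantizes to " << static_cast<int>(q)
              << " and dequantizes back to " << Dequantize(q, qp) << "\n";
    return 0;
}
```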

Note that Arm NN needs to be built against a particular version of Arm's Compute Library. The get_compute_library.sh script in the scripts subdirectory clones the Compute Library from the review.mlplatform.org repository into a directory named ‘clframework’ alongside armnn and checks out the correct revision.

For FAQs and troubleshooting advice, see FAQ.md.

License

Arm NN is provided under the MIT license. See LICENSE for more information. Contributions to this project are accepted under the same license.

Individual files contain the following tag instead of the full license text.

SPDX-License-Identifier: MIT

This enables machine processing of license information based on the SPDX License Identifiers that are available here: http://spdx.org/licenses/
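
For example, a source file header carrying the tag typically looks something like the following; the exact wording of the copyright line varies from file to file, so this is only indicative.

```cpp
//
// Copyright © <year> Arm Ltd and Contributors. All rights reserved.
// SPDX-License-Identifier: MIT
//
```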

Third party tools used by Arm NN:

| Tool | License (SPDX ID) | Description | Version | Provenience |
|------|-------------------|-------------|---------|-------------|
| cxxopts | MIT | A lightweight C++ option parser library | SHA 12e496da3d486b87fa9df43edea65232ed852510 | https://github.com/jarro2783/cxxopts |
| fmt | MIT | {fmt} is an open-source formatting library providing a fast and safe alternative to C stdio and C++ iostreams. | 7.0.1 | https://github.com/fmtlib/fmt |
| ghc | MIT | A header-only single-file std::filesystem compatible helper library | 1.3.2 | https://github.com/gulrak/filesystem |
| half | MIT | IEEE 754 conformant 16-bit half-precision floating point library | 1.12.0 | http://half.sourceforge.net |
| mapbox/variant | BSD | A header-only alternative to ‘boost::variant’ | 1.1.3 | https://github.com/mapbox/variant |
| stb | MIT | Image loader, resize and writer | 2.16 | https://github.com/nothings/stb |
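
As a small illustration of one of these dependencies, the sketch below shows the general shape of command-line parsing with cxxopts. The program and option names are invented for this example and are not the flags of ExecuteNetwork or any other Arm NN tool.

```cpp
// Hypothetical cxxopts usage; the option names here are invented for
// illustration and do not correspond to real Arm NN tool flags.
#include <cxxopts.hpp>

#include <iostream>
#include <string>

int main(int argc, char* argv[])
{
    cxxopts::Options options("example-tool", "Demonstrates the cxxopts parsing style");
    options.add_options()
        ("m,model", "Path to a model file", cxxopts::value<std::string>())
        ("b,backend", "Backend to run on", cxxopts::value<std::string>()->default_value("CpuRef"))
        ("h,help", "Print this help");

    auto result = options.parse(argc, argv);
    if (result.count("help") || result.count("model") == 0)
    {
        std::cout << options.help() << std::endl;
        return 0;
    }

    std::cout << "model:   " << result["model"].as<std::string>() << "\n"
              << "backend: " << result["backend"].as<std::string>() << "\n";
    return 0;
}
```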

Contributions

The Arm NN project welcomes contributions. For more details on contributing to Arm NN see the Contributing page on the MLPlatform.org website, or see the Contributor Guide.