commit    adddddb6cbcb777d92a8c464c9ad0cb9aecc76a3
author    Matteo Martincigh <matteo.martincigh@arm.com>  Thu Jan 24 14:06:23 2019 +0000
committer Matteo Martincigh <matteo.martincigh@arm.com>  Wed Jan 30 14:03:28 2019 +0000
tree      b15de32bf9f8612f66e1ae23d2f8009e80e7d0e6
parent    d089b74bebbcc8518fb0f4eacb7e6569ae170199
IVGCVSW-2458 Refactor the Optimize function (Network.cpp) so that subgraphs are optimized by the backends

* Added a new method OptimizeSubGraph to the backend interface
* Refactored the Optimize function so that the backend-specific optimization is performed by the backend itself (through the new OptimizeSubGraph interface method)
* Added a new ApplyBackendOptimizations function to apply the new changes
* Added some new convenience constructors to the SubGraph class
* Added an AddLayer method and a pointer to the parent graph to the SubGraph class
* Updated the sub-graph unit tests to match the changes
* Added SelectSubGraphs and ReplaceSubGraphConnections overloads that work with sub-graphs
* Removed unused code and performed minor refactoring where necessary

Change-Id: I46181794c6a9e3b10558944f804e06a8f693a6d0
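The commit message above describes moving backend-specific optimization behind an OptimizeSubGraph method on the backend interface, driven by an ApplyBackendOptimizations step. The following is a minimal self-contained sketch of that pattern; the class and method names mirror the commit message, but the simplified SubGraph type, the FusingBackend, and all signatures are illustrative stand-ins, not the real Arm NN declarations:

```cpp
#include <cassert>
#include <memory>
#include <string>
#include <vector>

// Illustrative stand-in for a sub-graph of layers selected for one backend.
struct SubGraph
{
    std::vector<std::string> layers;
};

// Backend interface: each backend optimizes the sub-graphs assigned to it.
// The OptimizeSubGraph name mirrors the commit message; the signature is a sketch.
class IBackend
{
public:
    virtual ~IBackend() = default;
    // Returns the optimized replacement, or nullptr if no optimization applies.
    virtual std::unique_ptr<SubGraph> OptimizeSubGraph(const SubGraph& subGraph) const = 0;
};

// A toy backend that fuses each adjacent pair of layers into a single layer.
class FusingBackend : public IBackend
{
public:
    std::unique_ptr<SubGraph> OptimizeSubGraph(const SubGraph& subGraph) const override
    {
        auto result = std::make_unique<SubGraph>();
        for (size_t i = 0; i < subGraph.layers.size(); i += 2)
        {
            if (i + 1 < subGraph.layers.size())
            {
                result->layers.push_back(subGraph.layers[i] + "+" + subGraph.layers[i + 1]);
            }
            else
            {
                result->layers.push_back(subGraph.layers[i]); // odd layer out, kept as-is
            }
        }
        return result;
    }
};

// Mirrors the ApplyBackendOptimizations step: hand each selected sub-graph
// to its backend and splice the optimized version back in its place.
void ApplyBackendOptimizations(const IBackend& backend, std::vector<SubGraph>& subGraphs)
{
    for (auto& sg : subGraphs)
    {
        if (auto optimized = backend.OptimizeSubGraph(sg))
        {
            sg = *optimized;
        }
    }
}
```

With this toy backend, a sub-graph of {Conv2d, BatchNorm, Relu} comes back as {Conv2d+BatchNorm, Relu}: the key design point is that the optimization logic lives in the backend, while the network-level Optimize function only selects sub-graphs and applies the replacements.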
For more information about Arm NN, see: https://developer.arm.com/products/processors/machine-learning/arm-nn
There is a getting started guide here using TensorFlow: https://developer.arm.com/technologies/machine-learning-on-arm/developer-material/how-to-guides/configuring-the-arm-nn-sdk-build-environment-for-tensorflow
There is a getting started guide here using TensorFlow Lite: https://developer.arm.com/technologies/machine-learning-on-arm/developer-material/how-to-guides/configuring-the-arm-nn-sdk-build-environment-for-tensorflow-lite
There is a getting started guide here using Caffe: https://developer.arm.com/technologies/machine-learning-on-arm/developer-material/how-to-guides/configuring-the-arm-nn-sdk-build-environment-for-caffe
There is a getting started guide here using ONNX: https://developer.arm.com/technologies/machine-learning-on-arm/developer-material/how-to-guides/configuring-the-arm-nn-sdk-build-environment-for-onnx
There is a guide for backend development: Backend development guide
Arm tests the build system of Arm NN with the following build environments:
Arm NN is written using portable C++14 and the build system uses CMake so it is possible to build for a wide variety of target platforms, from a wide variety of host environments.
The armnn/tests directory contains tests used during ArmNN development. Many of them depend on third-party IP, model protobufs and image files not distributed with ArmNN. The dependencies of some of the tests are available freely on the Internet, for those who wish to experiment.
The ‘ExecuteNetwork’ program, in armnn/tests/ExecuteNetwork, has no additional dependencies beyond those required by ArmNN and the model parsers. It takes any model and any input tensor, and simply prints out the output tensor. Run with no arguments to see command-line help.
The ‘armnn/samples’ directory contains SimpleSample.cpp, a very basic example of the ArmNN SDK API in use.
Note that Arm NN needs to be built against a particular version of Arm's Compute Library. The get_compute_library.sh script in the scripts subdirectory will clone the Compute Library from the review.mlplatform.org Git repository into a directory alongside armnn named ‘clframework’ and check out the correct revision.
Arm NN is provided under the MIT license. See LICENSE for more information. Contributions to this project are accepted under the same license.
Individual files contain the following tag instead of the full license text.
SPDX-License-Identifier: MIT
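As an illustration, a source file header carrying this tag might look like the following (example contents, not copied from any particular file in the repository):

```cpp
//
// Copyright © 2019 Arm Ltd. All rights reserved.
// SPDX-License-Identifier: MIT
//
```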
This enables machine processing of license information based on the SPDX License Identifiers that are available here: http://spdx.org/licenses/
The ArmNN project welcomes contributions. Please see the Contributor Guide for more details.