This page describes how to build and use the TensorFlow Lite library with the CMake tool.
The following instructions have been tested on an Ubuntu 16.04.3 64-bit PC (AMD64), the TensorFlow devel Docker image tensorflow/tensorflow:devel, and Windows 10.
Note: This feature is experimental, has been available since version 2.4, and may change.
It requires CMake 3.16 or higher. On Ubuntu, you can simply run the following command.
sudo apt-get install cmake
Or you can follow the official CMake installation guide.
git clone https://github.com/tensorflow/tensorflow.git tensorflow_src
Note: If you're using the TensorFlow Docker image, the repo is already provided in /tensorflow_src/.
mkdir tflite_build
cd tflite_build
CMake generates an optimized release binary by default. If you want to build for your workstation, simply run the following command.
cmake ../tensorflow_src/tensorflow/lite
If you need to produce a debug build with symbol information, provide the -DCMAKE_BUILD_TYPE=Debug option.
cmake ../tensorflow_src/tensorflow/lite -DCMAKE_BUILD_TYPE=Debug
You can use CMake to build Android binaries. You need to install the Android NDK and provide the NDK path with the -DCMAKE_TOOLCHAIN_FILE
flag. You also need to set the target ABI with the -DANDROID_ABI
flag.
cmake -DCMAKE_TOOLCHAIN_FILE=<NDK path>/build/cmake/android.toolchain.cmake \
  -DANDROID_ABI=arm64-v8a ../tensorflow_src/tensorflow/lite
If your target machine has OpenCL support, you can use the GPU delegate to leverage your GPU's compute power.
To configure OpenCL GPU delegate support:
cmake ../tensorflow_src/tensorflow/lite -DTFLITE_ENABLE_GPU=ON
Note: This is experimental and available only on the master (r2.5) branch. There could be compatibility issues. It has only been verified with Android devices and NVIDIA CUDA OpenCL 1.2.
In the tflite_build directory,
cmake --build . -j
Note: This generates a static library, libtensorflow-lite.a,
in the current directory, but the library isn't self-contained since the transitive dependencies are not included. To use the library properly, you need to create a CMake project. Please refer to the "Create a CMake project which uses TensorFlow Lite" section.
In the tflite_build directory,
cmake --build . -j -t benchmark_model
cmake --build . -j -t label_image
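Once built, the tools can be run from the tflite_build directory. The sketch below assumes placeholder model, image, and label files (model.tflite, mobilenet_v1.tflite, grace_hopper.bmp, and labels.txt are not provided by the build); the output paths reflect the default CMake build layout.

```shell
# Benchmark a model (model.tflite is a placeholder path).
./tools/benchmark/benchmark_model --graph=model.tflite --num_threads=4

# Classify an image with the label_image example
# (all three file arguments are placeholders you must supply).
./examples/label_image/label_image \
  -m mobilenet_v1.tflite \
  -i grace_hopper.bmp \
  -l labels.txt
```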
Here is the list of available options. You can override them with -D<option_name>=[ON|OFF]
. For example, pass -DTFLITE_ENABLE_XNNPACK=OFF
to disable XNNPACK, which is enabled by default.
| Option Name | Feature | Default |
|---|---|---|
| TFLITE_ENABLE_RUY | Enable the RUY matrix multiplication library | OFF |
| TFLITE_ENABLE_NNAPI | Enable the NNAPI delegate | ON (Android) |
| TFLITE_ENABLE_GPU | Enable the GPU delegate | OFF |
| TFLITE_ENABLE_XNNPACK | Enable the XNNPACK delegate | ON |
| TFLITE_ENABLE_MMAP | Enable MMAP (unsupported on Windows) | ON |
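Several of these flags can be combined in a single configure invocation. For illustration, the following (run from the tflite_build directory) enables the GPU delegate while disabling XNNPACK:

```shell
# Combine build options in one configure step.
cmake ../tensorflow_src/tensorflow/lite \
  -DTFLITE_ENABLE_GPU=ON \
  -DTFLITE_ENABLE_XNNPACK=OFF
```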
Here is the CMakeLists.txt of the TFLite minimal example.
You need to call add_subdirectory() on the TensorFlow Lite directory and link tensorflow-lite
with target_link_libraries().
cmake_minimum_required(VERSION 3.16)
project(minimal C CXX)

set(TENSORFLOW_SOURCE_DIR "" CACHE PATH
  "Directory that contains the TensorFlow project"
)
if(NOT TENSORFLOW_SOURCE_DIR)
  get_filename_component(TENSORFLOW_SOURCE_DIR
    "${CMAKE_CURRENT_LIST_DIR}/../../../../" ABSOLUTE)
endif()

add_subdirectory(
  "${TENSORFLOW_SOURCE_DIR}/tensorflow/lite"
  "${CMAKE_CURRENT_BINARY_DIR}/tensorflow-lite"
  EXCLUDE_FROM_ALL)

add_executable(minimal minimal.cc)
target_link_libraries(minimal
  tensorflow-lite
  ${CMAKE_DL_LIBS})
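The example above can then be configured and built like any other CMake project. A sketch, assuming the CMakeLists.txt and minimal.cc live in a directory named minimal and that /path/to/tensorflow_src is a placeholder for your TensorFlow checkout:

```shell
# Configure and build the minimal example out of tree.
mkdir minimal_build
cd minimal_build
cmake ../minimal -DTENSORFLOW_SOURCE_DIR=/path/to/tensorflow_src
cmake --build . -j

# The minimal binary takes a .tflite model path as its argument.
./minimal model.tflite
```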