Build TensorFlow Lite with CMake

This page describes how to build and use the TensorFlow Lite library with the CMake tool.

The following instructions have been tested on an Ubuntu 16.04.3 64-bit PC (AMD64), the TensorFlow devel Docker image tensorflow/tensorflow:devel, and Windows 10.

Note: This feature is experimental, available since version 2.4, and may change.

Step 1. Install CMake tool

TensorFlow Lite requires CMake 3.16 or higher. On Ubuntu, you can install it with the following command.

sudo apt-get install cmake
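To confirm that the installed version meets the 3.16 minimum, you can compare it from the shell. This is a quick sketch; the `sort -V` version comparison assumes GNU coreutils.

```shell
# Extract the installed CMake version and compare it against the 3.16 minimum.
required="3.16"
current="$(cmake --version | head -n1 | awk '{print $3}')"
if [ "$(printf '%s\n' "$required" "$current" | sort -V | head -n1)" = "$required" ]; then
  echo "CMake $current satisfies the minimum of $required"
else
  echo "CMake $current is too old; $required or higher is required"
fi
```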

Or you can follow the official CMake installation guide.

Step 2. Clone TensorFlow repository

git clone https://github.com/tensorflow/tensorflow.git tensorflow_src

Note: If you're using the TensorFlow Docker image, the repo is already provided in /tensorflow_src/.

Step 3. Create CMake build directory

mkdir tflite_build
cd tflite_build

Step 4. Run CMake tool with configurations

Release build

CMake generates an optimized release binary by default. To build for your workstation, simply run the following command.

cmake ../tensorflow_src/tensorflow/lite

Debug build

To produce a debug build with symbol information, provide the -DCMAKE_BUILD_TYPE=Debug option.

cmake ../tensorflow_src/tensorflow/lite -DCMAKE_BUILD_TYPE=Debug

Cross-compilation for Android

You can use CMake to build Android binaries. Install the Android NDK and pass its toolchain file with the -DCMAKE_TOOLCHAIN_FILE flag. You also need to set the target ABI with the -DANDROID_ABI flag.

cmake -DCMAKE_TOOLCHAIN_FILE=<NDK path>/build/cmake/android.toolchain.cmake \
  -DANDROID_ABI=arm64-v8a ../tensorflow_src/tensorflow/lite
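To target other device classes, switch the ABI; the NDK toolchain file also accepts ANDROID_PLATFORM to set the minimum API level. The sketch below targets 32-bit ARM; the android-23 level is an illustrative assumption, so pick the minimum level your app actually supports.

```shell
# Configure a 32-bit ARM build; ANDROID_PLATFORM sets the minimum supported API level.
cmake -DCMAKE_TOOLCHAIN_FILE=<NDK path>/build/cmake/android.toolchain.cmake \
  -DANDROID_ABI=armeabi-v7a \
  -DANDROID_PLATFORM=android-23 \
  ../tensorflow_src/tensorflow/lite
```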

OpenCL GPU delegate

If your target machine has OpenCL support, you can use the GPU delegate to leverage your GPU.

To configure OpenCL GPU delegate support:

cmake ../tensorflow_src/tensorflow/lite -DTFLITE_ENABLE_GPU=ON

Note: This is experimental and available only on the master (r2.5) branch, so there could be compatibility issues. It has only been verified with Android devices and NVIDIA CUDA OpenCL 1.2.

Step 5. Build TensorFlow Lite

In the tflite_build directory,

cmake --build . -j

Note: This generates a static library, libtensorflow-lite.a, in the current directory, but the library isn't self-contained because its transitive dependencies are not included. To use the library properly, you need to create a CMake project. Please refer to the "Create a CMake project which uses TensorFlow Lite" section.

Step 6. Build TensorFlow Lite Benchmark Tool and Label Image Example (Optional)

In the tflite_build directory,

cmake --build . -j -t benchmark_model
cmake --build . -j -t label_image
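Once built, the binaries land under the build tree. The paths below reflect the CMake target layout and may differ between versions, and model.tflite stands in for a model you supply yourself.

```shell
# Benchmark a model with 4 threads; --graph points at your own .tflite file.
./tools/benchmark/benchmark_model --graph=model.tflite --num_threads=4
```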

Available Options to build TensorFlow Lite

Here is the list of available options. You can override them with -D<option_name>=[ON|OFF]. For example, pass -DTFLITE_ENABLE_XNNPACK=OFF to disable XNNPACK, which is enabled by default.

| Option Name           | Feature                                       | Default      |
|-----------------------|-----------------------------------------------|--------------|
| TFLITE_ENABLE_RUY     | Enable the RUY matrix multiplication library  | OFF          |
| TFLITE_ENABLE_NNAPI   | Enable the NNAPI delegate                     | ON (Android) |
| TFLITE_ENABLE_GPU     | Enable the GPU delegate                       | OFF          |
| TFLITE_ENABLE_XNNPACK | Enable the XNNPACK delegate                   | ON           |
| TFLITE_ENABLE_MMAP    | Enable MMAP (unsupported on Windows)          | ON           |
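Multiple options can be combined in a single configure step. The sketch below enables the GPU delegate while disabling XNNPACK; this is an illustrative combination, not a recommendation.

```shell
# Configure with the GPU delegate enabled and the XNNPACK delegate disabled.
cmake ../tensorflow_src/tensorflow/lite \
  -DTFLITE_ENABLE_GPU=ON \
  -DTFLITE_ENABLE_XNNPACK=OFF
```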

Create a CMake project which uses TensorFlow Lite

Here is the CMakeLists.txt of TFLite minimal example.

You need to add the TensorFlow Lite directory with add_subdirectory() and link tensorflow-lite with target_link_libraries().

cmake_minimum_required(VERSION 3.16)
project(minimal C CXX)

set(TENSORFLOW_SOURCE_DIR "" CACHE PATH
  "Directory that contains the TensorFlow project" )
if(NOT TENSORFLOW_SOURCE_DIR)
  get_filename_component(TENSORFLOW_SOURCE_DIR
    "${CMAKE_CURRENT_LIST_DIR}/../../../../" ABSOLUTE)
endif()

add_subdirectory(
  "${TENSORFLOW_SOURCE_DIR}/tensorflow/lite"
  "${CMAKE_CURRENT_BINARY_DIR}/tensorflow-lite" EXCLUDE_FROM_ALL)

add_executable(minimal minimal.cc)
target_link_libraries(minimal tensorflow-lite ${CMAKE_DL_LIBS})
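With the repository cloned as in Step 2, the minimal example can be configured and built out of tree. The examples/minimal path matches the relative source layout assumed by the CMakeLists.txt above (four directories above the example is the repository root).

```shell
# Configure and build the minimal example in its own build directory.
mkdir minimal_build
cd minimal_build
cmake ../tensorflow_src/tensorflow/lite/examples/minimal
cmake --build . -j
```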