{
"cells": [
{
"cell_type": "markdown",
"metadata": {
"colab_type": "text",
"id": "_DDaAex5Q7u-"
},
"source": [
"##### Copyright 2019 The TensorFlow Authors."
]
},
{
"cell_type": "code",
"execution_count": 0,
"metadata": {
"cellView": "form",
"colab": {},
"colab_type": "code",
"id": "W1dWWdNHQ9L0"
},
"outputs": [],
"source": [
"#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n",
"# you may not use this file except in compliance with the License.\n",
"# You may obtain a copy of the License at\n",
"#\n",
"# https://www.apache.org/licenses/LICENSE-2.0\n",
"#\n",
"# Unless required by applicable law or agreed to in writing, software\n",
"# distributed under the License is distributed on an \"AS IS\" BASIS,\n",
"# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n",
"# See the License for the specific language governing permissions and\n",
"# limitations under the License."
]
},
{
"cell_type": "markdown",
"metadata": {
"colab_type": "text",
"id": "6Y8E0lw5eYWm"
},
"source": [
"# Post-training integer quantization"
]
},
{
"cell_type": "markdown",
"metadata": {
"colab_type": "text",
"id": "CIGrZZPTZVeO"
},
"source": [
"<table class=\"tfo-notebook-buttons\" align=\"left\">\n",
" <td>\n",
" <a target=\"_blank\" href=\"https://www.tensorflow.org/lite/performance/post_training_integer_quant\"><img src=\"https://www.tensorflow.org/images/tf_logo_32px.png\" />View on TensorFlow.org</a>\n",
" </td>\n",
" <td>\n",
" <a target=\"_blank\" href=\"https://colab.research.google.com/github/tensorflow/tensorflow/blob/master/tensorflow/lite/g3doc/performance/post_training_integer_quant.ipynb\"><img src=\"https://www.tensorflow.org/images/colab_logo_32px.png\" />Run in Google Colab</a>\n",
" </td>\n",
" <td>\n",
" <a target=\"_blank\" href=\"https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/g3doc/performance/post_training_integer_quant.ipynb\"><img src=\"https://www.tensorflow.org/images/GitHub-Mark-32px.png\" />View source on GitHub</a>\n",
" </td>\n",
"</table>"
]
},
{
"cell_type": "markdown",
"metadata": {
"colab_type": "text",
"id": "BTC1rDAuei_1"
},
"source": [
"## Overview\n",
"\n",
"[TensorFlow Lite](https://www.tensorflow.org/lite/) now supports\n",
"converting all model values (weights and activations) to 8-bit integers when converting from TensorFlow to TensorFlow Lite's flat buffer format. This results in a 4x reduction in model size and a 3 to 4x performance improvement on CPU performance. In addition, this fully quantized model can be consumed by integer-only hardware accelerators.\n",
"\n",
"In contrast to [post-training \"on-the-fly\" quantization](https://colab.sandbox.google.com/github/tensorflow/tensorflow/blob/master/tensorflow/lite/tutorials/post_training_quant.ipynb)—which stores only the weights as 8-bit integers—this technique statically quantizes all weights *and* activations during model conversion.\n",
"\n",
"In this tutorial, you'll train an MNIST model from scratch, check its accuracy in TensorFlow, and then convert the saved model into a Tensorflow Lite flatbuffer\n",
"with full quantization. Finally, you'll check the\n",
"accuracy of the converted model and compare it to the original float model.\n",
"\n",
"The training script, `mnist.py`, is available from the\n",
"[TensorFlow official MNIST tutorial](https://github.com/tensorflow/models/tree/master/official/mnist).\n",
"\n",
"**Note:** Currently, TensorFlow 2.x does not allow you to specify the model's input/output type when using post-training quantization. So this tutorial uses TensorFlow 1.x in order to use the ```inference_input_type``` and ```inference_output_type``` options with the TFLiteConverter—allowing for complete quantization end-to-end. Work is ongoing to bring this functionality to TensorFlow 2.x.\n"
]
},
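{
"cell_type": "markdown",
"metadata": {
"colab_type": "text"
},
"source": [
"For background, quantization here means representing each real-valued tensor with 8-bit integers through an affine mapping:\n",
"\n",
"$$real\\_value = (quantized\\_value - zero\\_point) \\times scale$$\n",
"\n",
"The converter picks a `scale` and `zero_point` for each tensor so that the 8-bit range covers that tensor's dynamic range. For weights, the range is known from the stored values; for activations, the converter must estimate it from sample data, which is why you provide a representative dataset later in this tutorial."
]
},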
{
"cell_type": "markdown",
"metadata": {
"colab_type": "text",
"id": "2XsEP17Zelz9"
},
"source": [
"## Build an MNIST model"
]
},
{
"cell_type": "markdown",
"metadata": {
"colab_type": "text",
"id": "dDqqUIZjZjac"
},
"source": [
"### Setup"
]
},
{
"cell_type": "code",
"execution_count": 0,
"metadata": {
"colab": {},
"colab_type": "code",
"id": "WsN6s5L1ieNl"
},
"outputs": [],
"source": [
"try:\n",
" # %tensorflow_version only exists in Colab.\n",
" %tensorflow_version 1.x\n",
"except Exception:\n",
" pass\n",
"import tensorflow as tf\n",
"\n",
"tf.enable_eager_execution()"
]
},
{
"cell_type": "code",
"execution_count": 0,
"metadata": {
"colab": {},
"colab_type": "code",
"id": "00U0taBoe-w7"
},
"outputs": [],
"source": [
"! git clone --depth 1 https://github.com/tensorflow/models"
]
},
{
"cell_type": "code",
"execution_count": 0,
"metadata": {
"colab": {},
"colab_type": "code",
"id": "4XZPtSh-fUOc"
},
"outputs": [],
"source": [
"import sys\n",
"import os\n",
"\n",
"if sys.version_info.major >= 3:\n",
" import pathlib\n",
"else:\n",
" import pathlib2 as pathlib\n",
"\n",
"# Add `models` to the python path.\n",
"models_path = os.path.join(os.getcwd(), \"models\")\n",
"sys.path.append(models_path)"
]
},
{
"cell_type": "markdown",
"metadata": {
"colab_type": "text",
"id": "eQ6Q0qqKZogR"
},
"source": [
"### Train and export the model"
]
},
{
"cell_type": "code",
"execution_count": 0,
"metadata": {
"colab": {},
"colab_type": "code",
"id": "eMsw_6HujaqM"
},
"outputs": [],
"source": [
"saved_models_root = \"/tmp/mnist_saved_model\""
]
},
{
"cell_type": "code",
"execution_count": 0,
"metadata": {
"colab": {},
"colab_type": "code",
"id": "hWSAjQWagIHl"
},
"outputs": [],
"source": [
"# The above path addition is not visible to subprocesses, add the path for the subprocess as well.\n",
"# Note: channels_last is required here or the conversion may fail. \n",
"!PYTHONPATH={models_path} python models/official/r1/mnist/mnist.py --train_epochs=1 --export_dir {saved_models_root} --data_format=channels_last"
]
},
{
"cell_type": "markdown",
"metadata": {
"colab_type": "text",
"id": "5NMaNZQCkW9X"
},
"source": [
"This training won't take long because you're training the model for just a single epoch, which trains to about 96% accuracy."
]
},
{
"cell_type": "markdown",
"metadata": {
"colab_type": "text",
"id": "xl8_fzVAZwOh"
},
"source": [
"### Convert to a TensorFlow Lite model\n",
"\n",
"Using the [Python `TFLiteConverter`](https://www.tensorflow.org/lite/convert/python_api), you can now convert the trained model into a TensorFlow Lite model.\n",
"\n",
"The trained model is saved in the `saved_models_root` directory, which is named with a timestamp. So select the most recent directory: "
]
},
{
"cell_type": "code",
"execution_count": 0,
"metadata": {
"colab": {},
"colab_type": "code",
"id": "Xp5oClaZkbtn"
},
"outputs": [],
"source": [
"saved_model_dir = str(sorted(pathlib.Path(saved_models_root).glob(\"*\"))[-1])\n",
"saved_model_dir"
]
},
{
"cell_type": "markdown",
"metadata": {
"colab_type": "text",
"id": "AT8BgkKmljOy"
},
"source": [
"Now load the model using the `TFLiteConverter`:"
]
},
{
"cell_type": "code",
"execution_count": 0,
"metadata": {
"colab": {},
"colab_type": "code",
"id": "_i8B2nDZmAgQ"
},
"outputs": [],
"source": [
"import tensorflow as tf\n",
"tf.enable_eager_execution()\n",
"tf.logging.set_verbosity(tf.logging.DEBUG)\n",
"\n",
"converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir)\n",
"tflite_model = converter.convert()"
]
},
{
"cell_type": "markdown",
"metadata": {
"colab_type": "text",
"id": "F2o2ZfF0aiCx"
},
"source": [
"Write it out to a `.tflite` file:"
]
},
{
"cell_type": "code",
"execution_count": 0,
"metadata": {
"colab": {},
"colab_type": "code",
"id": "vptWZq2xnclo"
},
"outputs": [],
"source": [
"tflite_models_dir = pathlib.Path(\"/tmp/mnist_tflite_models/\")\n",
"tflite_models_dir.mkdir(exist_ok=True, parents=True)"
]
},
{
"cell_type": "code",
"execution_count": 0,
"metadata": {
"colab": {},
"colab_type": "code",
"id": "Ie9pQaQrn5ue"
},
"outputs": [],
"source": [
"tflite_model_file = tflite_models_dir/\"mnist_model.tflite\"\n",
"tflite_model_file.write_bytes(tflite_model)"
]
},
{
"cell_type": "markdown",
"metadata": {
"colab_type": "text",
"id": "7BONhYtYocQY"
},
"source": [
"Now you have a trained MNIST model that's converted to a `.tflite` file, but it's still using 32-bit float values for all parameter data.\n",
"\n",
"So let's convert the model again, this time using quantization...\n",
"\n",
"#### Convert using quantization",
"\n",
"First, first set the `optimizations` flag to optimize for size:"
]
},
{
"cell_type": "code",
"execution_count": 0,
"metadata": {
"colab": {},
"colab_type": "code",
"id": "HEZ6ET1AHAS3"
},
"outputs": [],
"source": [
"tf.logging.set_verbosity(tf.logging.INFO)\n",
"converter.optimizations = [tf.lite.Optimize.DEFAULT]"
]
},
{
"cell_type": "markdown",
"metadata": {
"colab_type": "text",
"id": "rTe8avZJHMDO"
},
"source": [
"Now, in order to create quantized values with an accurate dynamic range of activations, you need to provide a representative dataset:"
]
},
{
"cell_type": "code",
"execution_count": 0,
"metadata": {
"colab": {},
"colab_type": "code",
"id": "FiwiWU3gHdkW"
},
"outputs": [],
"source": [
"mnist_train, _ = tf.keras.datasets.mnist.load_data()\n",
"images = tf.cast(mnist_train[0], tf.float32)/255.0\n",
"mnist_ds = tf.data.Dataset.from_tensor_slices((images)).batch(1)\n",
"def representative_data_gen():\n",
" for input_value in mnist_ds.take(100):\n",
" yield [input_value]\n",
"\n",
"converter.representative_dataset = representative_data_gen"
]
},
{
"cell_type": "markdown",
"metadata": {
"colab_type": "text",
"id": "xW84iMYjHd9t"
},
"source": [
"Finally, convert the model to TensorFlow Lite format:"
]
},
{
"cell_type": "code",
"execution_count": 0,
"metadata": {
"colab": {},
"colab_type": "code",
"id": "yuNfl3CoHNK3"
},
"outputs": [],
"source": [
"tflite_model_quant = converter.convert()\n",
"tflite_model_quant_file = tflite_models_dir/\"mnist_model_quant.tflite\"\n",
"tflite_model_quant_file.write_bytes(tflite_model_quant)"
]
},
{
"cell_type": "markdown",
"metadata": {
"colab_type": "text",
"id": "PhMmUTl4sbkz"
},
"source": [
"Note how the resulting file is approximately `1/4` the size:"
]
},
{
"cell_type": "code",
"execution_count": 0,
"metadata": {
"colab": {},
"colab_type": "code",
"id": "JExfcfLDscu4"
},
"outputs": [],
"source": [
"!ls -lh {tflite_models_dir}"
]
},
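{
"cell_type": "markdown",
"metadata": {
"colab_type": "text"
},
"source": [
"To double-check the ratio in Python, here is a minimal sketch that compares the two files written above (`tflite_model_file` and `tflite_model_quant_file`):"
]
},
{
"cell_type": "code",
"execution_count": 0,
"metadata": {
"colab": {},
"colab_type": "code"
},
"outputs": [],
"source": [
"# Compare the on-disk sizes of the float and quantized models.\n",
"float_size = os.path.getsize(str(tflite_model_file))\n",
"quant_size = os.path.getsize(str(tflite_model_quant_file))\n",
"print(\"Float model:     %.1f KB\" % (float_size / 1024.0))\n",
"print(\"Quantized model: %.1f KB\" % (quant_size / 1024.0))\n",
"print(\"Size ratio:      %.2fx\" % (float_size / float(quant_size)))"
]
},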
{
"cell_type": "markdown",
"metadata": {
"colab_type": "text",
"id": "RACBJuj2XO8x"
},
"source": [
"Your model should now be fully quantized. However, if you convert a model that includes any operations that TensorFlow Lite cannot quantize, those ops are left in floating point. This allows for conversion to complete so you have a smaller and more efficient model, but the model won't be compatible with some ML accelerators that require full integer quantization. Also, by default, the converted model still use float input and outputs, which also is not compatible with some accelerators.\n",
"\n",
"So to ensure that the converted model is fully quantized (make the converter throw an error if it encounters an operation it cannot quantize), and to use integers for the model's input and output, you need to convert the model again using these additional configurations:"
]
},
{
"cell_type": "code",
"execution_count": 0,
"metadata": {
"colab": {},
"colab_type": "code",
"id": "kzjEjcDs3BHa"
},
"outputs": [],
"source": [
"converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]\n",
"converter.inference_input_type = tf.uint8\n",
"converter.inference_output_type = tf.uint8\n",
"\n",
"tflite_model_quant = converter.convert()\n",
"tflite_model_quant_file = tflite_models_dir/\"mnist_model_quant_io.tflite\"\n",
"tflite_model_quant_file.write_bytes(tflite_model_quant)"
]
},
{
"cell_type": "markdown",
"metadata": {
"colab_type": "text",
"id": "wYd6NxD03yjB"
},
"source": [
"In this example, the resulting model size remains the same because all operations successfully quantized to begin with. However, this new model now uses quantized input and output, making it compatible with more accelerators, such as the Coral Edge TPU.\n",
"\n",
"In the following sections, notice that we are now handling two TensorFlow Lite models: `tflite_model_file` is the converted model that still uses floating-point parameters, and `tflite_model_quant_file` is the same model converted with full integer quantization, including uint8 input and output."
]
},
{
"cell_type": "markdown",
"metadata": {
"colab_type": "text",
"id": "L8lQHMp_asCq"
},
"source": [
"## Run the TensorFlow Lite models"
]
},
{
"cell_type": "markdown",
"metadata": {
"colab_type": "text",
"id": "-5l6-ciItvX6"
},
"source": [
"Run the TensorFlow Lite model using the Python TensorFlow Lite\n",
"Interpreter. \n",
"\n",
"### Load the test data\n",
"\n",
"First, let's load the MNIST test data to feed to the model. Because the quantized model expects uint8 input data, we need to create a separate dataset for that model:"
]
},
{
"cell_type": "code",
"execution_count": 0,
"metadata": {
"colab": {},
"colab_type": "code",
"id": "eTIuU07NuKFL"
},
"outputs": [],
"source": [
"import numpy as np\n",
"_, mnist_test = tf.keras.datasets.mnist.load_data()\n",
"labels = mnist_test[1]\n",
"\n",
"# Load data for float model\n",
"images = tf.cast(mnist_test[0], tf.float32)/255.0\n",
"mnist_ds = tf.data.Dataset.from_tensor_slices((images, labels)).batch(1)\n",
"\n",
"# Load data for quantized model\n",
"images_uint8 = tf.cast(mnist_test[0], tf.uint8)\n",
"mnist_ds_uint8 = tf.data.Dataset.from_tensor_slices((images_uint8, labels)).batch(1)"
]
},
{
"cell_type": "markdown",
"metadata": {
"colab_type": "text",
"id": "Ap_jE7QRvhPf"
},
"source": [
"### Load the model into the interpreters"
]
},
{
"cell_type": "code",
"execution_count": 0,
"metadata": {
"colab": {},
"colab_type": "code",
"id": "Jn16Rc23zTss"
},
"outputs": [],
"source": [
"interpreter = tf.lite.Interpreter(model_path=str(tflite_model_file))\n",
"interpreter.allocate_tensors()"
]
},
{
"cell_type": "code",
"execution_count": 0,
"metadata": {
"colab": {},
"colab_type": "code",
"id": "J8Pztk1mvNVL"
},
"outputs": [],
"source": [
"interpreter_quant = tf.lite.Interpreter(model_path=str(tflite_model_quant_file))\n",
"interpreter_quant.allocate_tensors()"
]
},
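{
"cell_type": "markdown",
"metadata": {
"colab_type": "text"
},
"source": [
"As a quick sanity check, you can inspect each interpreter's input and output details to confirm that the quantized model expects uint8 tensors while the float model expects float32:"
]
},
{
"cell_type": "code",
"execution_count": 0,
"metadata": {
"colab": {},
"colab_type": "code"
},
"outputs": [],
"source": [
"# The dtype entries reflect inference_input_type/inference_output_type.\n",
"print(\"Float model input dtype: \", interpreter.get_input_details()[0]['dtype'])\n",
"print(\"Quant model input dtype: \", interpreter_quant.get_input_details()[0]['dtype'])\n",
"print(\"Quant model output dtype:\", interpreter_quant.get_output_details()[0]['dtype'])"
]
},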
{
"cell_type": "markdown",
"metadata": {
"colab_type": "text",
"id": "2opUt_JTdyEu"
},
"source": [
"### Test the models on one image\n",
"\n",
"First test it on the float model:"
]
},
{
"cell_type": "code",
"execution_count": 0,
"metadata": {
"colab": {},
"colab_type": "code",
"id": "AKslvo2kwWac"
},
"outputs": [],
"source": [
"for img, label in mnist_ds:\n",
" break\n",
"\n",
"interpreter.set_tensor(interpreter.get_input_details()[0][\"index\"], img)\n",
"interpreter.invoke()\n",
"predictions = interpreter.get_tensor(\n",
" interpreter.get_output_details()[0][\"index\"])"
]
},
{
"cell_type": "code",
"execution_count": 0,
"metadata": {
"colab": {},
"colab_type": "code",
"id": "XZClM2vo3_bm"
},
"outputs": [],
"source": [
"import matplotlib.pylab as plt\n",
"\n",
"plt.imshow(img[0])\n",
"template = \"True:{true}, predicted:{predict}\"\n",
"_ = plt.title(template.format(true= str(label[0].numpy()),\n",
" predict=str(predictions[0])))\n",
"plt.grid(False)"
]
},
{
"cell_type": "markdown",
"metadata": {
"colab_type": "text",
"id": "o3N6-UGl1dfE"
},
"source": [
"Now test the quantized model (using the uint8 data):"
]
},
{
"cell_type": "code",
"execution_count": 0,
"metadata": {
"colab": {},
"colab_type": "code",
"id": "3gwhv4lKbYZ4"
},
"outputs": [],
"source": [
"for img, label in mnist_ds_uint8:\n",
" break\n",
"\n",
"interpreter_quant.set_tensor(\n",
" interpreter_quant.get_input_details()[0][\"index\"], img)\n",
"interpreter_quant.invoke()\n",
"predictions = interpreter_quant.get_tensor(\n",
" interpreter_quant.get_output_details()[0][\"index\"])"
]
},
{
"cell_type": "code",
"execution_count": 0,
"metadata": {
"colab": {},
"colab_type": "code",
"id": "CIH7G_MwbY2x"
},
"outputs": [],
"source": [
"plt.imshow(img[0])\n",
"template = \"True:{true}, predicted:{predict}\"\n",
"_ = plt.title(template.format(true= str(label[0].numpy()),\n",
" predict=str(predictions[0])))\n",
"plt.grid(False)"
]
},
{
"cell_type": "markdown",
"metadata": {
"colab_type": "text",
"id": "LwN7uIdCd8Gw"
},
"source": [
"### Evaluate the models"
]
},
{
"cell_type": "code",
"execution_count": 0,
"metadata": {
"colab": {},
"colab_type": "code",
"id": "05aeAuWjvjPx"
},
"outputs": [],
"source": [
"def eval_model(interpreter, mnist_ds):\n",
" total_seen = 0\n",
" num_correct = 0\n",
"\n",
" input_index = interpreter.get_input_details()[0][\"index\"]\n",
" output_index = interpreter.get_output_details()[0][\"index\"]\n",
"\n",
" for img, label in mnist_ds:\n",
" total_seen += 1\n",
" interpreter.set_tensor(input_index, img)\n",
" interpreter.invoke()\n",
" predictions = interpreter.get_tensor(output_index)\n",
" if predictions == label.numpy():\n",
" num_correct += 1\n",
"\n",
" if total_seen % 500 == 0:\n",
" print(\"Accuracy after %i images: %f\" %\n",
" (total_seen, float(num_correct) / float(total_seen)))\n",
"\n",
" return float(num_correct) / float(total_seen)"
]
},
{
"cell_type": "code",
"execution_count": 0,
"metadata": {
"colab": {},
"colab_type": "code",
"id": "T5mWkSbMcU5z"
},
"outputs": [],
"source": [
"# Create smaller dataset for demonstration purposes\n",
"mnist_ds_demo = mnist_ds.take(2000)\n",
"\n",
"print(eval_model(interpreter, mnist_ds_demo))"
]
},
{
"cell_type": "markdown",
"metadata": {
"colab_type": "text",
"id": "Km3cY9ry8ZlG"
},
"source": [
"Repeat the evaluation on the fully quantized model using the uint8 data:"
]
},
{
"cell_type": "code",
"execution_count": 0,
"metadata": {
"colab": {},
"colab_type": "code",
"id": "-9cnwiPp6EGm"
},
"outputs": [],
"source": [
"# NOTE: Colab runs on server CPUs, and TensorFlow Lite currently\n",
"# doesn't have super optimized server CPU kernels. So this part may be\n",
"# slower than the above float interpreter. But for mobile CPUs, considerable\n",
"# speedup can be observed.\n",
"mnist_ds_demo_uint8 = mnist_ds_uint8.take(2000)\n",
"\n",
"print(eval_model(interpreter_quant, mnist_ds_demo_uint8))"
]
},
{
"cell_type": "markdown",
"metadata": {
"colab_type": "text",
"id": "L7lfxkor8pgv"
},
"source": [
"In this example, you have fully quantized a model with almost no difference in the accuracy, compared to the above float model."
]
}
],
"metadata": {
"colab": {
"collapsed_sections": [],
"last_runtime": {
"build_target": "//research/colab/notebook:notebook_backend_py3",
"kind": "private"
},
"name": "post_training_integer_quant.ipynb",
"private_outputs": true,
"provenance": [],
"toc_visible": true,
"version": "0.3.2"
},
"kernelspec": {
"display_name": "Python 3",
"name": "python3"
}
},
"nbformat": 4,
"nbformat_minor": 0
}