Update TF Lite redirects
PiperOrigin-RevId: 446865142
diff --git a/tensorflow/lite/g3doc/android/play_services.md b/tensorflow/lite/g3doc/android/play_services.md
index a6f2d82..70ae550 100644
--- a/tensorflow/lite/g3doc/android/play_services.md
+++ b/tensorflow/lite/g3doc/android/play_services.md
@@ -368,5 +368,5 @@
application with TensorFlow Lite, see the
[TensorFlow Lite Developer Guide](https://www.tensorflow.org/lite/guide). You
can find additional TensorFlow Lite models for image classification, object
-detection, and other applications on the TensorFlow Lite
-[Model library](https://www.tensorflow.org/lite/guide/hosted_models) page.
+detection, and other applications on
+[TensorFlow Hub](https://tfhub.dev/s?deployment-format=lite).
diff --git a/tensorflow/lite/g3doc/convert/metadata.md b/tensorflow/lite/g3doc/convert/metadata.md
index 55e7fc4..18508d3 100644
--- a/tensorflow/lite/g3doc/convert/metadata.md
+++ b/tensorflow/lite/g3doc/convert/metadata.md
@@ -7,14 +7,13 @@
* human readable parts which convey the best practice when using the model,
and
* machine readable parts that can be leveraged by code generators, such as the
- [TensorFlow Lite Android code generator](../inference_with_metadata/codegen.md#generate-code-with-tensorflow-lite-android-code-generator)
+ [TensorFlow Lite Android code generator](../inference_with_metadata/codegen#generate-code-with-tensorflow-lite-android-code-generator)
and the
- [Android Studio ML Binding feature](../inference_with_metadata/codegen.md#generate-code-with-android-studio-ml-model-binding).
+ [Android Studio ML Binding feature](../inference_with_metadata/codegen#generate-code-with-android-studio-ml-model-binding).
All image models published on
-[TensorFlow Lite hosted models](https://www.tensorflow.org/lite/guide/hosted_models)
-and [TensorFlow Hub](https://tfhub.dev/s?deployment-format=lite) have been
-populated with metadata.
+[TensorFlow Hub](https://tfhub.dev/s?deployment-format=lite) have been populated
+with metadata.
## Model with metadata format
@@ -77,9 +76,9 @@
[SubGraphMetadata.output_tensor_metadata](https://github.com/tensorflow/tflite-support/blob/4cd0551658b6e26030e0ba7fc4d3127152e0d4ae/tensorflow_lite_support/metadata/metadata_schema.fbs#L599).
Since TensorFlow Lite only supports a single subgraph at this point, the
-[TensorFlow Lite code generator](../inference_with_metadata/codegen.md#generate-code-with-tensorflow-lite-android-code-generator)
+[TensorFlow Lite code generator](../inference_with_metadata/codegen#generate-code-with-tensorflow-lite-android-code-generator)
and the
-[Android Studio ML Binding feature](../inference_with_metadata/codegen.md#generate-code-with-android-studio-ml-model-binding)
+[Android Studio ML Binding feature](../inference_with_metadata/codegen#generate-code-with-android-studio-ml-model-binding)
will use `ModelMetadata.name` and `ModelMetadata.description`, instead of
`SubGraphMetadata.name` and `SubGraphMetadata.description`, when displaying
metadata and generating code.
@@ -115,7 +114,7 @@
The associated file information can be recorded in the metadata. Depending on
the file type and where the file is attached to (i.e. `ModelMetadata`,
`SubGraphMetadata`, and `TensorMetadata`),
-[the TensorFlow Lite Android code generator](../inference_with_metadata/codegen.md)
+[the TensorFlow Lite Android code generator](../inference_with_metadata/codegen)
may apply corresponding pre/post processing automatically to the object. See
[the \<Codegen usage\> section of each associated file type](https://github.com/tensorflow/tflite-support/blob/4cd0551658b6e26030e0ba7fc4d3127152e0d4ae/tensorflow_lite_support/metadata/metadata_schema.fbs#L77-L127)
in the schema for more details.
@@ -460,7 +459,7 @@
```
To use nightly snapshots, make sure that you have added
-[Sonatype snapshot repository](../guide/build_android#use_nightly_snapshots).
+the [Sonatype snapshot repository](https://www.tensorflow.org/lite/android/lite_build#use_nightly_snapshots).
You can initialize a `MetadataExtractor` object with a `ByteBuffer` that points
to the model:
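A minimal Java sketch of that `MetadataExtractor` initialization, assuming the TFLite Support metadata library is on the classpath; the model filename is a placeholder, and `hasMetadata()`/`getModelMetadata()` are used only to show where the packed description ends up:

```java
import java.io.IOException;
import java.io.RandomAccessFile;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import org.tensorflow.lite.support.metadata.MetadataExtractor;

/** Reads the metadata packed into a .tflite model. */
public final class ReadModelMetadata {
  public static void main(String[] args) throws IOException {
    // Placeholder path; any .tflite model populated with metadata works here.
    try (RandomAccessFile file = new RandomAccessFile("mobilenet_v1_1.0_224.tflite", "r")) {
      // Memory-map the model so it can be handed to the extractor as a ByteBuffer.
      ByteBuffer modelBuffer =
          file.getChannel().map(FileChannel.MapMode.READ_ONLY, 0, file.length());

      MetadataExtractor extractor = new MetadataExtractor(modelBuffer);
      if (extractor.hasMetadata()) {
        // ModelMetadata carries the human-readable name and description.
        System.out.println(extractor.getModelMetadata().name());
        System.out.println(extractor.getModelMetadata().description());
      }
    }
  }
}
```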
diff --git a/tensorflow/lite/g3doc/examples/image_classification/overview.md b/tensorflow/lite/g3doc/examples/image_classification/overview.md
index d98a54dd..27bc878 100644
--- a/tensorflow/lite/g3doc/examples/image_classification/overview.md
+++ b/tensorflow/lite/g3doc/examples/image_classification/overview.md
@@ -110,7 +110,7 @@
represents one or more of the classes that the model was trained on. It cannot
tell you the position or identity of objects within the image. If you need to
identify objects and their positions within images, you should use an
-<a href="../object_detection/overview.md">object detection</a> model.
+<a href="../object_detection/overview">object detection</a> model.
<h4>Ambiguous results</h4>
@@ -150,8 +150,9 @@
TensorFlow Lite provides you with a variety of image classification models which
are all trained on the original dataset. Model architectures like MobileNet,
-Inception, and NASNet are available on the
-<a href="../../guide/hosted_models.md">hosted models page</a>. To choose the best model for
+Inception, and NASNet are available on
+<a href="https://tfhub.dev/s?deployment-format=lite">TensorFlow Hub</a>. To
+choose the best model for
your use case, you need to consider the individual architectures as well as some
of the tradeoffs between various models. Some of these model tradeoffs are based
on metrics such as performance, accuracy, and model size. For example, you might
@@ -175,8 +176,8 @@
For the following use cases, you should use a different type of model:
<ul>
- <li>Predicting the type and position of one or more objects within an image (see <a href="../object_detection/overview.md">Object detection</a>)</li>
- <li>Predicting the composition of an image, for example subject versus background (see <a href="../segmentation/overview.md">Segmentation</a>)</li>
+ <li>Predicting the type and position of one or more objects within an image (see <a href="../object_detection/overview">Object detection</a>)</li>
+ <li>Predicting the composition of an image, for example subject versus background (see <a href="../segmentation/overview">Segmentation</a>)</li>
</ul>
Once you have the starter model running on your target device, you can
@@ -263,11 +264,10 @@
image. For example, a model with a stated accuracy of 60% can be expected to
classify an image correctly an average of 60% of the time.
-The [list of hosted models](../../guide/hosted_models.md) provides Top-1 and
-Top-5 accuracy statistics. Top-1 refers to how often the correct label appears
-as the label with the highest probability in the model’s output. Top-5 refers to
-how often the correct label appears in the 5 highest probabilities in the
-model’s output.
+The most relevant accuracy metrics are Top-1 and Top-5. Top-1 refers to how
+often the correct label appears as the label with the highest probability in the
+model’s output. Top-5 refers to how often the correct label appears in the 5
+highest probabilities in the model’s output.
The TensorFlow Lite quantized MobileNet models’ Top-5 accuracy range from 64.4
to 89.9%.
diff --git a/tensorflow/lite/g3doc/guide/faq.md b/tensorflow/lite/g3doc/guide/faq.md
index 7094616..bc0cd5f 100644
--- a/tensorflow/lite/g3doc/guide/faq.md
+++ b/tensorflow/lite/g3doc/guide/faq.md
@@ -8,18 +8,18 @@
#### What formats are supported for conversion from TensorFlow to TensorFlow Lite?
-The supported formats are listed [here](../convert/index.md#python_api)
+The supported formats are listed [here](../convert/index#python_api).
#### Why are some operations not implemented in TensorFlow Lite?
In order to keep TFLite lightweight, only certain TF operators (listed in the
-[allowlist](op_select_allowlist.md)) are supported in TFLite.
+[allowlist](op_select_allowlist)) are supported in TFLite.
#### Why doesn't my model convert?
Since the number of TensorFlow Lite operations is smaller than TensorFlow's,
some models may not be able to convert. Some common errors are listed
-[here](../convert/index.md#conversion-errors).
+[here](../convert/index#conversion-errors).
For conversion issues not related to missing operations or control flow ops,
search our
@@ -30,7 +30,7 @@
The best way to test is to compare the outputs of the TensorFlow and the
TensorFlow Lite models for the same inputs (test data or random inputs) as shown
-[here](inference.md#load-and-run-a-model-in-python).
+[here](inference#load-and-run-a-model-in-python).
#### How do I determine the inputs/outputs for GraphDef protocol buffer?
@@ -80,7 +80,7 @@
#### How do I reduce the size of my converted TensorFlow Lite model?
-[Post-training quantization](../performance/post_training_quantization.md) can
+[Post-training quantization](../performance/post_training_quantization) can
be used during conversion to TensorFlow Lite to reduce the size of the model.
Post-training quantization quantizes weights to 8-bits of precision from
floating-point and dequantizes them during runtime to perform floating point
@@ -92,7 +92,7 @@
convolutional neural network architectures.
For a deeper understanding of different optimization methods, look at
-[Model optimization](../performance/model_optimization.md).
+[Model optimization](../performance/model_optimization).
#### How do I optimize TensorFlow Lite performance for my machine learning task?
@@ -100,7 +100,8 @@
like this:
* *Make sure that you have the right model for the task.* For image
- classification, check out our [list of hosted models](hosted_models.md).
+ classification, check out
+ [TensorFlow Hub](https://tfhub.dev/s?deployment-format=lite&module-type=image-classification).
* *Tweak the number of threads.* Many TensorFlow Lite operators support
multi-threaded kernels. You can use `SetNumThreads()` in the
[C++ API](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/interpreter.h#L345)
@@ -108,13 +109,13 @@
depending on the environment.
* *Use Hardware Accelerators.* TensorFlow Lite supports model acceleration for
specific hardware using delegates. See our
- [Delegates](../performance/delegates.md) guide for information on what
+ [Delegates](../performance/delegates) guide for information on what
accelerators are supported and how to use them with your model on-device.
* *(Advanced) Profile Model.* The Tensorflow Lite
[benchmarking tool](https://github.com/tensorflow/tensorflow/tree/master/tensorflow/lite/tools/benchmark)
has a built-in profiler that can show per-operator statistics. If you know
how you can optimize an operator’s performance for your specific platform,
- you can implement a [custom operator](ops_custom.md).
+ you can implement a [custom operator](ops_custom).
For a more in-depth discussion on how to optimize performance, take a look at
-[Best Practices](../performance/best_practices.md).
+[Best Practices](../performance/best_practices).
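For the thread-count and delegate tips above, a small sketch of the Java-API equivalents is shown below; it assumes the standard `org.tensorflow.lite` Android dependency, and the 4-thread setting is only an example value to benchmark against:

```java
import java.io.File;
import org.tensorflow.lite.Interpreter;
import org.tensorflow.lite.nnapi.NnApiDelegate;

/** Builds an interpreter with a tuned thread count and the NNAPI delegate attached. */
public final class TunedInterpreter {
  public static Interpreter create(File modelFile) {
    Interpreter.Options options = new Interpreter.Options();
    // Java counterpart of SetNumThreads() in the C++ API; tune per device.
    options.setNumThreads(4);
    // Hardware acceleration through the NNAPI delegate (Android only).
    options.addDelegate(new NnApiDelegate());
    return new Interpreter(modelFile, options);
  }
}
```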
diff --git a/tensorflow/lite/g3doc/guide/hosted_models.md b/tensorflow/lite/g3doc/guide/hosted_models.md
deleted file mode 100644
index 32887a5..0000000
--- a/tensorflow/lite/g3doc/guide/hosted_models.md
+++ /dev/null
@@ -1,179 +0,0 @@
-# Hosted models
-
-The following is an incomplete list of pre-trained models optimized to work with
-TensorFlow Lite.
-
-To get started choosing a model, visit <a href="../models">Models</a> page with
-end-to-end examples, or pick a
-[TensorFlow Lite model from TensorFlow Hub](https://tfhub.dev/s?deployment-format=lite).
-
-Note: The best model for a given application depends on your requirements. For
-example, some applications might benefit from higher accuracy, while others
-require a small model size. You should test your application with a variety of
-models to find the optimal balance between size, performance, and accuracy.
-
-## Image classification
-
-For more information about image classification, see
-<a href="../models/image_classification/overview.md">Image classification</a>.
-Explore the TensorFlow Lite Task Library for instructions about
-[how to integrate image classification models](../inference_with_metadata/task_library/image_classifier)
-in just a few lines of code.
-
-### Quantized models
-
-<a href="../performance/post_training_quantization">Quantized</a> image
-classification models offer the smallest model size and fastest performance, at
-the expense of accuracy. The performance values are measured on Pixel 3 on
-Android 10.
-
-You can find many
-[quantized models](https://tfhub.dev/s?deployment-format=lite&module-type=image-classification&q=quantized)
-from TensorFlow Hub and get more model information there.
-
-Model name | Paper and model | Model size | Top-1 accuracy | Top-5 accuracy | CPU, 4 threads | NNAPI
---------------------------- | :-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------: | ---------: | -------------: | -------------: | -------------: | ----:
-Mobilenet_V1_0.25_128_quant | [paper](https://arxiv.org/pdf/1712.05877.pdf), [tflite&pb](https://storage.googleapis.com/download.tensorflow.org/models/mobilenet_v1_2018_08_02/mobilenet_v1_0.25_128_quant.tgz) | 0.5 Mb | 39.5% | 64.4% | 0.8 ms | 2 ms
-Mobilenet_V1_0.25_160_quant | [paper](https://arxiv.org/pdf/1712.05877.pdf), [tflite&pb](https://storage.googleapis.com/download.tensorflow.org/models/mobilenet_v1_2018_08_02/mobilenet_v1_0.25_160_quant.tgz) | 0.5 Mb | 42.8% | 68.1% | 1.3 ms | 2.4 ms
-Mobilenet_V1_0.25_192_quant | [paper](https://arxiv.org/pdf/1712.05877.pdf), [tflite&pb](https://storage.googleapis.com/download.tensorflow.org/models/mobilenet_v1_2018_08_02/mobilenet_v1_0.25_192_quant.tgz) | 0.5 Mb | 45.7% | 70.8% | 1.8 ms | 2.6 ms
-Mobilenet_V1_0.25_224_quant | [paper](https://arxiv.org/pdf/1712.05877.pdf), [tflite&pb](https://storage.googleapis.com/download.tensorflow.org/models/mobilenet_v1_2018_08_02/mobilenet_v1_0.25_224_quant.tgz) | 0.5 Mb | 48.2% | 72.8% | 2.3 ms | 2.9 ms
-Mobilenet_V1_0.50_128_quant | [paper](https://arxiv.org/pdf/1712.05877.pdf), [tflite&pb](https://storage.googleapis.com/download.tensorflow.org/models/mobilenet_v1_2018_08_02/mobilenet_v1_0.5_128_quant.tgz) | 1.4 Mb | 54.9% | 78.1% | 1.7 ms | 2.6 ms
-Mobilenet_V1_0.50_160_quant | [paper](https://arxiv.org/pdf/1712.05877.pdf), [tflite&pb](https://storage.googleapis.com/download.tensorflow.org/models/mobilenet_v1_2018_08_02/mobilenet_v1_0.5_160_quant.tgz) | 1.4 Mb | 57.2% | 80.5% | 2.6 ms | 2.9 ms
-Mobilenet_V1_0.50_192_quant | [paper](https://arxiv.org/pdf/1712.05877.pdf), [tflite&pb](https://storage.googleapis.com/download.tensorflow.org/models/mobilenet_v1_2018_08_02/mobilenet_v1_0.5_192_quant.tgz) | 1.4 Mb | 59.9% | 82.1% | 3.6 ms | 3.3 ms
-Mobilenet_V1_0.50_224_quant | [paper](https://arxiv.org/pdf/1712.05877.pdf), [tflite&pb](https://storage.googleapis.com/download.tensorflow.org/models/mobilenet_v1_2018_08_02/mobilenet_v1_0.5_224_quant.tgz) | 1.4 Mb | 61.2% | 83.2% | 4.7 ms | 3.6 ms
-Mobilenet_V1_0.75_128_quant | [paper](https://arxiv.org/pdf/1712.05877.pdf), [tflite&pb](https://storage.googleapis.com/download.tensorflow.org/models/mobilenet_v1_2018_08_02/mobilenet_v1_0.75_128_quant.tgz) | 2.6 Mb | 55.9% | 79.1% | 3.1 ms | 3.2 ms
-Mobilenet_V1_0.75_160_quant | [paper](https://arxiv.org/pdf/1712.05877.pdf), [tflite&pb](https://storage.googleapis.com/download.tensorflow.org/models/mobilenet_v1_2018_08_02/mobilenet_v1_0.75_160_quant.tgz) | 2.6 Mb | 62.4% | 83.7% | 4.7 ms | 3.8 ms
-Mobilenet_V1_0.75_192_quant | [paper](https://arxiv.org/pdf/1712.05877.pdf), [tflite&pb](https://storage.googleapis.com/download.tensorflow.org/models/mobilenet_v1_2018_08_02/mobilenet_v1_0.75_192_quant.tgz) | 2.6 Mb | 66.1% | 86.2% | 6.4 ms | 4.2 ms
-Mobilenet_V1_0.75_224_quant | [paper](https://arxiv.org/pdf/1712.05877.pdf), [tflite&pb](https://storage.googleapis.com/download.tensorflow.org/models/mobilenet_v1_2018_08_02/mobilenet_v1_0.75_224_quant.tgz) | 2.6 Mb | 66.9% | 86.9% | 8.5 ms | 4.8 ms
-Mobilenet_V1_1.0_128_quant | [paper](https://arxiv.org/pdf/1712.05877.pdf), [tflite&pb](https://storage.googleapis.com/download.tensorflow.org/models/mobilenet_v1_2018_08_02/mobilenet_v1_1.0_128_quant.tgz) | 4.3 Mb | 63.3% | 84.1% | 4.8 ms | 3.8 ms
-Mobilenet_V1_1.0_160_quant | [paper](https://arxiv.org/pdf/1712.05877.pdf), [tflite&pb](https://storage.googleapis.com/download.tensorflow.org/models/mobilenet_v1_2018_08_02/mobilenet_v1_1.0_160_quant.tgz) | 4.3 Mb | 66.9% | 86.7% | 7.3 ms | 4.6 ms
-Mobilenet_V1_1.0_192_quant | [paper](https://arxiv.org/pdf/1712.05877.pdf), [tflite&pb](https://storage.googleapis.com/download.tensorflow.org/models/mobilenet_v1_2018_08_02/mobilenet_v1_1.0_192_quant.tgz) | 4.3 Mb | 69.1% | 88.1% | 9.9 ms | 5.2 ms
-Mobilenet_V1_1.0_224_quant | [paper](https://arxiv.org/pdf/1712.05877.pdf), [tflite&pb](https://storage.googleapis.com/download.tensorflow.org/models/mobilenet_v1_2018_08_02/mobilenet_v1_1.0_224_quant.tgz) | 4.3 Mb | 70.0% | 89.0% | 13 ms | 6.0 ms
-Mobilenet_V2_1.0_224_quant | [paper](https://arxiv.org/abs/1806.08342), [tflite&pb](https://storage.googleapis.com/download.tensorflow.org/models/tflite_11_05_08/mobilenet_v2_1.0_224_quant.tgz) | 3.4 Mb | 70.8% | 89.9% | 12 ms | 6.9 ms
-Inception_V1_quant | [paper](https://arxiv.org/abs/1409.4842), [tflite&pb](https://storage.googleapis.com/download.tensorflow.org/models/inception_v1_224_quant_20181026.tgz) | 6.4 Mb | 70.1% | 89.8% | 39 ms | 36 ms
-Inception_V2_quant | [paper](https://arxiv.org/abs/1512.00567), [tflite&pb](https://storage.googleapis.com/download.tensorflow.org/models/inception_v2_224_quant_20181026.tgz) | 11 Mb | 73.5% | 91.4% | 59 ms | 18 ms
-Inception_V3_quant | [paper](https://arxiv.org/abs/1806.08342),[tflite&pb](https://storage.googleapis.com/download.tensorflow.org/models/tflite_11_05_08/inception_v3_quant.tgz) | 23 Mb | 77.5% | 93.7% | 148 ms | 74 ms
-Inception_V4_quant | [paper](https://arxiv.org/abs/1602.07261), [tflite&pb](https://storage.googleapis.com/download.tensorflow.org/models/inception_v4_299_quant_20181026.tgz) | 41 Mb | 79.5% | 93.9% | 268 ms | 155 ms
-
-Note: The model files include both TF Lite FlatBuffer and Tensorflow frozen
-Graph.
-
-Note: Performance numbers were benchmarked on Pixel-3 (Android 10). Accuracy
-numbers were computed using the
-[TFLite image classification evaluation tool](https://github.com/tensorflow/tensorflow/tree/master/tensorflow/lite/tools/evaluation/tasks/imagenet_image_classification).
-
-### Floating point models
-
-Floating point models offer the best accuracy, at the expense of model size and
-performance. <a href="../performance/gpu">GPU acceleration</a> requires the use
-of floating point models. The performance values are measured on Pixel 3 on
-Android 10.
-
-You can find many
-[image classification models](https://tfhub.dev/s?deployment-format=lite&module-type=image-classification)
-from TensorFlow Hub and get more model information there.
-
-Model name | Paper and model | Model size | Top-1 accuracy | Top-5 accuracy | CPU, 4 threads | GPU | NNAPI
---------------------- | :---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------: | ---------: | -------------: | -------------: | -------------: | -----: | ----:
-DenseNet | [paper](https://arxiv.org/abs/1608.06993), [tflite&pb](https://storage.googleapis.com/download.tensorflow.org/models/tflite/model_zoo/upload_20180427/densenet_2018_04_27.tgz) | 43.6 Mb | 64.2% | 85.6% | 195 ms | 60 ms | 1656 ms
-SqueezeNet | [paper](https://arxiv.org/abs/1602.07360), [tflite&pb](https://storage.googleapis.com/download.tensorflow.org/models/tflite/model_zoo/upload_20180427/squeezenet_2018_04_27.tgz) | 5.0 Mb | 49.0% | 72.9% | 36 ms | 9.5 ms | 18.5 ms
-NASNet mobile | [paper](https://arxiv.org/abs/1707.07012), [tflite&pb](https://storage.googleapis.com/download.tensorflow.org/models/tflite/model_zoo/upload_20180427/nasnet_mobile_2018_04_27.tgz) | 21.4 Mb | 73.9% | 91.5% | 56 ms | --- | 102 ms
-NASNet large | [paper](https://arxiv.org/abs/1707.07012), [tflite&pb](https://storage.googleapis.com/download.tensorflow.org/models/tflite/model_zoo/upload_20180427/nasnet_large_2018_04_27.tgz) | 355.3 Mb | 82.6% | 96.1% | 1170 ms | --- | 648 ms
-ResNet_V2_101 | [paper](https://arxiv.org/abs/1603.05027), [tflite&pb](https://storage.googleapis.com/download.tensorflow.org/models/tflite_11_05_08/resnet_v2_101.tgz) | 178.3 Mb | 76.8% | 93.6% | 526 ms | 92 ms | 1572 ms
-Inception_V3 | [paper](http://arxiv.org/abs/1512.00567), [tflite&pb](https://storage.googleapis.com/download.tensorflow.org/models/tflite/model_zoo/upload_20180427/inception_v3_2018_04_27.tgz) | 95.3 Mb | 77.9% | 93.8% | 249 ms | 56 ms | 148 ms
-Inception_V4 | [paper](http://arxiv.org/abs/1602.07261), [tflite&pb](https://storage.googleapis.com/download.tensorflow.org/models/tflite/model_zoo/upload_20180427/inception_v4_2018_04_27.tgz) | 170.7 Mb | 80.1% | 95.1% | 486 ms | 93 ms | 291 ms
-Inception_ResNet_V2 | [paper](https://arxiv.org/abs/1602.07261), [tflite&pb](https://storage.googleapis.com/download.tensorflow.org/models/tflite/model_zoo/upload_20180427/inception_resnet_v2_2018_04_27.tgz) | 121.0 Mb | 77.5% | 94.0% | 422 ms | 100 ms | 201 ms
-Mobilenet_V1_0.25_128 | [paper](https://arxiv.org/pdf/1704.04861.pdf), [tflite&pb](https://storage.googleapis.com/download.tensorflow.org/models/mobilenet_v1_2018_02_22/mobilenet_v1_0.25_128.tgz) | 1.9 Mb | 41.4% | 66.2% | 1.2 ms | 1.6 ms | 3 ms
-Mobilenet_V1_0.25_160 | [paper](https://arxiv.org/pdf/1704.04861.pdf), [tflite&pb](https://storage.googleapis.com/download.tensorflow.org/models/mobilenet_v1_2018_02_22/mobilenet_v1_0.25_160.tgz) | 1.9 Mb | 45.4% | 70.2% | 1.7 ms | 1.7 ms | 3.2 ms
-Mobilenet_V1_0.25_192 | [paper](https://arxiv.org/pdf/1704.04861.pdf), [tflite&pb](https://storage.googleapis.com/download.tensorflow.org/models/mobilenet_v1_2018_02_22/mobilenet_v1_0.25_192.tgz) | 1.9 Mb | 47.1% | 72.0% | 2.4 ms | 1.8 ms | 3.0 ms
-Mobilenet_V1_0.25_224 | [paper](https://arxiv.org/pdf/1704.04861.pdf), [tflite&pb](https://storage.googleapis.com/download.tensorflow.org/models/mobilenet_v1_2018_02_22/mobilenet_v1_0.25_224.tgz) | 1.9 Mb | 49.7% | 74.1% | 3.3 ms | 1.8 ms | 3.6 ms
-Mobilenet_V1_0.50_128 | [paper](https://arxiv.org/pdf/1704.04861.pdf), [tflite&pb](https://storage.googleapis.com/download.tensorflow.org/models/mobilenet_v1_2018_02_22/mobilenet_v1_0.5_128.tgz) | 5.3 Mb | 56.2% | 79.3% | 3.0 ms | 1.7 ms | 3.2 ms
-Mobilenet_V1_0.50_160 | [paper](https://arxiv.org/pdf/1704.04861.pdf), [tflite&pb](https://storage.googleapis.com/download.tensorflow.org/models/mobilenet_v1_2018_02_22/mobilenet_v1_0.5_160.tgz) | 5.3 Mb | 59.0% | 81.8% | 4.4 ms | 2.0 ms | 4.0 ms
-Mobilenet_V1_0.50_192 | [paper](https://arxiv.org/pdf/1704.04861.pdf), [tflite&pb](https://storage.googleapis.com/download.tensorflow.org/models/mobilenet_v1_2018_02_22/mobilenet_v1_0.5_192.tgz) | 5.3 Mb | 61.7% | 83.5% | 6.0 ms | 2.5 ms | 4.8 ms
-Mobilenet_V1_0.50_224 | [paper](https://arxiv.org/pdf/1704.04861.pdf), [tflite&pb](https://storage.googleapis.com/download.tensorflow.org/models/mobilenet_v1_2018_02_22/mobilenet_v1_0.5_224.tgz) | 5.3 Mb | 63.2% | 84.9% | 7.9 ms | 2.8 ms | 6.1 ms
-Mobilenet_V1_0.75_128 | [paper](https://arxiv.org/pdf/1704.04861.pdf), [tflite&pb](https://storage.googleapis.com/download.tensorflow.org/models/mobilenet_v1_2018_02_22/mobilenet_v1_0.75_128.tgz) | 10.3 Mb | 62.0% | 83.8% | 5.5 ms | 2.6 ms | 5.1 ms
-Mobilenet_V1_0.75_160 | [paper](https://arxiv.org/pdf/1704.04861.pdf), [tflite&pb](https://storage.googleapis.com/download.tensorflow.org/models/mobilenet_v1_2018_02_22/mobilenet_v1_0.75_160.tgz) | 10.3 Mb | 65.2% | 85.9% | 8.2 ms | 3.1 ms | 6.3 ms
-Mobilenet_V1_0.75_192 | [paper](https://arxiv.org/pdf/1704.04861.pdf), [tflite&pb](https://storage.googleapis.com/download.tensorflow.org/models/mobilenet_v1_2018_02_22/mobilenet_v1_0.75_192.tgz) | 10.3 Mb | 67.1% | 87.2% | 11.0 ms | 4.5 ms | 7.2 ms
-Mobilenet_V1_0.75_224 | [paper](https://arxiv.org/pdf/1704.04861.pdf), [tflite&pb](https://storage.googleapis.com/download.tensorflow.org/models/mobilenet_v1_2018_02_22/mobilenet_v1_0.75_224.tgz) | 10.3 Mb | 68.3% | 88.1% | 14.6 ms | 4.9 ms | 9.9 ms
-Mobilenet_V1_1.0_128 | [paper](https://arxiv.org/pdf/1704.04861.pdf), [tflite&pb](https://storage.googleapis.com/download.tensorflow.org/models/mobilenet_v1_2018_02_22/mobilenet_v1_1.0_128.tgz) | 16.9 Mb | 65.2% | 85.7% | 9.0 ms | 4.4 ms | 6.3 ms
-Mobilenet_V1_1.0_160 | [paper](https://arxiv.org/pdf/1704.04861.pdf), [tflite&pb](https://storage.googleapis.com/download.tensorflow.org/models/mobilenet_v1_2018_02_22/mobilenet_v1_1.0_160.tgz) | 16.9 Mb | 68.0% | 87.7% | 13.4 ms | 5.0 ms | 8.4 ms
-Mobilenet_V1_1.0_192 | [paper](https://arxiv.org/pdf/1704.04861.pdf), [tflite&pb](https://storage.googleapis.com/download.tensorflow.org/models/mobilenet_v1_2018_02_22/mobilenet_v1_1.0_192.tgz) | 16.9 Mb | 69.9% | 89.1% | 18.1 ms | 6.3 ms | 10.6 ms
-Mobilenet_V1_1.0_224 | [paper](https://arxiv.org/pdf/1704.04861.pdf), [tflite&pb](https://storage.googleapis.com/download.tensorflow.org/models/mobilenet_v1_2018_02_22/mobilenet_v1_1.0_224.tgz) | 16.9 Mb | 71.0% | 89.9% | 24.0 ms | 6.5 ms | 13.8 ms
-Mobilenet_V2_1.0_224 | [paper](https://arxiv.org/pdf/1801.04381.pdf), [tflite&pb](https://storage.googleapis.com/download.tensorflow.org/models/tflite_11_05_08/mobilenet_v2_1.0_224.tgz) | 14.0 Mb | 71.8% | 90.6% | 17.5 ms | 6.2 ms | 11.23 ms
-
-### AutoML mobile models
-
-The following image classification models were created using
-<a href="https://cloud.google.com/automl/">Cloud AutoML</a>. The performance
-values are measured on Pixel 3 on Android 10.
-
-You can find these models in
-[TensorFlow Hub](https://tfhub.dev/s?deployment-format=lite&q=MnasNet) and get
-more model information there.
-
-Model Name | Paper and model | Model size | Top-1 accuracy | Top-5 accuracy | CPU, 4 threads | GPU | NNAPI
----------------- | :------------------------------------------------------------------------------------------------------------------------------------------------------------: | ---------: | -------------: | -------------: | -------------: | ------: | ----:
-MnasNet_0.50_224 | [paper](https://arxiv.org/abs/1807.11626), [tflite&pb](https://storage.cloud.google.com/download.tensorflow.org/models/tflite/mnasnet_0.5_224_09_07_2018.tgz) | 8.5 Mb | 68.03% | 87.79% | 9.5 ms | 5.9 ms | 16.6 ms
-MnasNet_0.75_224 | [paper](https://arxiv.org/abs/1807.11626), [tflite&pb](https://storage.cloud.google.com/download.tensorflow.org/models/tflite/mnasnet_0.75_224_09_07_2018.tgz) | 12 Mb | 71.72% | 90.17% | 13.7 ms | 7.1 ms | 16.7 ms
-MnasNet_1.0_96 | [paper](https://arxiv.org/abs/1807.11626), [tflite&pb](https://storage.cloud.google.com/download.tensorflow.org/models/tflite/mnasnet_1.0_96_09_07_2018.tgz) | 17 Mb | 62.33% | 83.98% | 5.6 ms | 5.4 ms | 12.1 ms
-MnasNet_1.0_128 | [paper](https://arxiv.org/abs/1807.11626), [tflite&pb](https://storage.cloud.google.com/download.tensorflow.org/models/tflite/mnasnet_1.0_128_09_07_2018.tgz) | 17 Mb | 67.32% | 87.70% | 7.5 ms | 5.8 ms | 12.9 ms
-MnasNet_1.0_160 | [paper](https://arxiv.org/abs/1807.11626), [tflite&pb](https://storage.cloud.google.com/download.tensorflow.org/models/tflite/mnasnet_1.0_160_09_07_2018.tgz) | 17 Mb | 70.63% | 89.58% | 11.1 ms | 6.7 ms | 14.2 ms
-MnasNet_1.0_192 | [paper](https://arxiv.org/abs/1807.11626), [tflite&pb](https://storage.cloud.google.com/download.tensorflow.org/models/tflite/mnasnet_1.0_192_09_07_2018.tgz) | 17 Mb | 72.56% | 90.76% | 14.5 ms | 7.7 ms | 16.6 ms
-MnasNet_1.0_224 | [paper](https://arxiv.org/abs/1807.11626), [tflite&pb](https://storage.cloud.google.com/download.tensorflow.org/models/tflite/mnasnet_1.0_224_09_07_2018.tgz) | 17 Mb | 74.08% | 91.75% | 19.4 ms | 8.7 ms | 19 ms
-MnasNet_1.3_224 | [paper](https://arxiv.org/abs/1807.11626), [tflite&pb](https://storage.cloud.google.com/download.tensorflow.org/models/tflite/mnasnet_1.3_224_09_07_2018.tgz) | 24 Mb | 75.24% | 92.55% | 27.9 ms | 10.6 ms | 22.0 ms
-
-Note: Performance numbers were benchmarked on Pixel-3 (Android 10). Accuracy
-numbers were computed using the
-[TFLite image classification evaluation tool](https://github.com/tensorflow/tensorflow/tree/master/tensorflow/lite/tools/evaluation/tasks/imagenet_image_classification).
-
-## Object detection
-
-For more information about object detection, see
-<a href="../models/object_detection/overview.md">Object detection</a>. Explore
-the TensorFlow Lite Task Library for instructions about
-[how to integrate object detection models](../inference_with_metadata/task_library/object_detector)
-in just a few lines of code.
-
-Please find
-[object detection models](https://tfhub.dev/s?deployment-format=lite&module-type=image-object-detection)
-from TensorFlow Hub.
-
-## Pose estimation
-
-For more information about pose estimation, see
-<a href="../models/pose_estimation/overview.md">Pose estimation</a>.
-
-Please find
-[pose estimation models](https://tfhub.dev/s?deployment-format=lite&module-type=image-pose-detection)
-from TensorFlow Hub.
-
-## Image segmentation
-
-For more information about image segmentation, see
-<a href="../models/segmentation/overview.md">Segmentation</a>. Explore the
-TensorFlow Lite Task Library for instructions about
-[how to integrate image segmentation models](../inference_with_metadata/task_library/image_segmenter)
-in just a few lines of code.
-
-Please find
-[image segmentation models](https://tfhub.dev/s?deployment-format=lite&module-type=image-segmentation)
-from TensorFlow Hub.
-
-## Question and Answer
-
-For more information about question and answer with MobileBERT, see
-<a href="../models/bert_qa/overview.md">Question And Answer</a>. Explore the
-TensorFlow Lite Task Library for instructions about
-[how to integrate question and answer models](../inference_with_metadata/task_library/bert_question_answerer)
-in just a few lines of code.
-
-Please find [Mobile BERT model](https://tfhub.dev/tensorflow/mobilebert/1) from
-TensorFlow Hub.
-
-## Smart reply
-
-For more information about smart reply, see
-<a href="../models/smart_reply/overview.md">Smart reply</a>.
-
-Please find [Smart Reply model](https://tfhub.dev/tensorflow/smartreply/1) from
-TensorFlow Hub.
diff --git a/tensorflow/lite/g3doc/guide/ios.md b/tensorflow/lite/g3doc/guide/ios.md
index 2b82fbe..6a82e43 100644
--- a/tensorflow/lite/g3doc/guide/ios.md
+++ b/tensorflow/lite/g3doc/guide/ios.md
@@ -10,7 +10,7 @@
[TensorFlow Lite iOS image classification](https://github.com/tensorflow/examples/blob/master/lite/examples/image_classification/ios/EXPLORE_THE_CODE.md).
This example app uses
-[image classification](https://www.tensorflow.org/lite/models/image_classification/overview)
+[image classification](https://www.tensorflow.org/lite/examples/image_classification/overview)
to continuously classify whatever it sees from the device's rear-facing camera,
displaying the top most probable classifications. It allows the user to choose
between a floating point or
diff --git a/tensorflow/lite/g3doc/inference_with_metadata/task_library/bert_question_answerer.md b/tensorflow/lite/g3doc/inference_with_metadata/task_library/bert_question_answerer.md
index c883c3e..5e30bf9 100644
--- a/tensorflow/lite/g3doc/inference_with_metadata/task_library/bert_question_answerer.md
+++ b/tensorflow/lite/g3doc/inference_with_metadata/task_library/bert_question_answerer.md
@@ -3,7 +3,7 @@
The Task Library `BertQuestionAnswerer` API loads a Bert model and answers
questions based on the content of a given passage. For more information, see the
documentation for the Question-Answer model
-<a href="../../models/bert_qa/overview.md">here</a>.
+<a href="../../examples/bert_qa/overview">here</a>.
## Key features of the BertQuestionAnswerer API
@@ -159,7 +159,7 @@
## Model compatibility requirements
The `BertQuestionAnswerer` API expects a TFLite model with mandatory
-[TFLite Model Metadata](../../convert/metadata.md).
+[TFLite Model Metadata](../../convert/metadata).
The Metadata should meet the following requirements:
diff --git a/tensorflow/lite/g3doc/inference_with_metadata/task_library/image_classifier.md b/tensorflow/lite/g3doc/inference_with_metadata/task_library/image_classifier.md
index 884e656..52543c7 100644
--- a/tensorflow/lite/g3doc/inference_with_metadata/task_library/image_classifier.md
+++ b/tensorflow/lite/g3doc/inference_with_metadata/task_library/image_classifier.md
@@ -7,7 +7,7 @@
classes of images. For example, a model might be trained to recognize photos
representing three different types of animals: rabbits, hamsters, and dogs. See
the
-[introduction of image classification](../../models/image_classification/overview.md)
+[image classification overview](../../examples/image_classification/overview)
for more information about image classifiers.
Use the Task Library `ImageClassifier` API to deploy your custom image
@@ -37,9 +37,6 @@
[TensorFlow Lite Model Maker for Image Classification](https://www.tensorflow.org/lite/tutorials/model_maker_image_classification).
* The
- [pretrained image classification models from TensorFlow Lite Hosted Models](https://www.tensorflow.org/lite/guide/hosted_models#image_classification).
-
-* The
[pretrained image classification models on TensorFlow Hub](https://tfhub.dev/tensorflow/collections/lite/task-library/image-classifier/1).
* Models created by
@@ -151,7 +148,7 @@
## Model compatibility requirements
The `ImageClassifier` API expects a TFLite model with mandatory
-[TFLite Model Metadata](../../convert/metadata.md). See examples of creating
+[TFLite Model Metadata](../../convert/metadata). See examples of creating
metadata for image classifiers using the
[TensorFlow Lite Metadata Writer API](../../convert/metadata_writer_tutorial.ipynb#image_classifiers).
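As a rough illustration of the `ImageClassifier` API this page covers, the Java sketch below loads a classifier with metadata and runs it on a bitmap; the model filename is a placeholder and the Task Vision library dependency is assumed:

```java
import android.content.Context;
import android.graphics.Bitmap;
import java.io.IOException;
import java.util.List;
import org.tensorflow.lite.support.image.TensorImage;
import org.tensorflow.lite.task.vision.classifier.Classifications;
import org.tensorflow.lite.task.vision.classifier.ImageClassifier;

/** Classifies a bitmap with the Task Library ImageClassifier. */
public final class ClassifyImage {
  public static List<Classifications> run(Context context, Bitmap bitmap) throws IOException {
    // Placeholder filename; any classifier with TFLite Model Metadata works.
    ImageClassifier classifier =
        ImageClassifier.createFromFile(context, "image_classifier.tflite");
    return classifier.classify(TensorImage.fromBitmap(bitmap));
  }
}
```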
diff --git a/tensorflow/lite/g3doc/inference_with_metadata/task_library/image_segmenter.md b/tensorflow/lite/g3doc/inference_with_metadata/task_library/image_segmenter.md
index 621c7e3..1a099e5 100644
--- a/tensorflow/lite/g3doc/inference_with_metadata/task_library/image_segmenter.md
+++ b/tensorflow/lite/g3doc/inference_with_metadata/task_library/image_segmenter.md
@@ -2,12 +2,12 @@
Image segmenters predict whether each pixel of an image is associated with a
certain class. This is in contrast to
-<a href="../../models/object_detection/overview.md">object detection</a>, which
-detects objects in rectangular regions, and
-<a href="../../models/image_classification/overview.md">image
+<a href="../../examples/object_detection/overview">object detection</a>,
+which detects objects in rectangular regions, and
+<a href="../../examples/image_classification/overview">image
classification</a>, which classifies the overall image. See the
-[introduction of image segmentation](../../models/segmentation/overview.md) for
-more information about image segmenters.
+[image segmentation overview](../../examples/segmentation/overview) for more
+information about image segmenters.
Use the Task Library `ImageSegmenter` API to deploy your custom image segmenters
or pretrained ones into your mobile apps.
@@ -147,7 +147,7 @@
## Model compatibility requirements
The `ImageSegmenter` API expects a TFLite model with mandatory
-[TFLite Model Metadata](../../convert/metadata.md). See examples of creating
+[TFLite Model Metadata](../../convert/metadata). See examples of creating
metadata for image segmenters using the
[TensorFlow Lite Metadata Writer API](../../convert/metadata_writer_tutorial.ipynb#image_segmenters).
diff --git a/tensorflow/lite/g3doc/inference_with_metadata/task_library/nl_classifier.md b/tensorflow/lite/g3doc/inference_with_metadata/task_library/nl_classifier.md
index e13dae1..663eab7 100644
--- a/tensorflow/lite/g3doc/inference_with_metadata/task_library/nl_classifier.md
+++ b/tensorflow/lite/g3doc/inference_with_metadata/task_library/nl_classifier.md
@@ -18,7 +18,7 @@
The following models are guaranteed to be compatible with the `NLClassifier`
API.
-* The <a href="../../models/text_classification/overview.md">movie review
+* The <a href="../../examples/text_classification/overview">movie review
sentiment classification</a> model.
* Models with `average_word_vec` spec created by
@@ -136,7 +136,7 @@
## Example results
Here is an example of the classification results of the
-[movie review model](https://www.tensorflow.org/lite/models/text_classification/overview).
+[movie review model](https://www.tensorflow.org/lite/examples/text_classification/overview).
Input: "What a waste of my time."
@@ -154,7 +154,7 @@
## Model compatibility requirements
Depending on the use case, the `NLClassifier` API can load a TFLite model with
-or without [TFLite Model Metadata](../../convert/metadata.md). See examples of
+or without [TFLite Model Metadata](../../convert/metadata). See examples of
creating metadata for natural language classifiers using the
[TensorFlow Lite Metadata Writer API](../../convert/metadata_writer_tutorial.ipynb#nl_classifiers).
@@ -165,7 +165,7 @@
- Input of the model should be either a kTfLiteString tensor raw input
string or a kTfLiteInt32 tensor for regex tokenized indices of raw input
string.
- - If input type is kTfLiteString, no [Metadata](../../convert/metadata.md)
+ - If input type is kTfLiteString, no [Metadata](../../convert/metadata)
is required for the model.
- If input type is kTfLiteInt32, a `RegexTokenizer` needs to be set up in
the input tensor's
@@ -180,7 +180,7 @@
corresponding platforms
- Can have an optional associated file in the output tensor's
- corresponding [Metadata](../../convert/metadata.md) for category labels,
+ corresponding [Metadata](../../convert/metadata) for category labels,
the file should be a plain text file with one label per line, and the
number of labels should match the number of categories as the model
outputs. See the
diff --git a/tensorflow/lite/g3doc/inference_with_metadata/task_library/object_detector.md b/tensorflow/lite/g3doc/inference_with_metadata/task_library/object_detector.md
index 772670b..d86871f 100644
--- a/tensorflow/lite/g3doc/inference_with_metadata/task_library/object_detector.md
+++ b/tensorflow/lite/g3doc/inference_with_metadata/task_library/object_detector.md
@@ -4,10 +4,10 @@
and provide information about their positions within the given image or a video
stream. An object detector is trained to detect the presence and location of
multiple classes of objects. For example, a model might be trained with images
-that contain various pieces of fruit, along with a _label_ that specifies the
+that contain various pieces of fruit, along with a *label* that specifies the
class of fruit they represent (e.g. an apple, a banana, or a strawberry), and
data specifying where each object appears in the image. See the
-[introduction of object detection](../../models/object_detection/overview.md)
+[introduction of object detection](../../examples/object_detection/overview)
for more information about object detectors.
Use the Task Library `ObjectDetector` API to deploy your custom object detectors
@@ -152,7 +152,7 @@
## Model compatibility requirements
The `ObjectDetector` API expects a TFLite model with mandatory
-[TFLite Model Metadata](../../convert/metadata.md). See examples of creating
+[TFLite Model Metadata](../../convert/metadata). See examples of creating
metadata for object detectors using the
[TensorFlow Lite Metadata Writer API](../../convert/metadata_writer_tutorial.ipynb#object_detectors).
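Similarly, a hedged Java sketch of the `ObjectDetector` API described here, assuming the Task Vision dependency and a placeholder model filename:

```java
import android.content.Context;
import android.graphics.Bitmap;
import java.io.IOException;
import java.util.List;
import org.tensorflow.lite.support.image.TensorImage;
import org.tensorflow.lite.task.vision.detector.Detection;
import org.tensorflow.lite.task.vision.detector.ObjectDetector;
import org.tensorflow.lite.task.vision.detector.ObjectDetector.ObjectDetectorOptions;

/** Detects objects in a bitmap and returns labeled bounding boxes. */
public final class DetectObjects {
  public static List<Detection> run(Context context, Bitmap bitmap) throws IOException {
    ObjectDetectorOptions options = ObjectDetectorOptions.builder().setMaxResults(5).build();
    // Placeholder filename; any detector with TFLite Model Metadata works.
    ObjectDetector detector =
        ObjectDetector.createFromFileAndOptions(context, "object_detector.tflite", options);
    return detector.detect(TensorImage.fromBitmap(bitmap));
  }
}
```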
diff --git a/tensorflow/lite/g3doc/performance/best_practices.md b/tensorflow/lite/g3doc/performance/best_practices.md
index ae5ffa1..22e6942 100644
--- a/tensorflow/lite/g3doc/performance/best_practices.md
+++ b/tensorflow/lite/g3doc/performance/best_practices.md
@@ -20,13 +20,13 @@
One example of models optimized for mobile devices are
[MobileNets](https://arxiv.org/abs/1704.04861), which are optimized for mobile
-vision applications. [Hosted models](../guide/hosted_models.md) lists several
-other models that have been optimized specifically for mobile and embedded
-devices.
+vision applications.
+[TensorFlow Hub](https://tfhub.dev/s?deployment-format=lite) lists several other
+models that have been optimized specifically for mobile and embedded devices.
You can retrain the listed models on your own dataset by using transfer
-learning. Check out our transfer learning tutorial for
-[image classification](/lite/tutorials/model_maker_image_classification) and
+learning. Check out the transfer learning tutorial for
+[image classification](../tutorials/model_maker_image_classification.ipynb) and
[object detection](https://medium.com/tensorflow/training-and-serving-a-realtime-mobile-object-detector-in-30-minutes-with-cloud-tpus-b78971cf1193).
## Profile your model
@@ -39,7 +39,7 @@
computation time.
You can also use
-[TensorFlow Lite tracing](measurement.md#trace_tensorflow_lite_internals_in_android)
+[TensorFlow Lite tracing](measurement#trace_tensorflow_lite_internals_in_android)
to profile the model in your Android application, using standard Android system
tracing, and to visualize the operator invocations by time with GUI based
profiling tools.
@@ -51,8 +51,8 @@
look into optimizing that operator. This scenario should be rare as TensorFlow
Lite has optimized versions for most operators. However, you may be able to
write a faster version of a custom op if you know the constraints in which the
-operator is executed. Check out our
-[custom operator documentation](../custom_operators.md).
+operator is executed. Check out the
+[custom operators guide](../guide/ops_custom).
## Optimize your model
@@ -60,7 +60,7 @@
more energy efficient, so that they can be deployed on mobile devices.
TensorFlow Lite supports multiple optimization techniques, such as quantization.
-Check out our [model optimization docs](model_optimization.md) for details.
+Check out the [model optimization docs](model_optimization) for details.
## Tweak the number of threads
@@ -101,27 +101,29 @@
TensorFlow Lite has added new ways to accelerate models with faster hardware
like GPUs, DSPs, and neural accelerators. Typically, these accelerators are
-exposed through [delegate](delegates.md) submodules that take over parts of the
+exposed through [delegate](delegates) submodules that take over parts of the
interpreter execution. TensorFlow Lite can use delegates by:
* Using Android's
[Neural Networks API](https://developer.android.com/ndk/guides/neuralnetworks/).
You can utilize these hardware accelerator backends to improve the speed and
- efficiency of your model. To enable the Neural Networks API, check out
- the [NNAPI delegate](nnapi.md) guide.
+ efficiency of your model. To enable the Neural Networks API, check out the
+ [NNAPI delegate](https://www.tensorflow.org/lite/android/delegates/nnapi)
+ guide.
* GPU delegate is available on Android and iOS, using OpenGL/OpenCL and Metal,
- respectively. To try them out, see the [GPU delegate tutorial](gpu.md) and
- [documentation](gpu_advanced.md).
+ respectively. To try them out, see the [GPU delegate tutorial](gpu) and
+ [documentation](gpu_advanced).
* Hexagon delegate is available on Android. It leverages the Qualcomm Hexagon
DSP if it is available on the device. See the
- [Hexagon delegate tutorial](hexagon_delegate.md) for more information.
+ [Hexagon delegate tutorial](https://www.tensorflow.org/lite/android/delegates/hexagon)
+ for more information.
* It is possible to create your own delegate if you have access to
- non-standard hardware. See [TensorFlow Lite delegates](delegates.md) for
- more information.
+ non-standard hardware. See [TensorFlow Lite delegates](delegates) for more
+ information.
Be aware that some accelerators work better for different types of models. Some
delegates only support float models or models optimized in a specific way. It is
-important to [benchmark](measurement.md) each delegate to see if it is a good
+important to [benchmark](measurement) each delegate to see if it is a good
choice for your application. For example, if you have a very small model, it may
not be worth delegating the model to either the NN API or the GPU. Conversely,
accelerators are a great choice for large models that have high arithmetic
diff --git a/tensorflow/lite/g3doc/performance/post_training_quantization.md b/tensorflow/lite/g3doc/performance/post_training_quantization.md
index abbdb2a..a2e93ef 100644
--- a/tensorflow/lite/g3doc/performance/post_training_quantization.md
+++ b/tensorflow/lite/g3doc/performance/post_training_quantization.md
@@ -252,13 +252,12 @@
Since weights are quantized post training, there could be an accuracy loss,
particularly for smaller networks. Pre-trained fully quantized models are
-provided for specific networks in the
-[TensorFlow Lite model repository](../models/). It is important to check the
-accuracy of the quantized model to verify that any degradation in accuracy is
-within acceptable limits. There are tools to evaluate
+provided for specific networks on
+[TensorFlow Hub](https://tfhub.dev/s?deployment-format=lite&q=quantized){:.external}.
+It is important to check the accuracy of the quantized model to verify that any
+degradation in accuracy is within acceptable limits. There are tools to evaluate
[TensorFlow Lite model accuracy](https://github.com/tensorflow/tensorflow/tree/master/tensorflow/lite/tools/evaluation/tasks){:.external}.
-
Alternatively, if the accuracy drop is too high, consider using
[quantization aware training](https://www.tensorflow.org/model_optimization/guide/quantization/training)
. However, doing so requires modifications during model training to add fake
@@ -281,6 +280,6 @@
the range [-128, 127], with a zero-point in range [-128, 127].
For a detailed view of our quantization scheme, please see our
-[quantization spec](./quantization_spec.md). Hardware vendors who want to plug
+[quantization spec](./quantization_spec). Hardware vendors who want to plug
into TensorFlow Lite's delegate interface are encouraged to implement the
quantization scheme described there.
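As a concrete, made-up illustration of the affine mapping in that spec: with `scale = 0.05` and `zero_point = -3`, a stored int8 value of `17` represents the real value `(17 - (-3)) * 0.05 = 1.0`, and the representable range runs from `(-128 - (-3)) * 0.05 = -6.25` to `(127 - (-3)) * 0.05 = 6.5`.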
diff --git a/tensorflow/lite/g3doc/tutorials/_index.yaml b/tensorflow/lite/g3doc/tutorials/_index.yaml
deleted file mode 100644
index 9c3cb46..0000000
--- a/tensorflow/lite/g3doc/tutorials/_index.yaml
+++ /dev/null
@@ -1,189 +0,0 @@
-book_path: /lite/_book.yaml
-project_path: /lite/_project.yaml
-title: Tutorials
-landing_page:
- custom_css_path: /site-assets/css/style.css
- nav: left
- meta_tags:
- - name: description
- content: >
- TensorFlow Lite tutorials to help you get started with machine learning on Android, iOS,
- Raspberry Pi and IoT devices.
-
- rows:
- # Pre-trained models
- - classname: devsite-landing-row-100
- items:
- - description: >
- <h2 class="tfo-landing-page-heading no-link">Getting Started</h2>
- TensorFlow Lite is an open-source deep learning framework to run TensorFlow models
- on-device. If you are new to TensorFlow Lite, we recommend that you first explore the
- <a href="/lite/models">pre-trained models</a> and run the example
- apps below on a real device to see what TensorFlow Lite can do.
-
- - classname: devsite-landing-row-100
- items:
- - classname: tfo-landing-page-card
- description: >
- <a href="/lite/examples/object_detection/overview">
- <h3 class="no-link">Object Detection</h3>
- </a>
- Detect objects in real time from a camera feed with a MobileNet model.
- path: /lite/examples/object_detection/overview
- - classname: tfo-landing-page-card
- description: >
- <a href="/lite/examples/audio_classification/overview">
- <h3 class="no-link">Audio Classification</h3>
- </a>
- Identify what an audio represents, e.g. clapping or typing.
- path: /lite/examples/audio_classification/overview
-
- # Mobile developers
- - classname: devsite-landing-row-100
- items:
- - description: >
- <h3 class="tfo-landing-page-heading no-link">For mobile developers</h3>
- If you are a mobile developer without much experience with machine learning and
- TensorFlow, you can start by learning how to train a model and deploy to a
- mobile app with TensorFlow Lite Model Maker.
-
- - classname: devsite-landing-row-100
- items:
- - classname: tfo-landing-page-card
- description: >
- <a href="https://codelabs.developers.google.com/codelabs/recognize-flowers-with-tensorflow-on-android/#0">
- <h3 class="no-link">Recognize flowers on Android</h3>
- </a>
- A quick start tutorial for Android. Train a flower classification model and deploy it to an
- Android application.
- path: https://codelabs.developers.google.com/codelabs/recognize-flowers-with-tensorflow-on-android/#0
- - classname: tfo-landing-page-card
- description: >
- <a href="https://codelabs.developers.google.com/codelabs/recognize-flowers-with-tensorflow-on-ios/#0">
- <h3 class="no-link">Recognize flowers on iOS</h3>
- </a>
- A quick start tutorial for iOS. Train a flower classification model and deploy it to an iOS
- application.
- path: https://codelabs.developers.google.com/codelabs/recognize-flowers-with-tensorflow-on-ios/#0
-
- # Model creators
- - classname: devsite-landing-row-100
- items:
- - description: >
- <h3 class="tfo-landing-page-heading no-link">For model creators</h3>
- If you are already familiar with TensorFlow and interested in deploying to edge devices,
- then you can start with the below tutorial to learn how to convert a TensorFlow model to
- TensorFlow Lite format and optimize it for on-device inference.
-
- - classname: devsite-landing-row-100
- items:
- - classname: tfo-landing-page-card
- description: >
- <a href="https://codelabs.developers.google.com/codelabs/digit-classifier-tflite/#0">
- <h3 class="no-link">Recognize handwritten digits</h3>
- </a>
- A quick start end-to-end tutorial on converting and optimizing a TensorFlow model for
- on-device inference, then deploy it to an Android app.
- path: https://codelabs.developers.google.com/codelabs/digit-classifier-tflite/#0
- - classname: tfo-landing-page-card
- description: >
- <a href="/lite/tutorials/model_maker_image_classification">
- <h3 class="no-link">Transfer learning for image classification</h3>
- </a>
- Learn how to use TensorFlow Lite Model Maker to quickly create image classification models.
- path: /lite/tutorials/model_maker_image_classification
-
- # IoT developers
- ## Linux-based IoT devices
- - classname: devsite-landing-row-100
- items:
- - description: >
- <h3 class="tfo-landing-page-heading no-link">For IoT developers</h3>
- If you are interested in deploying a TensorFlow model to Linux-based IoT devices such as
- Raspberry Pi, then you can try out these tutorials on how to implement computer vision tasks
- on IoT devices.
-
- - classname: devsite-landing-row-100
- items:
- - classname: tfo-landing-page-card
- description: >
- <a href="https://github.com/tensorflow/examples/blob/master/lite/examples/image_classification/raspberry_pi/">
- <h3 class="no-link">Image classification on Raspberry Pi</h3>
- </a>
- Perform real-time image classification using images streamed from the Pi Camera.
- path: https://github.com/tensorflow/examples/blob/master/lite/examples/image_classification/raspberry_pi/
- - classname: tfo-landing-page-card
- description: >
- <a href="https://github.com/tensorflow/examples/blob/master/lite/examples/object_detection/raspberry_pi/">
- <h3 class="no-link">Object Detection on Raspberry Pi</h3>
- </a>
- Perform real-time object detection using images streamed from the Pi Camera.
- path: https://github.com/tensorflow/examples/blob/master/lite/examples/object_detection/raspberry_pi/
- ## Microcontrollers
- - classname: devsite-landing-row-100
- items:
- - description: >
- If you are interested in deploying a TensorFlow model to microcontrollers which are much
- more resource constrained, then you can start with these tutorials that demonstrate an
- end-to-end workflow from developing a TensorFlow model to converting to a TensorFlow Lite
- format and deploying to a microcontroller with TensorFlow Lite Micro.
-
- - classname: devsite-landing-row-100
- items:
- - classname: tfo-landing-page-card
- description: >
- <a href="https://codelabs.developers.google.com/codelabs/sparkfun-tensorflow/#0">
- <h3 class="no-link">Hotword detection</h3>
- </a>
- Train a tiny speech model that can detect simple hotwords.
- path: https://codelabs.developers.google.com/codelabs/sparkfun-tensorflow/#0
- - classname: tfo-landing-page-card
- description: >
- <a href="https://blog.tensorflow.org/2019/11/how-to-get-started-with-machine.html">
- <h3 class="no-link">Gesture recognition</h3>
- </a>
- Train a model that can recognize different gestures using accelerometer data.
- path: https://blog.tensorflow.org/2019/11/how-to-get-started-with-machine.html
-
-
- # Next steps
- - classname: devsite-landing-row-100
- items:
- - description: >
- <h2 class="tfo-landing-page-heading no-link">Next steps</h2>
- <p>After you have familiarized yourself with the workflow of training a TensorFlow model,
- converting it to a TensorFlow Lite format, and deploying it to mobile apps, you can learn
- more about TensorFlow Lite with the below materials:</p>
- <ul>
- <li>
- Try out the different domain tutorials (e.g. vision, speech) from the left navigation
- bar. They show you how to train a model for a specific machine learning task, such as
- <a href="/lite/tutorials/model_maker_object_detection">object detection</a>
- or
- <a href="/lite/tutorials/model_maker_text_classification">sentiment analysis</a>.
- </li>
- <li>
- Learn more about the development workflow in the TensorFlow Lite
- <a href="https://www.tensorflow.org/lite/guide">Guide</a>.
- You can find in-depth information about TensorFlow Lite features, such as
- <a href="https://www.tensorflow.org/lite/convert">model conversion</a>
- or
- <a href="https://www.tensorflow.org/lite/performance/model_optimization">model optimization</a>.
- </li>
- <li>
- Check out this free
- <a href="https://www.udacity.com/course/intro-to-tensorflow-lite--ud190">e-learning course</a>
- on TensorFlow Lite.
- </li>
- </ul>
-
- # Blogs and videos
- - classname: devsite-landing-row-100
- items:
- - description: >
- <h2 class="tfo-landing-page-heading no-link">Blogs and videos</h2>
- <p>Subscribe to the
- <a href="https://blog.tensorflow.org/search?label=TensorFlow+Lite&max-results=20">TensorFlow blog</a>,
- <a href="https://www.youtube.com/tensorflow">YouTube channel</a>,
- and <a href="https://twitter.com/tensorflow">Twitter</a> for the latest updates.
- </p>
diff --git a/tensorflow/lite/g3doc/tutorials/model_maker_image_classification.ipynb b/tensorflow/lite/g3doc/tutorials/model_maker_image_classification.ipynb
index f82b7f7..c07cd0e 100644
--- a/tensorflow/lite/g3doc/tutorials/model_maker_image_classification.ipynb
+++ b/tensorflow/lite/g3doc/tutorials/model_maker_image_classification.ipynb
@@ -577,7 +577,7 @@
"id": "ROS2Ay2jMPCl"
},
"source": [
- "See [example applications and guides of image classification](https://www.tensorflow.org/lite/models/image_classification/overview#example_applications_and_guides) for more details about how to integrate the TensorFlow Lite model into mobile apps.\n",
+ "See [example applications and guides of image classification](https://www.tensorflow.org/lite/examples/image_classification/overview) for more details about how to integrate the TensorFlow Lite model into mobile apps.\n",
"\n",
"This model can be integrated into an Android or an iOS app using the [ImageClassifier API](https://www.tensorflow.org/lite/inference_with_metadata/task_library/image_classifier) of the [TensorFlow Lite Task Library](https://www.tensorflow.org/lite/inference_with_metadata/task_library/overview)."
]
diff --git a/tensorflow/lite/tools/evaluation/proto/evaluation_stages.proto b/tensorflow/lite/tools/evaluation/proto/evaluation_stages.proto
index 37f0d57..4c51e58 100644
--- a/tensorflow/lite/tools/evaluation/proto/evaluation_stages.proto
+++ b/tensorflow/lite/tools/evaluation/proto/evaluation_stages.proto
@@ -292,7 +292,7 @@
// Required.
// Model's outputs should be same as a TFLite-compatible SSD model.
// Refer:
- // https://www.tensorflow.org/lite/models/object_detection/overview#output
+ // https://www.tensorflow.org/lite/examples/object_detection/overview#output_signature
optional TfliteInferenceParams inference_params = 1;
// Optional. Used to match ground-truth categories with model output.
// SSD Mobilenet V1 Model trained on COCO assumes class 0 is background class
diff --git a/tensorflow/lite/tools/evaluation/stages/object_detection_stage.h b/tensorflow/lite/tools/evaluation/stages/object_detection_stage.h
index f7826d7..e23259d 100644
--- a/tensorflow/lite/tools/evaluation/stages/object_detection_stage.h
+++ b/tensorflow/lite/tools/evaluation/stages/object_detection_stage.h
@@ -35,7 +35,7 @@
// Assumes that the object detection model's signature (number of
// inputs/outputs, ordering of outputs & what they denote) is same as the
// MobileNet SSD model:
-// https://www.tensorflow.org/lite/models/object_detection/overview#output.
+// https://www.tensorflow.org/lite/examples/object_detection/overview#output_signature.
// Input size/type & number of detections could be different.
//
// This class will be extended to support other types of detection models, if
diff --git a/tensorflow/lite/tools/evaluation/tasks/coco_object_detection/README.md b/tensorflow/lite/tools/evaluation/tasks/coco_object_detection/README.md
index 652d9ae..cddcdaf 100644
--- a/tensorflow/lite/tools/evaluation/tasks/coco_object_detection/README.md
+++ b/tensorflow/lite/tools/evaluation/tasks/coco_object_detection/README.md
@@ -42,7 +42,7 @@
* `model_file` : `string` \
Path to the TFlite model file. It should accept images preprocessed in the
Inception format, and the output signature should be similar to the
- [SSD MobileNet model](https://www.tensorflow.org/lite/models/object_detection/overview#output.):
+ [SSD MobileNet model](https://www.tensorflow.org/lite/examples/object_detection/overview#output_signature):
* `model_output_labels`: `string` \
Path to labels that correspond to output of model. E.g. in case of