Update TensorFlow Lite for Microcontrollers docs and readmes

PiperOrigin-RevId: 276798720
Change-Id: I4960661621d5df3382bdccf45764500257ba3926
diff --git a/tensorflow/lite/experimental/micro/README.md b/tensorflow/lite/experimental/micro/README.md
index 71a9daf..39c4c02 100644
--- a/tensorflow/lite/experimental/micro/README.md
+++ b/tensorflow/lite/experimental/micro/README.md
@@ -1,596 +1,18 @@
 # TensorFlow Lite for Microcontrollers
 
-This is an experimental port of TensorFlow Lite aimed at micro controllers and
-other devices with only kilobytes of memory. It doesn't require any operating
-system support, any standard C or C++ libraries, or dynamic memory allocation,
-so it's designed to be portable even to 'bare metal' systems. The core runtime
-fits in 16KB on a Cortex M3, and with enough operators to run a speech keyword
-detection model, takes up a total of 22KB.
+TensorFlow Lite for Microcontrollers is an experimental port of TensorFlow Lite
+designed to run machine learning models on microcontrollers and other devices
+with only kilobytes of memory.
 
-## Table of Contents
+To learn how to use the framework, visit the developer documentation at
+[tensorflow.org/lite/microcontrollers](https://www.tensorflow.org/lite/microcontrollers).
 
--   [Getting Started](#getting-started)
-    *   [Examples](#examples)
-    *   [Getting Started with Portable Reference Code](#getting-started-with-portable-reference-code)
-    *   [Building Portable Reference Code using Make](#building-portable-reference-code-using-make)
-    *   [Building for the "Blue Pill" STM32F103 using Make](#building-for-the-blue-pill-stm32f103-using-make)
-    *   [Building for "Hifive1" SiFive FE310 development board using Make](#building-for-hifive1-sifive-fe310-development-board)
-    *   [Building for Ambiq Micro Apollo3Blue EVB using Make](#building-for-ambiq-micro-apollo3blue-evb-using-make)
-        *   [Additional Apollo3 Instructions](#additional-apollo3-instructions)
-    *   [Building for the Eta Compute ECM3531 EVB using Make](#Building-for-the-Eta-Compute-ECM3531-EVB-using-Make)
+## Porting to a new platform
 
--   [Goals](#goals)
-
--   [Generating Project Files](#generating-project-files)
-
--   [Generating Arduino Libraries](#generating-arduino-libraries)
-
--   [How to Port TensorFlow Lite Micro to a New Platform](#how-to-port-tensorflow-lite-micro-to-a-new-platform)
-
-    *   [Requirements](#requirements)
-    *   [Getting Started](#getting-started-1)
-    *   [Troubleshooting](#troubleshooting)
-    *   [Optimizing for your Platform](#optimizing-for-your-platform)
-    *   [Code Module Organization](#code-module-organization)
-    *   [Working with Generated Projects](#working-with-generated-projects)
-    *   [Supporting a Platform with Makefiles](#supporting-a-platform-with-makefiles)
-    *   [Supporting a Platform with Emulation Testing](#supporting-a-platform-with-emulation-testing)
-    *   [Implementing More Optimizations](#implementing-more-optimizations)
-
-# Getting Started
-
-## Examples
-
-The fastest way to learn how TensorFlow Lite for Microcontrollers works is by
-exploring and running our examples, which include application code and trained
-TensorFlow models.
-
-The following examples are available:
-
-- [hello_world](https://github.com/tensorflow/tensorflow/tree/master/tensorflow/lite/experimental/micro/examples/hello_world)
-  * Uses a very simple model, trained to reproduce a sine wave, to control an
-    LED or animation
-  * Application code for Arduino, SparkFun Edge, and STM32F746
-  * Colab walkthrough of model training and conversion
-
-- [micro_speech](https://github.com/tensorflow/tensorflow/tree/master/tensorflow/lite/experimental/micro/examples/micro_speech)
-  * Uses a 20 KB model to recognize keywords in spoken audio
-  * Application code for Arduino, SparkFun Edge, and STM32F746
-  * Python scripts for model training and conversion
-
-- [person_detection](https://github.com/tensorflow/tensorflow/tree/master/tensorflow/lite/experimental/micro/examples/person_detection)
-  * Uses a 250 KB model to recognize presence or absence of a person in images
-    captured by a camera
-  * Application code for SparkFun Edge
-
-- [magic_wand](https://github.com/tensorflow/tensorflow/tree/master/tensorflow/lite/experimental/micro/examples/magic_wand)
-  * Uses a 20 KB model to recognize gestures using accelerometer data
-  * Application code for Arduino and SparkFun Edge
-
-## Pre-generated Project Files
-
-One of the challenges of embedded software development is that there are a lot
-of different architectures, devices, operating systems, and build systems. We
-aim to support as many of the popular combinations as we can, and make it as
-easy as possible to add support for others.
-
-If you're a product developer, we have build instructions or pre-generated
-project files that you can download for the following platforms:
-
-Device                                                                                         | Mbed                                                                           | Keil                                                                           | Make/GCC
----------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------ | ------------------------------------------------------------------------------ | --------
-[STM32F746G Discovery Board](https://www.st.com/en/evaluation-tools/32f746gdiscovery.html)     | [Download](https://drive.google.com/open?id=1OtgVkytQBrEYIpJPsE8F6GUKHPBS3Xeb) | -                                                                              | [Instructions](#generating-project-files)
-["Blue Pill" STM32F103-compatible development board](https://github.com/google/stm32_bare_lib) | -                                                                              | -                                                                              | [Instructions](#building-for-the-blue-pill-stm32f103-using-make)
-[Ambiq Micro Apollo3Blue EVB](https://ambiqmicro.com/apollo-ultra-low-power-mcus/)             | -                                                                              | -                                                                              | [Instructions](#building-for-ambiq-micro-apollo3blue-evb-using-make)
-[Generic Keil uVision Projects](http://www2.keil.com/mdk5/uvision/)                            | -                                                                              | [Download](https://drive.google.com/open?id=1Lw9rsdquNKObozClLPoE5CTJLuhfh5mV) | -
-[Eta Compute ECM3531 EVB](https://etacompute.com/)                                             | -                                                                              | -                                                                              | [Instructions](#Building-for-the-Eta-Compute-ECM3531-EVB-using-Make)
-
-If your device is not yet supported, it may not be too hard to add support. You
-can learn about that process
-[here](#how-to-port-tensorflow-lite-micro-to-a-new-platform). We're looking
-forward to getting your help expanding this table!
-
-## Getting Started with Portable Reference Code
-
-If you don't have a particular microcontroller platform in mind yet, or just
-want to try out the code before beginning porting, the easiest way to begin is
-by
-[downloading the platform-agnostic reference code](https://drive.google.com/open?id=1cawEQAkqquK_SO4crReDYqf_v7yAwOY8).
-You'll see a series of folders inside the archive, with each one containing just
-the source files you need to build one binary. There is a simple Makefile for
-each folder, but you should be able to load the files into almost any IDE and
-build them. There's also a [Visual Studio Code](https://code.visualstudio.com/) project file already set up, so
-you can easily explore the code in a cross-platform IDE.
-
-## Building Portable Reference Code using Make
-
-It's easy to build portable reference code directly from GitHub using make if
-you're on a Linux or OS X machine with an internet connection.
-
--   Open a terminal
--   Download the TensorFlow source with `git clone
-    https://github.com/tensorflow/tensorflow.git`
--   Enter the source root directory by running `cd tensorflow`
--   Build and test the library with `make -f
-    tensorflow/lite/experimental/micro/tools/make/Makefile test`
-
-You should see a series of compilation steps, followed by `~~~ALL TESTS
-PASSED~~~` for the various tests of the code that it will run. If there's an
-error, you should get an informative message from make about what went wrong.
-
-These tests are all built as simple binaries with few dependencies, so you can
-run them manually. For example, here's how to run the depthwise convolution
-test, and its output:
-
-```
-tensorflow/lite/experimental/micro/tools/make/gen/linux_x86_64/bin/depthwise_conv_test
-
-Testing SimpleTest
-Testing SimpleTestQuantized
-Testing SimpleTestRelu
-Testing SimpleTestReluQuantized
-4/4 tests passed
-~~~ALL TESTS PASSED~~~
-```
-
-Looking at the
-[depthwise_conv_test.cc](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/experimental/micro/kernels/depthwise_conv_test.cc)
-code, you'll see a sequence that looks like this:
-
-```
-...
-TF_LITE_MICRO_TESTS_BEGIN
-
-TF_LITE_MICRO_TEST(SimpleTest) {
-...
-}
-...
-TF_LITE_MICRO_TESTS_END
-```
-
-These macros work a lot like
-[the Google test framework](https://github.com/google/googletest), but they
-don't require any dependencies and just write results to stderr, rather than
-aborting the program. If all the tests pass, then `~~~ALL TESTS PASSED~~~` is
-output, and the test harness that runs the binary during the make process knows
-that everything ran correctly. If there's an error, the lack of the expected
-string lets the harness know that the test failed.
-
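-For instance, a complete test file built on these macros might look something
-like the following sketch (the test body here is illustrative, not taken from
-the real kernel tests):
-
-```
-#include "tensorflow/lite/experimental/micro/testing/micro_test.h"
-
-TF_LITE_MICRO_TESTS_BEGIN
-
-TF_LITE_MICRO_TEST(SimpleAddition) {
-  // On failure this logs to the debug console and marks the test as failed,
-  // rather than aborting the program.
-  TF_LITE_MICRO_EXPECT_EQ(4, 2 + 2);
-}
-
-TF_LITE_MICRO_TESTS_END
-```
-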
-So, why are we running tests in this complicated way? So far, we've been
-building binaries that run locally on the Mac OS or Linux machine you're
-building on, but this approach becomes important when we're targeting simple
-micro controller devices.
-
-## Building for the "Blue Pill" STM32F103 using Make
-
-The goal of this library is to enable machine learning on resource-constrained
-micro controllers and DSPs, and as part of that we've targeted the
-["Blue Pill" STM32F103-compatible development board](https://github.com/google/stm32_bare_lib)
-as a cheap and popular platform. It only has 20KB of RAM and 64KB of flash, so
-it's a good device to ensure we can run efficiently on small chips.
-
-It's fairly easy to
-[buy and wire up a physical board](https://github.com/google/stm32_bare_lib#wiring-up-your-blue-pill),
-but even if you don't have an actual device, the
-[Renode project](https://renode.io/) makes it easy to run a faithful emulation
-on your desktop machine. You'll need [Docker](https://www.docker.com/)
-installed, but once you have that set up, try running the following command:
-
-`make -f tensorflow/lite/experimental/micro/tools/make/Makefile TARGET=bluepill
-test`
-
-You should see a similar set of outputs as you did in the previous section, with
-the addition of some extra Docker logging messages. These are because we're
-using Docker to run the Renode micro controller emulation tool, and the tests
-themselves are being run on a simulated STM32F103 device. The communication
-channels between an embedded device and the host are quite limited, so the test
-harness looks at the output of the debug log to see if tests have passed, just
-as it did in the previous section. This makes it a very flexible way to run
-cross-platform tests, even when a platform has no operating system facilities,
-as long as it can output debugging text logs.
-
-To understand what's happening here, try running the same depthwise convolution
-test, but through the emulated device test harness, with the following command:
-
-```
-tensorflow/lite/experimental/micro/testing/test_bluepill_binary.sh \
-tensorflow/lite/experimental/micro/tools/make/gen/bluepill_cortex-m3/bin/depthwise_conv_test \
-'~~~ALL TESTS PASSED~~~'
-
-```
-
-You should see output that looks something like this:
-
-```
-Sending build context to Docker daemon   21.5kB
-Step 1/2 : FROM antmicro/renode:latest
- ---> 1b670a243e8f
-Step 2/2 : LABEL maintainer="Pete Warden <petewarden@google.com>"
- ---> Using cache
- ---> 3afcd410846d
-Successfully built 3afcd410846d
-Successfully tagged renode_bluepill:latest
-LOGS:
-...
-03:27:32.4340 [INFO] machine-0: Machine started.
-03:27:32.4790 [DEBUG] cpu.uartSemihosting: [+0.22s host +0s virt 0s virt from start] Testing SimpleTest
-03:27:32.4812 [DEBUG] cpu.uartSemihosting: [+2.21ms host +0s virt 0s virt from start]   Testing SimpleTestQuantized
-03:27:32.4833 [DEBUG] cpu.uartSemihosting: [+2.14ms host +0s virt 0s virt from start]   Testing SimpleTestRelu
-03:27:32.4834 [DEBUG] cpu.uartSemihosting: [+0.18ms host +0s virt 0s virt from start]   Testing SimpleTestReluQuantized
-03:27:32.4838 [DEBUG] cpu.uartSemihosting: [+0.4ms host +0s virt 0s virt from start]   4/4 tests passed
-03:27:32.4839 [DEBUG] cpu.uartSemihosting: [+41µs host +0s virt 0s virt from start]   ~~~ALL TESTS PASSED~~~
-03:27:32.4839 [DEBUG] cpu.uartSemihosting: [+5µs host +0s virt 0s virt from start]
-...
-tensorflow/lite/experimental/micro/tools/make/gen/bluepill_cortex-m3/bin/depthwise_conv_test: PASS
-```
-
-There's a lot of output here, but you should be able to see that the same tests
-that were covered when we ran locally on the development machine show up in the
-debug logs here, along with the magic string `~~~ALL TESTS PASSED~~~`. This is
-the exact same code as before, just compiled and run on the STM32F103 rather
-than your desktop. We hope that the simplicity of this testing approach will
-help make adding support for new platforms as easy as possible.
-
-## Building for "HiFive1" SiFive FE310 development board
-
-We've targeted the
-["HiFive1" Arduino-compatible development board](https://www.sifive.com/boards/hifive1)
-as a test platform for RISC-V MCUs.
-
-Similar to the Blue Pill setup, you will need Docker installed. The binary can
-be executed on either the HiFive1 board or emulated on your desktop machine
-using the [Renode project](https://renode.io/).
-
-The following command builds the Docker image used to compile and test the
-code: `docker build -t riscv_build -f
-{PATH_TO_TENSORFLOW_ROOT_DIR}/tensorflow/lite/experimental/micro/testing/Dockerfile.riscv
-{PATH_TO_TENSORFLOW_ROOT_DIR}/tensorflow/lite/experimental/micro/testing/`
-
-You should see output that looks something like this:
-
-```
-Sending build context to Docker daemon  28.16kB
-Step 1/4 : FROM antmicro/renode:latest
- ---> 19c08590e817
-Step 2/4 : LABEL maintainer="Pete Warden <petewarden@google.com>"
- ---> Using cache
- ---> 5a7770d3d3f5
-Step 3/4 : RUN apt-get update
- ---> Using cache
- ---> b807ab77eeb1
-Step 4/4 : RUN apt-get install -y curl git unzip make g++
- ---> Using cache
- ---> 8da1b2aa2438
-Successfully built 8da1b2aa2438
-Successfully tagged riscv_build:latest
-```
-
-To build the micro_speech_test binary:
-
--   Launch the Docker image that we just created: `docker run -it -v
-    /tmp/copybara_out:/workspace riscv_build:latest bash`
--   Enter the source root directory by running `cd /workspace`
--   Set the path to RISC-V tools: `export
-    PATH=${PATH}:/workspace/tensorflow/lite/experimental/micro/tools/make/downloads/riscv_toolchain/bin/`
--   Build the binary: `make -f
-    tensorflow/lite/experimental/micro/tools/make/Makefile TARGET=riscv32_mcu`
-
-Launch Renode to test the binary (currently this setup is not automated):
-
--   Execute the binary on Renode: `renode -P 5000 --disable-xwt -e 's
-    @/workspace/tensorflow/lite/experimental/micro/testing/sifive_fe310.resc'`
-
-You should see the following log with the magic string `~~~ALL TESTS PASSED~~~`:
-
-```
-02:25:22.2059 [DEBUG] uart0: [+17.25s host +80ms virt 80ms virt from start] core freq at 0 Hz
-02:25:22.2065 [DEBUG] uart0: [+0.61ms host +0s virt 80ms virt from start]   Testing TestInvoke
-02:25:22.4243 [DEBUG] uart0: [+0.22s host +0.2s virt 0.28s virt from start]   Ran successfully
-02:25:22.4244 [DEBUG] uart0: [+42µs host +0s virt 0.28s virt from start]
-02:25:22.4245 [DEBUG] uart0: [+0.15ms host +0s virt 0.28s virt from start]   1/1 tests passed
-02:25:22.4247 [DEBUG] uart0: [+62µs host +0s virt 0.28s virt from start]   ~~~ALL TESTS PASSED~~~
-02:25:22.4251 [DEBUG] uart0: [+8µs host +0s virt 0.28s virt from start]
-02:25:22.4252 [DEBUG] uart0: [+0.39ms host +0s virt 0.28s virt from start]
-02:25:22.4253 [DEBUG] uart0: [+0.16ms host +0s virt 0.28s virt from start]   Progam has exited with code:0x00000000
-```
-
-## Building for Ambiq Micro Apollo3Blue EVB using Make
-
-Follow these steps to get the micro_speech yes example working on Apollo 3 EVB:
-
-1.  Make sure to run the "Building Portable Reference Code using Make" section
-    before performing the following steps.
-2.  Compile the project with the following command: `make -f
-    tensorflow/lite/experimental/micro/tools/make/Makefile TARGET=apollo3evb
-    micro_speech_bin`
-3.  Install [Segger JLink tools](https://www.segger.com/downloads/jlink/)
-4.  Connect the Apollo3 EVB (with mic shield in slot 3 of Microbus Shield board)
-    to the computer and power it on.
-5.  Start the GDB server in a new terminal with the following command:
-    `JLinkGDBServer -select USB -device AMA3B1KK-KBR -endian little -if SWD
-    -speed 1000 -noir -noLocalhostOnly`
-    1.  The command has run successfully if you see the message "Waiting for
-        GDB connection"
-6.  Back in the original terminal, run the program via the debugger:
-    1.  Navigate to
-        tensorflow/lite/experimental/micro/examples/micro_speech/apollo3evb
-    2.  Start gdb by entering the following command: `arm-none-eabi-gdb`
-    3.  Run the command script by entering the following command: `source
-        micro_speech.cmd`. This script does the following:
-        1.  Loads the binary created in step 2
-        2.  Resets the device
-        3.  Begins program execution
-        4.  Press Ctrl+C to exit
-    4.  The EVB LEDs will indicate detection:
-        1.  LED0 (rightmost LED) - ON when the digital mic interface is
-            initialized
-        2.  LED1 - Toggles after each inference
-        3.  LED2 through LED4 - "Ramp ON" when "Yes" is detected
-    5.  Say "Yes"
-
-### Additional Apollo3 Instructions
-
-To flash a part with JFlash Lite, do the following:
-
-1.  At the command line: JFlashLiteExe
-2.  Device = AMA3B1KK-KBR
-3.  Interface = SWD at 1000 kHz
-4.  Data file =
-    `tensorflow/lite/experimental/micro/tools/make/gen/apollo3evb_cortex-m4/bin/micro_speech.bin`
-5.  Prog Addr = 0x0000C000
-
-## Building for the Eta Compute ECM3531 EVB using Make
-
-1.  Follow the instructions at
-    [TensorFlow Micro Speech](https://github.com/tensorflow/tensorflow/tree/master/tensorflow/lite/experimental/micro/examples/micro_speech#getting-started)
-    to download the TensorFlow source code and the support libraries \(but do
-    not run the make command shown there.\)
-2.  Download the Eta Compute SDK, version 0.0.17. Contact info@etacompute.com
-3.  You will need the Arm compiler arm-none-eabi-gcc, version 7.3.1
-    20180622, release ARM/embedded-7-branch revision 261907, 7-2018-q2-update.
-    This compiler is downloaded through make.
-4.  Edit the file
-    tensorflow/lite/experimental/micro/tools/make/targets/ecm3531_makefile.inc
-    so that the variables ETA_SDK and GCC_ARM point to the correct directories.
-5.  Compile the code with the command `make -f
-    tensorflow/lite/experimental/micro/tools/make/Makefile TARGET=ecm3531
-    TAGS="CMSIS" test`. This will produce a set of executables in the
-    tensorflow/lite/experimental/micro/tools/make/gen/ecm3531_cortex-m3/bin
-    directory.
-6.  To load an executable into SRAM:
-    1.  Start ocd
-    2.  Run `cd tensorflow/lite/experimental/micro/tools/make/targets/ecm3531`
-    3.  Run `./load_program name_of_executable`, e.g.
-        `./load_program audio_provider_test`
-    4.  Start PuTTY \(Connection type = Serial, Speed = 115200, Data bits = 8,
-        Stop bits = 1, Parity = None\). The following output should appear: \
-        Testing TestAudioProvider \
-        Testing TestTimer \
-        2/2 tests passed \
-        \~\~\~ALL TESTS PASSED\~\~\~ \
-        Execution time \(msec\) = 7
-7.  To load into flash:
-    1.  Edit the variable ETA_LDS_FILE in
-        tensorflow/lite/experimental/micro/tools/make/targets/ecm3531_makefile.inc
-        to point to the ecm3531_flash.lds file
-    2.  Recompile with `make -f
-        tensorflow/lite/experimental/micro/tools/make/Makefile TARGET=ecm3531
-        TAGS="CMSIS" test`
-    3.  Run `cd tensorflow/lite/experimental/micro/tools/make/targets/ecm3531`
-    4.  Run `./flash_program executable_name` to load into flash.
-
-## Implement target optimized kernels
-
-The reference kernels in tensorflow/lite/experimental/micro/kernels are
-implemented in pure C/C++, so they might not utilize hardware-specific
-optimizations such as DSP instructions. The instructions below provide an
-example of how to compile an external library with hardware-specific
-optimizations and link it with the microlite library.
-
-### CMSIS-NN optimized kernels (---under development---)
-
-To utilize the CMSIS-NN optimized kernels, choose your target, e.g. Bluepill,
-and build with:
-
-```
-make -f tensorflow/lite/experimental/micro/tools/make/Makefile TAGS=cmsis-nn TARGET=bluepill test
-```
-
-That will build the microlite library including the CMSIS-NN optimized kernels,
-based on the version downloaded by 'download_dependencies.sh', so make sure you
-have run this script. If you want to use another version of CMSIS, clone it to
-a custom location and run the following command:
-
-```
-make -f tensorflow/lite/experimental/micro/tools/make/Makefile CMSIS_PATH=<CUSTOM_LOCATION> TAGS=cmsis-nn TARGET=bluepill test
-```
-
-To test the optimized kernel(s) on your target platform using mbed (depthwise
-conv in this example), follow these steps:
-
-1.  Clone CMSIS to a custom location (<CUSTOM_LOCATION>) from
-    https://github.com/ARM-software/CMSIS_5.git. Make sure you're on the
-    development branch.
-2.  Generate the project for depthwise conv mbed test: `make -f
-    tensorflow/lite/experimental/micro/tools/make/Makefile TAGS=cmsis-nn
-    CMSIS_PATH=<CUSTOM_LOCATION> generate_depthwise_conv_test_mbed_project`
-3.  Go to the generated mbed folder: `cd
-    tensorflow/lite/experimental/micro/tools/make/gen/linux_x86_64/prj/depthwise_conv_test/mbed`
-4.  Follow the steps in README_MBED.md to set up the environment, or simply run
-    `mbed config root .` and `mbed deploy`, then patch the mbed profiles to use
-    C++11: `python -c 'import fileinput, glob; for filename in
-    glob.glob("mbed-os/tools/profiles/*.json"): for line in
-    fileinput.input(filename, inplace=True):
-    print(line.replace("\"-std=gnu++98\"","\"-std=gnu++11\",
-    \"-fpermissive\""))'`
-5.  Compile and flash. The 'auto' flag requires your target to be plugged in.
-    `mbed compile -m auto -t GCC_ARM -f --source . --source
-    <CUSTOM_LOCATION>/CMSIS/NN/Include --source
-    <CUSTOM_LOCATION>/CMSIS/NN/Source --source
-    <CUSTOM_LOCATION>/CMSIS/DSP/Include --source
-    <CUSTOM_LOCATION>/CMSIS/Core/Include -DARM_MATH_DSP -DARM_MATH_LOOPUNROLL
-    -j8`
-
-## Goals
-
-The design goals are for the framework to be:
-
--   **Readable**: We want embedded software engineers to be able to understand
-    what's required to run ML inference without having to study research papers.
-    We've tried to keep the code base small, modular, and have reference
-    implementations of all operations to help with this.
-
--   **Easy to modify**: We know that there are a lot of different platforms and
-    requirements in the embedded world, and we don't expect to cover all of them
-    in one framework. Instead, we're hoping that it can be a good starting point
-    for developers to build on top of to meet their own needs. For example, we
-    tried to make it easy to replace the implementations of key computational
-    operators that are often crucial for performance, without having to touch
-    the data flow and other runtime code. We want it to make more sense to use
-    our workflow to handle things like model import and less-important
-    operations, and customize the parts that matter, rather than having to
-    reimplement everything in your own engine.
-
--   **Well-tested**: If you're modifying code, you need to know if your changes
-    are correct. Having an easy way to test lets you develop much faster. To
-    help there, we've written tests for all the components, and we've made sure
-    that the tests can be run on almost any platform, with no dependencies apart
-    from the ability to log text to a debug console somewhere. We also provide
-    an easy way to run all the tests on-device as part of an automated test
-    framework, and we use qemu/Renode emulation so that tests can be run even
-    without physical devices present.
-
--   **Easy to integrate**: We want to be as open a system as possible, and use
-    the best code available for each platform. To do that, we're going to rely
-    on projects like
-    [CMSIS-NN](https://www.keil.com/pack/doc/CMSIS/NN/html/index.html),
-    [uTensor](https://github.com/uTensor/uTensor), and other vendor libraries to
-    handle as much performance-critical code as possible. We know that there are
-    an increasing number of options to accelerate neural networks on
-    microcontrollers, so we're aiming to be a good host for deploying those
-    hardware technologies too.
-
--   **Compatible**: We're using the same file schema, interpreter API, and
-    kernel interface as regular TensorFlow Lite, so we leverage the large
-    existing set of tools, documentation, and examples for the project. The
-    biggest barrier to deploying ML models is getting them from a training
-    environment into a form that's easy to run inference on, so we see reusing
-    this rich ecosystem as being crucial to being easily usable. We also hope to
-    integrate this experimental work back into the main codebase in the future.
-
-To meet those goals, we've made some tradeoffs:
-
--   **Simple C++**: To help with readability, our code is written in a modern
-    version of C++, but we generally treat it as a "better C", rather than
-    relying on more complex features such as template meta-programming. As mentioned
-    earlier, we avoid any use of dynamic memory allocation (new/delete) or the
-    standard C/C++ libraries, so we believe this should still be fairly
-    portable. It does mean that some older devices with C-only toolchains won't
-    be supported, but we're hoping that the reference operator implementations
-    (which are simple C-like functions) can still be useful in those cases. The
-    interfaces are also designed to be C-only, so it should be possible to
-    integrate the resulting library with pure C projects.
-
--   **Interpreted**: Code generation is a popular pattern for embedded code,
-    because it gives standalone code that's easy to modify and step through, but
-    we've chosen to go with an interpreted approach. In our internal
-    microcontroller work we've found that using an extremely stripped-down
-    interpreter with almost no dependencies gives us a lot of the same
-    advantages, but is easier to maintain. For example, when new updates come
-    out for the underlying library, you can just merge your local modifications
-    in a single step, rather than having to regenerate new code and then patch
-    in any changes you subsequently made. The coarse granularity of the
-    interpreted primitives means that each operation call typically takes
-    hundreds of thousands of instruction cycles at least, so we don't see
-    noticeable performance gains from avoiding what's essentially a single
-    switch statement at the interpreter level to call each operation. We're
-    still working on improving the packaging, though. For example, we're
-    considering having the ability to snapshot all the source files and headers
-    used for a particular model, compiling the code and data together as a
-    library, and then accessing it through a minimal set of C interface calls
-    that hide the underlying complexity.
-
--   **Flatbuffers**: We represent our models using
-    [the standard flatbuffer schema used by the rest of TensorFlow Lite](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/schema/schema.fbs),
-    with the difference that we always keep it in read-only program memory
-    (typically flash) rather than relying on having a file system to read it
-    from. This is a good fit because flatbuffer's serialized format is designed
-    to be mapped into memory without requiring any extra memory allocations or
-    modifications to access it. All of the functions to read model values work
-    directly on the serialized bytes, and large sections of data like weights
-    are directly accessible as sequential C-style arrays of their data type,
-    with no strides or unpacking needed. We do get a lot of value from using
-    flatbuffers, but there is a cost in complexity. The flatbuffer library code
-    is all inline
-    [inside the main headers](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/schema/schema_generated.h),
-    but it isn't straightforward to inspect its implementation, and the model
-    data structures aren't easy to comprehend from the debugger. The header for
-    the schema itself also has to be periodically updated when new information
-    is added to the file format, though we try to handle that transparently for
-    most developers by checking in a pre-generated version. A short sketch of
-    what this zero-copy access looks like follows this list.
-
--   **Code Duplication**: Some of the code in this prototype largely duplicates
-    the logic in other parts of the TensorFlow Lite code base, for example the
-    operator wrappers. We've tried to share as much as we can between the
-    two interpreters, but there are some assumptions built into the original
-    runtime that make this difficult. We'll be working on modularizing the main
-    interpreter so that we can move to an entirely shared system.
-
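-As a minimal sketch of the zero-copy access described above (`g_model_data` is
-a placeholder name for a model byte array compiled into the binary):
-
-```
-#include "tensorflow/lite/schema/schema_generated.h"
-
-void InspectModel() {
-  // GetModel() just reinterprets the serialized bytes in place: no parsing,
-  // copying, or memory allocation happens here.
-  const tflite::Model* model = tflite::GetModel(g_model_data);
-
-  // Fields such as subgraphs and weight buffers are then read through the
-  // generated accessors, straight out of read-only program memory.
-  auto* subgraph = model->subgraphs()->Get(0);
-  auto* buffers = model->buffers();
-}
-```
-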
-This initial preview release is designed to get early feedback, and is not
-intended to be a final product. It only includes enough operations to run a
-simple keyword recognition model, and the implementations are not optimized.
-We're hoping this will be a good way to get feedback and collaborate to improve
-the framework.
-
-## Generating Project Files
-
-It's not always easy or convenient to use a makefile-based build process,
-especially if you're working on a product that uses a different IDE for the rest
-of its code. To address that, it's possible to generate standalone project
-folders for various popular build systems. These projects are self-contained,
-with only the headers and source files needed by a particular binary, and
-include project files to make loading them into an IDE easy. These can be
-auto-generated for any target you can compile using the main Make system, using
-a command like this:
-
-```
-make -f tensorflow/lite/experimental/micro/tools/make/Makefile TARGET=mbed TAGS="disco_f746ng" generate_micro_speech_mbed_project
-```
-
-This will create a folder in
-`tensorflow/lite/experimental/micro/tools/make/gen/mbed_cortex-m4/prj/micro_speech_main_test/mbed`
-that contains the source and header files, some Mbed configuration files, and a
-README. You should then be able to copy this directory to another machine, and
-use it just like any other Mbed project. There's more information about project
-files [below](#working-with-generated-projects).
-
-## Generating Arduino Libraries
-
-It's possible to use the Arduino Desktop IDE to build TFL Micro targets for
-Arduino devices. The source code is packaged as a .zip archive that you can add
-in the IDE by going to Sketch->Include Library->Add .ZIP Library... Once you've
-added the library, you can go to File->Examples->TensorFlowLite to find simple
-sketches that you can use to build the examples.
-
-You can generate the zip file from the source code here in git by running the
-following script:
-
-```
-tensorflow/lite/experimental/micro/tools/ci_build/test_arduino.sh
-```
-
-The resulting library can be found in `tensorflow/lite/experimental/micro/tools/make/gen/arduino_x86_64/prj/tensorflow_lite.zip`.
-This generates a library that includes all of the examples as sketches, along
-with the framework code you need to run your own examples.
-
-## How to Port TensorFlow Lite Micro to a New Platform
-
-Are you a hardware or operating system provider looking to run machine learning
-on your platform? We're keen to help, and we've had experience helping other
-teams do the same thing, so here are our recommendations.
+The remainder of this document provides guidance on porting TensorFlow Lite for
+Microcontrollers to new platforms. You should read the
+[developer documentation](https://www.tensorflow.org/lite/microcontrollers)
+first.
 
 ### Requirements
 
@@ -653,7 +75,7 @@
 networks, so if a model sticks to these kinds of quantized operations, no
     floating point instructions should be required or executed by the framework.
 
-### Getting Started
+### Getting started
 
 We recommend that you start trying to compile and run one of the simplest tests
 in the framework as your first step. The full TensorFlow codebase can seem
@@ -729,7 +151,7 @@
 but try increasing it if you are running into strange corruption issues that
 might be related to stack overwriting.
 
-### Optimizing for your Platform
+### Optimizing for your platform
 
 The default reference implementations in TensorFlow Lite Micro are written to be
 portable and easy to understand, not fast, so you'll want to replace performance
@@ -741,7 +163,7 @@
 useful to understand how optional components are handled inside the build
 system.
 
-### Code Module Organization
+### Code module organization
 
 We have adopted a system of small modules with platform-specific implementations
 to help with portability. Every module is just a standard `.h` header file
@@ -849,151 +271,7 @@
 equivalent list with specialized versions of those files swapped in if they
 exist.
 
-### Working with Generated Projects
-
-So far, we've recommended that you use the standalone generated projects for
-your system. You might be wondering why we don't just have you check out the
-full [TensorFlow codebase from GitHub](https://github.com/tensorflow/tensorflow/).
-The main reason is that there is a lot more diversity of architectures, IDEs,
-support libraries, and operating systems in the embedded world. Many of the
-toolchains require their own copy of source files, or a list of sources to be
-written to a project file. When a developer working on TensorFlow adds a new
-source file or changes its location, we can't expect her to update multiple
-different project files, many of which she may not have the right software to
-verify the change was correct. That means we have to rely on a central listing
-of source files (which in our case is held in the makefile), and then call a
-tool to generate other project files from those. We could ask embedded
-developers to do this process themselves after downloading the main source, but
-running the makefile requires a Linux system which may not be available, takes
-time, and involves downloading a lot of dependencies. That is why we've opted to
-make regular snapshots of the results of generating these projects for popular
-IDEs and platforms, so that embedded developers have a fast and friendly way to
-start using TensorFlow Lite for Microcontrollers.
-
-This does have the disadvantage that you're no longer working directly on the
-main repository; instead, you have a copy that's outside of source control. We've
-tried to make the copy as similar to the main repo as possible, for example by
-keeping the paths of all source files the same, and ensuring that there are no
-changes between the copied files and the originals, but it still makes it
-tougher to sync as the main repository is updated. There are also multiple
-copies of the source tree, one for each target, so any change you make to one
-copy has to be manually propagated across all the other projects you care about.
-This doesn't matter so much if you're just using the projects as they are to
-build products, but if you want to support a new platform and have the changes
-reflected in the main code base, you'll have to do some extra work.
-
-As an example, think about the `DebugLog()` implementation we discussed adding
-for a new platform earlier. At this point, you have a new version of
-`debug_log.cc` that does what's required, but how can you share that with the
-wider community? The first step is to pick a tag name for your platform. This
-can either be the operating system (for example 'mbed'), the name of a device
-('bluepill'), or some other text that describes it. This should be a short
-string with no spaces or special characters. Log in or create an account on
-GitHub, fork the full
-[TensorFlow codebase](https://github.com/tensorflow/tensorflow/) using the
-'Fork' button on the top left, and then grab your fork by using a command like
-`git clone https://github.com/<your user name>/tensorflow`.
-
-You'll need either Linux, macOS, or Windows with something like Cygwin installed
-to run the next steps, since they involve running a makefile build. Run the
-following commands from a terminal, inside the root of the source folder:
-
-```
-tensorflow/lite/experimental/micro/tools/make/download_dependencies.sh
-make -f tensorflow/lite/experimental/micro/tools/make/Makefile generate_projects
-```
-
-This will take a few minutes, since it has to download some large toolchains for
-the dependencies. Once it has finished, you should see some folders created
-inside a path like
-`tensorflow/lite/experimental/micro/tools/make/gen/linux_x86_64/prj/`. The exact
-path depends on your host operating system, but you should be able to figure it
-out from all the copy commands. These folders contain the generated project and
-source files, with
-`tensorflow/lite/experimental/micro/tools/make/gen/linux_x86_64/prj/keil`
-containing the Keil uVision targets,
-`tensorflow/lite/experimental/micro/tools/make/gen/linux_x86_64/prj/mbed` with
-the Mbed versions, and so on.
-
-If you've got this far, you've successfully set up the project generation flow.
-Now you need to add your specialized implementation of `DebugLog()`. Start by
-creating a folder inside `tensorflow/lite/experimental/micro/` named after the
-tag you picked earlier. Put your `debug_log.cc` file inside this folder, and
-then run this command, with '<your tag>' replaced by the actual folder name:
-
-```
-make -f tensorflow/lite/experimental/micro/tools/make/Makefile TAGS="<your tag>" generate_projects
-```
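-
-Your `debug_log.cc` only needs to implement a single function. As an
-illustration, a bare-metal version might look like this sketch, where
-`UART_WriteByte()` is a hypothetical stand-in for whatever routine your
-platform uses to emit a character:
-
-```
-#include "tensorflow/lite/experimental/micro/debug_log.h"
-
-extern "C" void DebugLog(const char* s) {
-  // Forward each character of the null-terminated string to the platform's
-  // serial output, so the test harness can capture the text.
-  while (*s) {
-    UART_WriteByte(*s++);
-  }
-}
-```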
-
-If your tag name actually refers to a whole target architecture, then you'll use
-TARGET or TARGET_ARCH instead. For example, here's how a simple RISC-V set of
-projects is generated:
-
-```
-make -f tensorflow/lite/experimental/micro/tools/make/Makefile TARGET="riscv32_mcu" generate_projects
-```
-
-This works in the same way as TAGS; it just looks for specialized
-implementations with the same containing folder name.
-
-If you look inside the projects that have been created, you should see that the
-default `DebugLog()` implementation is no longer present at
-`tensorflow/lite/experimental/micro/debug_log.cc`, and instead
-`tensorflow/lite/experimental/micro/<your tag>/debug_log.cc` is being used. Copy
-over the generated project files and try building them in your own IDE. If
-everything works, then you're ready to submit your change.
-
-To do this, run something like:
-
-```
-git add tensorflow/lite/experimental/micro/<your tag>/debug_log.cc
-git commit -a -m "Added DebugLog() support for <your platform>"
-git push origin master
-```
-
-Then go back to `https://github.com/<your account>/tensorflow`, and choose "New
-Pull Request" near the top. You should then be able to go through the standard
-TensorFlow PR process to get your change added to the main repository, and
-available to the rest of the community!
-
-### Supporting a Platform with Makefiles
-
-The changes you've made so far will enable other developers using the generated
-projects to use your platform, but TensorFlow's continuous integration process
-uses makefiles to build frequently and ensure changes haven't broken the build
-process for different systems. If you are able to convert your build procedure
-into something that can be expressed by a makefile, then we can integrate your
-platform into our CI builds and make sure it continues to work.
-
-Fully describing how to do this is beyond the scope of this documentation, but
-the biggest needs are:
-
--   A command-line compiler that can be called for every source file.
--   A list of the arguments to pass into the compiler to build and link all
-    files.
--   The correct linker map files and startup assembler to ensure `main()` gets
-    called.
-
-### Supporting a Platform with Emulation Testing
-
-Integrating your platform into the makefile process should help us make sure
-that it continues to build, but it doesn't guarantee that the results of the
-build process will run correctly. Running tests is something we require to be
-able to say that TensorFlow officially supports a platform, since otherwise we
-can't guarantee that users will have a good experience when they try using it.
-Since physically maintaining a full set of all supported hardware devices isn't
-feasible, we rely on software emulation to run these tests. A good example is
-our
-[STM32F4 'Bluepill' support](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/experimental/micro/testing/test_bluepill_binary.sh),
-which uses [Docker](https://www.docker.com/) and [Renode](https://renode.io/) to
-run built binaries in an emulator. You can use whatever technologies you want,
-the only requirements are that they capture the debug log output of the tests
-being run in the emulator, and parse them for the string that indicates the test
-was successful. These scripts need to run on Ubuntu 18.04, in a bash
-environment, though Docker is available if you need to install extra software or
-have other dependencies.
-
-### Implementing More Optimizations
+### Implementing more optimizations
 
 Clearly, getting debug logging support is only the beginning of the work you'll
 need to do on a particular platform. It's very likely that you'll want to
diff --git a/tensorflow/lite/experimental/micro/examples/hello_world/README.md b/tensorflow/lite/experimental/micro/examples/hello_world/README.md
index 89804d4..1f9cfeb 100644
--- a/tensorflow/lite/experimental/micro/examples/hello_world/README.md
+++ b/tensorflow/lite/experimental/micro/examples/hello_world/README.md
@@ -1,7 +1,7 @@
 # Hello World example
 
 This example is designed to demonstrate the absolute basics of using [TensorFlow
-Lite for Microcontrollers](https://www.tensorflow.org/lite/microcontrollers/overview).
+Lite for Microcontrollers](https://www.tensorflow.org/lite/microcontrollers).
 It includes the full end-to-end workflow of training a model, converting it for
 use with TensorFlow Lite, and running inference on a microcontroller.
 
@@ -14,54 +14,21 @@
 
 ## Table of contents
 
--   [Getting started](#getting-started)
+-   [Understand the model](#understand-the-model)
 -   [Deploy to Arduino](#deploy-to-arduino)
 -   [Deploy to SparkFun Edge](#deploy-to-sparkfun-edge)
 -   [Deploy to STM32F746](#deploy-to-STM32F746)
+-   [Run the tests on a development machine](#run-the-tests-on-a-development-machine)
 
-## Getting started
-
-### Understand the model
+## Understand the model
 
 The sample comes with a pre-trained model. The code used to train and convert
-the model is available as a tutorial in [create_sine_model.ipynb](https://github.com/tensorflow/tensorflow/tree/master/tensorflow/lite/experimental/micro/examples/hello_world/create_sine_model.ipynb).
+the model is available as a tutorial in [create_sine_model.ipynb](https://colab.research.google.com/github/tensorflow/tensorflow/blob/master/tensorflow/lite/experimental/micro/examples/hello_world/create_sine_model.ipynb).
 
 Walk through this tutorial to understand what the model does,
 how it works, and how it was converted for use with TensorFlow Lite for
 Microcontrollers.
 
-### Build the code
-
-To compile and test this example on a desktop Linux or macOS machine, first
-clone the TensorFlow repository from GitHub to a convenient place:
-
-```bash
-git clone --depth 1 https://github.com/tensorflow/tensorflow.git
-```
-
-Next, `cd` into the source directory from a terminal, and then run the following
-command:
-
-```bash
-make -f tensorflow/lite/experimental/micro/tools/make/Makefile test_hello_world_test
-```
-
-This will take a few minutes, and downloads the frameworks the code uses, like
-[CMSIS](https://developer.arm.com/embedded/cmsis) and
-[flatbuffers](https://google.github.io/flatbuffers/). Once that process has
-finished, you should see a series of files get compiled, followed by some
-logging output from a test, which should conclude with `~~~ALL TESTS PASSED~~~`.
-
-If you see this, it means that a small program has been built and run that loads
-the trained TensorFlow model, runs some example inputs through it, and gets the
-expected outputs.
-
-To understand how TensorFlow Lite does this, you can look at the source in
-[hello_world_test.cc](https://github.com/tensorflow/tensorflow/tree/master/tensorflow/lite/experimental/micro/examples/hello_world/hello_world_test.cc).
-It's a fairly small amount of code that creates an interpreter, gets a handle to
-a model that's been compiled into the program, and then invokes the interpreter
-with the model and sample inputs.
-
 ## Deploy to Arduino
 
 The following instructions will help you build and deploy this sample
@@ -71,6 +38,7 @@
 
 The sample has been tested with the following devices:
 
+- [Arduino Nano 33 BLE Sense](https://store.arduino.cc/usa/nano-33-ble-sense-with-headers)
 - [Arduino MKRZERO](https://store.arduino.cc/usa/arduino-mkrzero)
 
 The sample will use PWM to fade an LED on and off according to the model's
@@ -79,34 +47,11 @@
 LED is not attached to a pin with PWM capabilities. In this case, the LED will
 blink instead of fading.
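 
 As a rough illustration of the idea (this is not the actual example code), the
 output handler might map the model's output to a PWM duty cycle like this:
 
 ```
 // y is the model's output, a value in the range [-1, 1]. Map it to a duty
 // cycle between 0 and 255 and write it to the LED pin.
 void HandleOutput(float y) {
   int brightness = static_cast<int>(127.5f * (y + 1.0f));
   analogWrite(LED_BUILTIN, brightness);
 }
 ```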
 
-### Obtain and import the library
+### Install the Arduino_TensorFlowLite library
 
-To use this sample application with Arduino, we've created an Arduino library
-that includes it as an example that you can open in the Arduino Desktop IDE.
-
-Download the current nightly build of the library: [hello_world.zip](https://storage.googleapis.com/tensorflow-nightly/github/tensorflow/tensorflow/lite/experimental/micro/tools/make/gen/arduino_x86_64/prj/hello_world/hello_world.zip)
-
-Next, import this zip file into the Arduino Desktop IDE by going to `Sketch ->
-Include Library -> Add .ZIP Library...`.
-
-#### Building the library
-
-If you need to build the library from source (for example, if you're making
-modifications to the code), run this command to generate a zip file containing
-the required source files:
-
-```
-make -f tensorflow/lite/experimental/micro/tools/make/Makefile TARGET=arduino TAGS="" generate_hello_world_arduino_library_zip
-```
-
-A zip file will be created at the following location:
-
-```
-tensorflow/lite/experimental/micro/tools/make/gen/arduino_x86_64/prj/hello_world/hello_world.zip
-```
-
-You can then import this zip file into the Arduino Desktop IDE by going to
-`Sketch -> Include Library -> Add .ZIP Library...`.
+This example application is included as part of the official TensorFlow Lite
+Arduino library. To install it, open the Arduino library manager in
+`Tools -> Manage Libraries...` and search for `Arduino_TensorFlowLite`.
 
 ### Load and run the example
 
@@ -114,7 +59,7 @@
 example near the bottom of the list named `TensorFlowLite:hello_world`. Select
 it and click `hello_world` to load the example.
 
-Use the Arduino Desktop IDE to build and upload the example. Once it is running,
+Use the Arduino IDE to build and upload the example. Once it is running,
 you should see the built-in LED on your device flashing.
 
 The Arduino Desktop IDE includes a plotter that we can use to display the sine
@@ -369,3 +314,33 @@
 To stop viewing the debug output with `screen`, hit `Ctrl+A`, immediately
 followed by the `K` key, then hit the `Y` key.
 
+### Run the tests on a development machine
+
+To compile and test this example on a desktop Linux or macOS machine, first
+clone the TensorFlow repository from GitHub to a convenient place:
+
+```bash
+git clone --depth 1 https://github.com/tensorflow/tensorflow.git
+```
+
+Next, `cd` into the source directory from a terminal, and then run the following
+command:
+
+```bash
+make -f tensorflow/lite/experimental/micro/tools/make/Makefile test_hello_world_test
+```
+
+This will take a few minutes, and downloads the frameworks the code uses. Once the
+process has finished, you should see a series of files get compiled, followed by
+some logging output from a test, which should conclude with
+`~~~ALL TESTS PASSED~~~`.
+
+If you see this, it means that a small program has been built and run that loads
+the trained TensorFlow model, runs some example inputs through it, and gets the
+expected outputs.
+
+To understand how TensorFlow Lite does this, you can look at the source in
+[hello_world_test.cc](https://github.com/tensorflow/tensorflow/tree/master/tensorflow/lite/experimental/micro/examples/hello_world/hello_world_test.cc).
+It's a fairly small amount of code that creates an interpreter, gets a handle to
+a model that's been compiled into the program, and then invokes the interpreter
+with the model and sample inputs.
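+
+As a rough sketch of what that sequence looks like in code (simplified from the
+actual test; `g_model_data` is a placeholder name for the model array compiled
+into the program, and the arena size is illustrative):
+
+```
+#include "tensorflow/lite/experimental/micro/kernels/all_ops_resolver.h"
+#include "tensorflow/lite/experimental/micro/micro_error_reporter.h"
+#include "tensorflow/lite/experimental/micro/micro_interpreter.h"
+#include "tensorflow/lite/schema/schema_generated.h"
+
+float RunInference(float x) {
+  tflite::MicroErrorReporter micro_error_reporter;
+  const tflite::Model* model = tflite::GetModel(g_model_data);
+  tflite::ops::micro::AllOpsResolver resolver;
+
+  // All tensor memory comes from this caller-supplied arena; the interpreter
+  // performs no dynamic allocation of its own.
+  static uint8_t tensor_arena[2 * 1024];
+  tflite::MicroInterpreter interpreter(model, resolver, tensor_arena,
+                                       sizeof(tensor_arena),
+                                       &micro_error_reporter);
+  interpreter.AllocateTensors();
+
+  // Feed the sample input, run inference, and read back the prediction.
+  interpreter.input(0)->data.f[0] = x;
+  interpreter.Invoke();
+  return interpreter.output(0)->data.f[0];
+}
+```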
diff --git a/tensorflow/lite/experimental/micro/examples/magic_wand/README.md b/tensorflow/lite/experimental/micro/examples/magic_wand/README.md
index da3576a..97f9bdd 100644
--- a/tensorflow/lite/experimental/micro/examples/magic_wand/README.md
+++ b/tensorflow/lite/experimental/micro/examples/magic_wand/README.md
@@ -13,41 +13,7 @@
 -   [Getting started](#getting-started)
 -   [Deploy to Arduino](#deploy-to-arduino)
 -   [Deploy to SparkFun Edge](#deploy-to-sparkfun-edge)
-
-## Getting started
-
-### Build the code
-
-To compile and test this example on a desktop Linux or macOS machine, first
-clone the TensorFlow repository from GitHub to a convenient place:
-
-```bash
-git clone --depth 1 https://github.com/tensorflow/tensorflow.git
-```
-
-Next, put this folder under the
-tensorflow/tensorflow/lite/experimental/micro/examples/ folder, then `cd` into
-the source directory from a terminal and run the following command:
-
-```bash
-make -f tensorflow/lite/experimental/micro/tools/make/Makefile test_magic_wand_test
-```
-
-This will take a few minutes, and downloads the frameworks the code uses, like
-[CMSIS](https://developer.arm.com/embedded/cmsis) and
-[flatbuffers](https://google.github.io/flatbuffers/). Once that process has
-finished, you should see a series of files get compiled, followed by some
-logging output from a test, which should conclude with `~~~ALL TESTS PASSED~~~`.
-
-If you see this, it means that a small program has been built and run that loads
-the trained TensorFlow model, runs some example inputs through it, and gets the
-expected outputs.
-
-To understand how TensorFlow Lite does this, you can look at the source in
-[hello_world_test.cc](https://github.com/tensorflow/tensorflow/tree/master/tensorflow/lite/experimental/micro/examples/hello_world/hello_world_test.cc).
-It's a fairly small amount of code that creates an interpreter, gets a handle to
-a model that's been compiled into the program, and then invokes the interpreter
-with the model and sample inputs.
+-   [Run the tests on a development machine](#run-the-tests-on-a-development-machine)
 
 ## Deploy to Arduino
 
@@ -58,15 +24,11 @@
 
 - [Arduino Nano 33 BLE Sense](https://store.arduino.cc/usa/nano-33-ble-sense-with-headers)
 
-### Obtain and import the library
+### Install the Arduino_TensorFlowLite library
 
-To use this sample application with Arduino, we've created an Arduino library
-that includes it as an example that you can open in the Arduino Desktop IDE.
-
-Download the current nightly build of the library: [hello_world.zip](https://storage.googleapis.com/tensorflow-nightly/github/tensorflow/tensorflow/lite/experimental/micro/tools/make/gen/arduino_x86_64/prj/magic_wand/magic_wand.zip)
-
-Next, import this zip file into the Arduino Desktop IDE by going to `Sketch ->
-Include Library -> Add .ZIP Library...`.
+This example application is included as part of the official TensorFlow Lite
+Arduino library. To install it, open the Arduino library manager in
+`Tools -> Manage Libraries...` and search for `Arduino_TensorFlowLite`.
 
 ### Install and patch the accelerometer driver
 
@@ -349,3 +311,36 @@
 
 To stop viewing the debug output with `screen`, hit `Ctrl+A`, immediately
 followed by the `K` key, then hit the `Y` key.
+
+## Run the tests on a development machine
+
+To compile and test this example on a desktop Linux or macOS machine, first
+clone the TensorFlow repository from GitHub to a convenient place:
+
+```bash
+git clone --depth 1 https://github.com/tensorflow/tensorflow.git
+```
+
+Next, put this folder under the
+tensorflow/tensorflow/lite/experimental/micro/examples/ folder, then `cd` into
+the source directory from a terminal and run the following command:
+
+```bash
+make -f tensorflow/lite/experimental/micro/tools/make/Makefile test_magic_wand_test
+```
+
+This will take a few minutes, and downloads the frameworks the code uses, like
+[CMSIS](https://developer.arm.com/embedded/cmsis) and
+[flatbuffers](https://google.github.io/flatbuffers/). Once that process has
+finished, you should see a series of files get compiled, followed by some
+logging output from a test, which should conclude with `~~~ALL TESTS PASSED~~~`.
+
+If you see this, it means that a small program has been built and run that loads
+the trained TensorFlow model, runs some example inputs through it, and gets the
+expected outputs.
+
+To understand how TensorFlow Lite does this, you can look at the source in
+[magic_wand_test.cc](https://github.com/tensorflow/tensorflow/tree/master/tensorflow/lite/experimental/micro/examples/magic_wand/magic_wand_test.cc).
+It's a fairly small amount of code that creates an interpreter, gets a handle to
+a model that's been compiled into the program, and then invokes the interpreter
+with the model and sample inputs.
diff --git a/tensorflow/lite/experimental/micro/examples/micro_speech/README.md b/tensorflow/lite/experimental/micro/examples/micro_speech/README.md
index 5ddddfc..7b7c73b 100644
--- a/tensorflow/lite/experimental/micro/examples/micro_speech/README.md
+++ b/tensorflow/lite/experimental/micro/examples/micro_speech/README.md
@@ -1,4 +1,4 @@
-# Micro Speech example
+# Micro speech example
 
 This example shows how you can use TensorFlow Lite to run a 20 kilobyte neural
 network model to recognize keywords in speech. It's designed to run on systems
@@ -16,96 +16,15 @@
 ## Table of contents
 
 -   [Getting started](#getting-started)
--   [Run on macOS](#run-on-macos)
 -   [Deploy to Arduino](#deploy-to-arduino)
 -   [Deploy to SparkFun Edge](#deploy-to-sparkfun-edge)
 -   [Deploy to STM32F746](#deploy-to-STM32F746)
 -   [Deploy to NXP FRDM K66F](#deploy-to-nxp-frdm-k66f)
+-   [Run on macOS](#run-on-macos)
+-   [Run the tests on a development machine](#run-the-tests-on-a-development-machine)
 -   [Calculating the input to the neural network](#calculating-the-input-to-the-neural-network)
 -   [Train your own model](#train-your-own-model)
 
-
-## Getting started
-
-This code has been tested on the following devices:
-
-* [SparkFun Edge](https://sparkfun.com/products/15170)
-* [Arduino Nano 33 BLE Sense](https://store.arduino.cc/usa/nano-33-ble-sense-with-headers)
-* [ST Microelectronics STM32F746G Discovery kit](https://os.mbed.com/platforms/ST-Discovery-F746NG/)
-* [NXP FRDM K66F](https://www.nxp.com/design/development-boards/freedom-development-boards/mcu-boards/freedom-development-platform-for-kinetis-k66-k65-and-k26-mcus:FRDM-K66F)
-
-This readme contains instructions for building the code on Linux and macOS, and
-deploying the code to the above microcontroller platforms and macOS.
-
-### Build the tests
-
-To compile and test this example on a desktop Linux or macOS machine, download
-[the TensorFlow source code](https://github.com/tensorflow/tensorflow), `cd`
-into the source directory from a terminal, and then run the following command:
-
-```
-make -f tensorflow/lite/experimental/micro/tools/make/Makefile test_micro_speech_test
-```
-
-This will take a few minutes, and downloads frameworks the code uses like
-[CMSIS](https://developer.arm.com/embedded/cmsis) and
-[flatbuffers](https://google.github.io/flatbuffers/). Once that process has
-finished, you should see a series of files get compiled, followed by some
-logging output from a test, which should conclude with `~~~ALL TESTS PASSED~~~`.
-
-If you see this, it means that a small program has been built and run that loads
-the trained TensorFlow model, runs some example inputs through it, and got the
-expected outputs.
-
-To understand how TensorFlow Lite does this, you can look at the source in
-[micro_speech_test.cc](https://github.com/tensorflow/tensorflow/tree/master/tensorflow/lite/experimental/micro/examples/micro_speech/micro_speech_test.cc).
-It's a fairly small amount of code that creates an interpreter, gets a handle to
-a model that's been compiled into the program, and then invokes the interpreter
-with the model and sample inputs.
-
-### Run on macOS
-
-The example contains an audio provider compatible with macOS. If you have access
-to a Mac, you can run the example on your development machine.
-
-First, use the following command to build it:
-
-```
-make -f tensorflow/lite/experimental/micro/tools/make/Makefile micro_speech
-```
-
-Once the build completes, you can run the example with the following command:
-
-```
-tensorflow/lite/experimental/micro/tools/make/gen/osx_x86_64/bin/micro_speech
-```
-
-You might see a pop-up asking for microphone access. If so, grant it, and the
-program will start.
-
-Try saying "yes" and "no". You should see output that looks like the following:
-
-```
-Heard yes (201) @4056ms
-Heard no (205) @6448ms
-Heard unknown (201) @13696ms
-Heard yes (205) @15000ms
-Heard yes (205) @16856ms
-Heard unknown (204) @18704ms
-Heard no (206) @21000ms
-```
-
-The number after each detected word is its score. By default, the recognize
-commands component only considers matches as valid if their score is over 200,
-so all of the scores you see will be at least 200.
-
-The number after the score is the number of milliseconds since the program was
-started.
-
-If you don't see any output, make sure your Mac's internal microphone is
-selected in the Mac's *Sound* menu, and that its input volume is turned up high
-enough.
-
 ## Deploy to Arduino
 
 The following instructions will help you build and deploy this sample
@@ -120,34 +39,11 @@
 microphone, you'll need to implement your own `audio_provider.cc`. It also has a
 built-in LED, which is used to indicate that a word has been recognized.
 
-### Obtain and import the library
+### Install the Arduino_TensorFlowLite library
 
-To use this sample application with Arduino, we've created an Arduino library
-that includes it as an example that you can open in the Arduino IDE.
-
-Download the current nightly build of the library: [micro_speech.zip](https://storage.googleapis.com/tensorflow-nightly/github/tensorflow/tensorflow/lite/experimental/micro/tools/make/gen/arduino_x86_64/prj/micro_speech/micro_speech.zip)
-
-Next, import this zip file into the Arduino IDE by going to
-`Sketch -> Include Library -> Add .ZIP Library...`.
-
-#### Build the library
-
-If you need to build the library from source (for example, if you're making
-modifications to the code), run this command to generate a zip file containing
-the required source files:
-
-```
-make -f tensorflow/lite/experimental/micro/tools/make/Makefile TARGET=arduino TAGS="portable_optimized" generate_micro_speech_arduino_library_zip
-```
-
-A zip file will be created at the following location:
-
-```
-tensorflow/lite/experimental/micro/tools/make/gen/arduino_x86_64/prj/micro_speech/micro_speech.zip
-```
-
-You can then import this zip file into the Arduino IDE by going to
-`Sketch -> Include Library -> Add .ZIP Library...`.
+This example application is included as part of the official TensorFlow Lite
+Arduino library. To install it, open the Arduino library manager in
+`Tools -> Manage Libraries...` and search for `Arduino_TensorFlowLite`.
 
 ### Load and run the example
 
@@ -502,6 +398,75 @@
     in black color. If there is no output on the serial port, you can connect
     headphones to the headphone port to check whether the audio loopback path
     is working.
 
+## Run on macOS
+
+The example contains an audio provider compatible with macOS. If you have access
+to a Mac, you can run the example on your development machine.
+
+First, use the following command to build it:
+
+```bash
+make -f tensorflow/lite/experimental/micro/tools/make/Makefile micro_speech
+```
+
+Once the build completes, you can run the example with the following command:
+
+```bash
+tensorflow/lite/experimental/micro/tools/make/gen/osx_x86_64/bin/micro_speech
+```
+
+You might see a pop-up asking for microphone access. If so, grant it, and the
+program will start.
+
+Try saying "yes" and "no". You should see output that looks like the following:
+
+```
+Heard yes (201) @4056ms
+Heard no (205) @6448ms
+Heard unknown (201) @13696ms
+Heard yes (205) @15000ms
+Heard yes (205) @16856ms
+Heard unknown (204) @18704ms
+Heard no (206) @21000ms
+```
+
+The number after each detected word is its score. By default, the
+`RecognizeCommands` component only considers matches valid if their score is
+over 200, so all of the scores you see will be at least 200.
+
+The number after the score is the number of milliseconds since the program was
+started.
+
+If you don't see any output, make sure your Mac's internal microphone is
+selected in the Mac's *Sound* menu, and that its input volume is turned up high
+enough.
+
+## Run the tests on a development machine
+
+To compile and test this example on a desktop Linux or macOS machine, download
+[the TensorFlow source code](https://github.com/tensorflow/tensorflow), `cd`
+into the source directory from a terminal, and then run the following command:
+
+```bash
+make -f tensorflow/lite/experimental/micro/tools/make/Makefile test_micro_speech_test
+```
+
+This will take a few minutes, and will download frameworks the code uses, like
+[CMSIS](https://developer.arm.com/embedded/cmsis) and
+[flatbuffers](https://google.github.io/flatbuffers/). Once that process has
+finished, you should see a series of files get compiled, followed by some
+logging output from a test, which should conclude with `~~~ALL TESTS PASSED~~~`.
+
+If you see this, it means that a small program has been built and run that loads
+the trained TensorFlow model, runs some example inputs through it, and gets the
+expected outputs.
+
+To understand how TensorFlow Lite does this, you can look at the source in
+[micro_speech_test.cc](https://github.com/tensorflow/tensorflow/tree/master/tensorflow/lite/experimental/micro/examples/micro_speech/micro_speech_test.cc).
+It's a fairly small amount of code that creates an interpreter, gets a handle to
+a model that's been compiled into the program, and then invokes the interpreter
+with the model and sample inputs.
+
 ## Calculating the input to the neural network
 
 The TensorFlow Lite model doesn't take in raw audio sample data. Instead it
diff --git a/tensorflow/lite/experimental/micro/examples/person_detection/README.md b/tensorflow/lite/experimental/micro/examples/person_detection/README.md
index e111fde..7cda234 100644
--- a/tensorflow/lite/experimental/micro/examples/person_detection/README.md
+++ b/tensorflow/lite/experimental/micro/examples/person_detection/README.md
@@ -8,42 +8,10 @@
 -   [Getting started](#getting-started)
 -   [Running on Arduino](#running-on-arduino)
 -   [Running on SparkFun Edge](#running-on-sparkfun-edge)
+-   [Run the tests on a development machine](#run-the-tests-on-a-development-machine)
 -   [Debugging image capture](#debugging-image-capture)
 -   [Training your own model](#training-your-own-model)
 
-## Getting started
-
-To compile and test this example on a desktop Linux or MacOS machine, download
-[the TensorFlow source code](https://github.com/tensorflow/tensorflow), `cd`
-into the source directory from a terminal, and then run the following command:
-
-```
-make -f tensorflow/lite/experimental/micro/tools/make/Makefile
-```
-
-This will take a few minutes, and downloads frameworks the code uses like
-[CMSIS](https://developer.arm.com/embedded/cmsis) and
-[flatbuffers](https://google.github.io/flatbuffers/). Once that process has
-finished, run:
-
-```
-make -f tensorflow/lite/experimental/micro/tools/make/Makefile test_person_detection_test
-```
-
-You should see a series of files get compiled, followed by some logging output
-from a test, which should conclude with `~~~ALL TESTS PASSED~~~`. If you see
-this, it means that a small program has been built and run that loads a trained
-TensorFlow model, runs some example images through it, and got the expected
-outputs. This particular test runs images with a and without a person in them,
-and checks that the network correctly identifies them.
-
-To understand how TensorFlow Lite does this, you can look at the `TestInvoke()`
-function in
-[person_detection_test.cc](https://github.com/tensorflow/tensorflow/tree/master/tensorflow/lite/experimental/micro/examples/person_detection/person_detection_test.cc).
-It's a fairly small amount of code, creating an interpreter, getting a handle to
-a model that's been compiled into the program, and then invoking the interpreter
-with the model and sample inputs.
-
 ## Running on Arduino
 
 The following instructions will help you build and deploy this sample
@@ -72,17 +40,13 @@
 |SDA|A4|
 |SCL|A5|
 
-### Obtain and import the library
+### Install the Arduino_TensorFlowLite library
 
-To use this sample application with Arduino, we've created an Arduino library
-that includes it as an example that you can open in the Arduino IDE.
+This example application is included as part of the official TensorFlow Lite
+Arduino library. To install it, open the Arduino library manager in
+`Tools -> Manage Libraries...` and search for `Arduino_TensorFlowLite`.
 
-Download the current nightly build of the library: [micro_speech.zip](https://storage.googleapis.com/tensorflow-nightly/github/tensorflow/tensorflow/lite/experimental/micro/tools/make/gen/arduino_x86_64/prj/micro_speech/micro_speech.zip)
-
-Next, import this zip file into the Arduino IDE by going to
-`Sketch -> Include Library -> Add .ZIP Library...`.
-
-### Install libraries
+### Install other libraries
 
 In addition to the TensorFlow library, you'll also need to install two
 libraries:
@@ -333,6 +297,39 @@
 To stop viewing the debug output with `screen`, hit `Ctrl+A`, immediately
 followed by the `K` key, then hit the `Y` key.
 
+## Run the tests on a development machine
+
+To compile and test this example on a desktop Linux or macOS machine, download
+[the TensorFlow source code](https://github.com/tensorflow/tensorflow), `cd`
+into the source directory from a terminal, and then run the following command:
+
+```bash
+make -f tensorflow/lite/experimental/micro/tools/make/Makefile
+```
+
+This will take a few minutes, and will download frameworks the code uses, like
+[CMSIS](https://developer.arm.com/embedded/cmsis) and
+[flatbuffers](https://google.github.io/flatbuffers/). Once that process has
+finished, run:
+
+```bash
+make -f tensorflow/lite/experimental/micro/tools/make/Makefile test_person_detection_test
+```
+
+You should see a series of files get compiled, followed by some logging output
+from a test, which should conclude with `~~~ALL TESTS PASSED~~~`. If you see
+this, it means that a small program has been built and run that loads a trained
+TensorFlow model, runs some example images through it, and gets the expected
+outputs. This particular test runs images both with and without a person in
+them, and checks that the network correctly identifies them.
+
+To understand how TensorFlow Lite does this, you can look at the `TestInvoke()`
+function in
+[person_detection_test.cc](https://github.com/tensorflow/tensorflow/tree/master/tensorflow/lite/experimental/micro/examples/person_detection/person_detection_test.cc).
+It's a fairly small amount of code, creating an interpreter, getting a handle to
+a model that's been compiled into the program, and then invoking the interpreter
+with the model and sample inputs.
+
 ## Debugging image capture
 When the sample is running, check the LEDs to determine whether the inference is
 running correctly.  If the red light is stuck on, it means there was an error
diff --git a/tensorflow/lite/experimental/micro/examples/person_detection/training_a_model.md b/tensorflow/lite/experimental/micro/examples/person_detection/training_a_model.md
index ef652e0..24067fc 100644
--- a/tensorflow/lite/experimental/micro/examples/person_detection/training_a_model.md
+++ b/tensorflow/lite/experimental/micro/examples/person_detection/training_a_model.md
@@ -1,16 +1,16 @@
-== Training a model
+## Training a model
 
 The following document will walk you through the process of training your own
 250 KB embedded vision model using scripts that are easy to run. You can use
-either the https://arxiv.org/abs/1906.05721[Visual Wake Words dataset] for
-person detection, or choose one of the http://cocodataset.org/#explore[80
-categories from the MSCOCO dataset].
+either the [Visual Wake Words dataset](https://arxiv.org/abs/1906.05721) for
+person detection, or choose one of the [80
+categories from the MSCOCO dataset](http://cocodataset.org/#explore).
 
 This model will take several days to train on a powerful machine with GPUs. We
-recommend using a https://cloud.google.com/deep-learning-vm/[Google Cloud Deep
-Learning VM].
+recommend using a [Google Cloud Deep
+Learning VM](https://cloud.google.com/deep-learning-vm/).
 
-=== Training framework choice
+### Training framework choice
 
 Keras is the recommended interface for building models in TensorFlow, but when
 the person detector model was being created it didn't yet support all the
@@ -20,7 +20,7 @@
 Keras instructions in the future.
 
 The model definitions for Slim are part of the
-https://github.com/tensorflow/models[TensorFlow models repository], so to get
+[TensorFlow models repository](https://github.com/tensorflow/models), so to get
 started you'll need to download it from GitHub using a command like this:
 
 ```
@@ -55,16 +55,16 @@
 If you see import errors running the slim scripts, you should make sure the
 `PYTHONPATH` is set up correctly, and that contextlib2 has been installed. You
 can find more general information on tf.slim in the
-https://github.com/tensorflow/models/tree/master/research/slim[repository's
-README].
+[repository's
+README](https://github.com/tensorflow/models/tree/master/research/slim).
 
-=== Building the dataset
+### Building the dataset
 
 In order to train a person detector model, we need a large collection of images
 that are labeled depending on whether or not they have people in them. The
 ImageNet one-thousand class data that's widely used for training image
 classifiers doesn't include labels for people, but luckily the
-http://cocodataset.org/#home[COCO dataset] does. You can also download this
+[COCO dataset](http://cocodataset.org/#home) does. You can also download this
 data without manually registering, and Slim provides a convenient script to
 grab it automatically:
 
@@ -103,13 +103,13 @@
 Don't be surprised if this takes up to twenty minutes to complete. When it's
 done, you'll have a set of TFRecords in `coco/processed` holding the labeled
 image information. This data was created by Aakanksha Chowdhery and is known as
-the https://arxiv.org/abs/1906.05721[Visual Wake Words dataset]. It's designed
+the [Visual Wake Words dataset](https://arxiv.org/abs/1906.05721). It's designed
 to be useful for benchmarking and testing embedded computer vision, since it
 represents a very common task that we need to accomplish with tight resource
 constraints. We're hoping to see it drive even better models for this and
 similar tasks.
 
-=== Training the model
+### Training the model
 
 One of the nice things about using tf.slim to handle the training is that the
 parameters you commonly need to modify are available as command line arguments,
@@ -158,7 +158,7 @@
 values from 0 to 255 integers into -1.0 to +1.0 floating point numbers (though
 we'll be quantizing those after training).
 - The
-https://himax.com.tw/products/cmos-image-sensor/image-sensors/hm01b0/[HM01B0]
+[HM01B0](https://himax.com.tw/products/cmos-image-sensor/image-sensors/hm01b0/)
 camera we're using on the SparkFun Edge board is monochrome, so to get the best
 results we have to train our model on black and white images too. We pass in
 the `--input_grayscale` flag to enable that preprocessing.
@@ -204,7 +204,7 @@
 check back. This kind of variation is a lot easier to see in a graph, which is
 one of the main reasons to try TensorBoard.
 
-=== TensorBoard
+### TensorBoard
 
 TensorBoard is a web application that lets you view data visualizations from
 TensorFlow training sessions, and it's included by default in most cloud
@@ -220,36 +220,24 @@
 command line tool, and point your browser to http://localhost:6006 (or the
 address of the machine you're running it on).
 
-After navigating to the tensorboard address or opening the session through
-Google Cloud, you should see a page that looks something like this. It may take
-a little while for the graphs to have anything useful in them, since the script
-only saves summaries every five minutes. This screenshot shows the results
-after training for over a day. The most important graph is called 'clone_loss',
-and this shows the progression of the same loss value that's displayed on the
-logging output. As you can see in this example it fluctuates a lot, but the
+It may take a little while for the graphs to have anything useful in them, since
+the script only saves summaries every five minutes. The most important graph is
+called 'clone_loss', and this shows the progression of the same loss value
+that's displayed on the logging output. It fluctuates a lot, but the
 overall trend is downwards over time. If you don't see this sort of progression
 after a few hours of training, it's a good sign that your model isn't
 converging to a good solution, and you may need to debug what's going wrong
 either with your dataset or the training parameters.
 
-[[tensorboard_graphs]]
-.Example screenshot of graphs in Tensorboard
-image::images/ch10/tensorboard_graphs.png["Training graphs in Tensorboard"]
-
 TensorBoard defaults to the 'Scalars' tab when it opens, but the other section
-that can be useful during training is 'Images' (Figure 9-13). This shows a
+that can be useful during training is 'Images'. This shows a
 random selection of the pictures the model is currently being trained on,
-including any distortions and other preprocessing. In this screenshot you can
-see that the image has been flipped, and that it's been converted to grayscale
-before being fed to the model. This information isn't as essential as the loss
-graphs, but it can be useful to ensure the dataset is what you expect, and it
-is interesting to see the examples updating as training progresses.
+including any distortions and other preprocessing. This information isn't as
+essential as the loss graphs, but it can be useful to ensure the dataset is what
+you expect, and it is interesting to see the examples updating as training
+progresses.
 
-[[tensorboard_images]]
-.Example screenshot of images in Tensorboard
-image::images/ch10/tensorboard_images.png["Training images in Tensorboard"]
-
-=== Evaluating the model
+### Evaluating the model
 
 The loss function correlates with how well your model is training, but it isn't
 a direct, understandable metric. What we really care about is how many people
@@ -288,14 +276,14 @@
 a fully-trained model to achieve an accuracy of around 84% after one million
 steps, and show a loss of around 0.4.
 
-=== Exporting the model to TensorFlow Lite
+### Exporting the model to TensorFlow Lite
 
 When the model has trained to an accuracy you're happy with, you'll need to
 convert the results from the TensorFlow training environment into a form you
 can run on an embedded device. As we've seen in previous chapters, this can be
 a complex process, and tf.slim adds a few of its own wrinkles too.
 
-==== Exporting to a GraphDef protobuf file
+#### Exporting to a GraphDef protobuf file
 
 Slim generates the architecture from the model_name every time one of its
 scripts is run, so for a model to be used outside of Slim it needs to be saved
@@ -316,7 +304,7 @@
 your home folder. This contains the layout of the operations in the model, but
 doesn't yet have any of the weight data.
 
-==== Freezing the weights
+#### Freezing the weights
 
 The process of storing the trained weights together with the operation graph is
 known as freezing. This converts all of the variables in the graph to
@@ -337,7 +325,7 @@
 
 After this, you should see a file called 'vww_96_grayscale_frozen.pb'.
 
-==== Quantizing and converting to TensorFlow Lite
+#### Quantizing and converting to TensorFlow Lite
 
 Quantization is a tricky and involved process, and it's still very much an
 active area of research, so taking the float graph that we've trained so far
@@ -389,7 +377,7 @@
 open("vww_96_grayscale_quantized.tflite", "wb").write(tflite_quant_model)
 ```
 
-==== Converting into a C source file
+#### Converting into a C source file
 
 The converter writes out a file, but most embedded devices don't have a file
 system. To access the serialized data from our program, we have to compile it
@@ -406,7 +394,7 @@
 You can now replace the existing person_detect_model_data.cc file with the
 version you've trained, and be able to run your own model on embedded devices.
 
-=== Training for other categories
+### Training for other categories
 
 There are over 60 different object types in the MS-COCO dataset, so an easy way
 to customize your model would be to choose one of those instead of 'person'
@@ -433,9 +421,9 @@
 gathered, even if it's much smaller. We don't have an example of this
 yet, but we hope to share one soon.
 
-=== Understanding the architecture
+### Understanding the architecture
 
-https://arxiv.org/abs/1704.04861[MobileNets] are a family of architectures
+[MobileNets](https://arxiv.org/abs/1704.04861) are a family of architectures
 designed to provide good accuracy for as few weight parameters and arithmetic
 operations as possible. There are now multiple versions, but in our case we're
 using the original v1 since it required the smallest amount of RAM at runtime.
diff --git a/tensorflow/lite/g3doc/_book.yaml b/tensorflow/lite/g3doc/_book.yaml
index ae921c0..8cacc92 100644
--- a/tensorflow/lite/g3doc/_book.yaml
+++ b/tensorflow/lite/g3doc/_book.yaml
@@ -93,13 +93,14 @@
 
       - heading: "Microcontrollers"
       - title: "Overview"
-        path: /lite/microcontrollers/overview
+        path: /lite/microcontrollers
       - title: "Get started with microcontrollers"
         path: /lite/microcontrollers/get_started
-      - title: "Build and convert models"
-        path: /lite/microcontrollers/build_convert
       - title: "Understand the C++ library"
         path: /lite/microcontrollers/library
+      - title: "Build and convert models"
+        path: /lite/microcontrollers/build_convert
 
     - name: "Examples"
       contents:
diff --git a/tensorflow/lite/g3doc/guide/get_started.md b/tensorflow/lite/g3doc/guide/get_started.md
index a2c18a8..50f6d03 100644
--- a/tensorflow/lite/g3doc/guide/get_started.md
+++ b/tensorflow/lite/g3doc/guide/get_started.md
@@ -223,9 +223,9 @@
 
 ### Microcontrollers
 
-[TensorFlow Lite for Microcontrollers](../microcontrollers/overview.md) is an
-experimental port of TensorFlow Lite aimed at microcontrollers and other devices
-with only kilobytes of memory.
+[TensorFlow Lite for Microcontrollers](../microcontrollers) is an experimental
+port of TensorFlow Lite aimed at microcontrollers and other devices with only
+kilobytes of memory.
 
 ### Operations
 
diff --git a/tensorflow/lite/g3doc/guide/index.md b/tensorflow/lite/g3doc/guide/index.md
index 2475c7e..bb65823 100644
--- a/tensorflow/lite/g3doc/guide/index.md
+++ b/tensorflow/lite/g3doc/guide/index.md
@@ -34,7 +34,9 @@
 
 ## Get started
 
-To begin working with TensorFlow Lite, visit [Get started](get_started.md).
+To begin working with TensorFlow Lite on mobile devices, visit
+[Get started](get_started.md). If you want to deploy TensorFlow Lite models to
+microcontrollers, visit [Microcontrollers](../microcontrollers).
 
 ## Key features
 
@@ -115,7 +117,6 @@
     TensorFlow Lite.
 *   If you're a mobile developer, visit [Android quickstart](android.md) or
     [iOS quickstart](ios.md).
-*   Learn about
-    [TensorFlow Lite for Microcontrollers](../microcontrollers/overview.md).
+*   Learn about [TensorFlow Lite for Microcontrollers](../microcontrollers).
 *   Explore our [pre-trained models](../models).
 *   Try our [example apps](https://www.tensorflow.org/lite/examples).
diff --git a/tensorflow/lite/g3doc/microcontrollers/build_convert.md b/tensorflow/lite/g3doc/microcontrollers/build_convert.md
index 92fba24..42a35d6 100644
--- a/tensorflow/lite/g3doc/microcontrollers/build_convert.md
+++ b/tensorflow/lite/g3doc/microcontrollers/build_convert.md
@@ -10,9 +10,9 @@
 guidance on designing and training a model to fit in limited memory.
 
 For an end-to-end, runnable example of building and converting a model, see the
-following Jupyter notebook:
+following Colab, which is part of the *Hello World* example:
 
-<a class="button button-primary" href="https://github.com/tensorflow/tensorflow/tree/master/tensorflow/lite/experimental/micro/examples/hello_world/create_sine_model.ipynb">create_sine_model.ipynb</a>
+<a class="button button-primary" href="https://colab.research.google.com/github/tensorflow/tensorflow/blob/master/tensorflow/lite/experimental/micro/examples/hello_world/create_sine_model.ipynb">create_sine_model.ipynb</a>
 
 ## Model conversion
 
@@ -38,9 +38,9 @@
 ```python
 import tensorflow as tf
 converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir)
-converter.optimizations = [tf.lite.Optimize.OPTIMIZE_FOR_SIZE]
-tflite_quant_model = converter.convert()
-open("converted_model.tflite", "wb").write(tflite_quant_model)
+converter.optimizations = [tf.lite.Optimize.DEFAULT]
+quantized_model = converter.convert()
+open("converted_model.tflite", "wb").write(quantized_model)
 ```
 
 ### Convert to a C array
@@ -71,8 +71,8 @@
 efficiency on embedded platforms.
 
 For an example of how to include and use a model in your program, see
-[`tiny_conv_micro_features_model_data.h`](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/experimental/micro/examples/micro_speech/micro_features/tiny_conv_micro_features_model_data.h)
-in the micro speech example.
+[`sine_model_data.cc`](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/experimental/micro/examples/hello_world/sine_model_data.cc)
+in the *Hello World* example.
 
 ## Model architecture and training
 
diff --git a/tensorflow/lite/g3doc/microcontrollers/get_started.md b/tensorflow/lite/g3doc/microcontrollers/get_started.md
index 535bcc4..375d0c1 100644
--- a/tensorflow/lite/g3doc/microcontrollers/get_started.md
+++ b/tensorflow/lite/g3doc/microcontrollers/get_started.md
@@ -1,104 +1,90 @@
 # Get started with microcontrollers
 
-This document will help you start working with TensorFlow Lite for
-Microcontrollers.
+This document will help you get started using TensorFlow Lite for
+Microcontrollers. It explains how to run the framework's example applications,
+then walks through the code for a simple application that runs inference on a
+microcontroller.
 
-Start by reading through and running our [Examples](#examples).
+## Get a supported device
 
-Note: If you need a device to get started, we recommend the
-[SparkFun Edge Powered by TensorFlow](https://www.sparkfun.com/products/15170).
-It was designed in conjunction with the TensorFlow Lite team to offer a flexible
-platform for experimenting with deep learning on microcontrollers.
+To follow this guide, you'll need a supported hardware device. The example
+application we'll be using has been tested on the following devices:
 
-For a walkthrough of the code required to run inference, see the *Run inference*
-section below.
+*   [Arduino Nano 33 BLE Sense](https://store.arduino.cc/usa/nano-33-ble-sense-with-headers)
+    (using Arduino IDE)
+*   [SparkFun Edge](https://www.sparkfun.com/products/15170) (building directly
+    from source)
+*   [STM32F746 Discovery kit](https://www.st.com/en/evaluation-tools/32f746gdiscovery.html)
+    (using Mbed)
 
-## Examples
+Learn more about supported platforms in
+[TensorFlow Lite for Microcontrollers](index.md).
 
-There are several examples that demonstrate how to build embedded machine
-learning applications with TensorFlow Lite:
+## Explore the examples
 
-### Hello World example
+TensorFlow Lite for Microcontrollers comes with several example applications
+that demonstrate its use for various tasks. At the time of writing, the
+following are available:
+
+*   [Hello World](https://github.com/tensorflow/tensorflow/tree/master/tensorflow/lite/experimental/micro/examples/hello_world) -
+    Demonstrates the absolute basics of using TensorFlow Lite for
+    Microcontrollers
+*   [Micro speech](https://github.com/tensorflow/tensorflow/tree/master/tensorflow/lite/experimental/micro/examples/micro_speech) -
+    Captures audio with a microphone in order to detect the words "yes" and "no"
+*   [Person detection](https://github.com/tensorflow/tensorflow/tree/master/tensorflow/lite/experimental/micro/examples/person_detection) -
+    Captures camera data with an image sensor in order to detect the presence or
+    absence of a person
+*   [Magic wand](https://github.com/tensorflow/tensorflow/tree/master/tensorflow/lite/experimental/micro/examples/magic_wand) -
+    Captures accelerometer data in order to classify three different physical
+    gestures
+
+Each example application has a `README.md` file that explains how it can be
+deployed to its supported platforms.
+
+The rest of this guide walks through the
+[Hello World](https://github.com/tensorflow/tensorflow/tree/master/tensorflow/lite/experimental/micro/examples/hello_world)
+example application.
+
+## The Hello World example
 
 This example is designed to demonstrate the absolute basics of using TensorFlow
 Lite for Microcontrollers. It includes the full end-to-end workflow of training
 a model, converting it for use with TensorFlow Lite, and running inference on a
 microcontroller.
 
-In the example, a model is trained to replicate a sine function. When deployed
-to a microcontroller, its predictions are used to either blink LEDs or control
-an animation.
+In the example, a model is trained to replicate a sine function. It takes a
+single number as its input, and outputs the number's
+[sine](https://en.wikipedia.org/wiki/Sine). When deployed to a microcontroller,
+its predictions are used to either blink LEDs or control an animation.
 
-<a class="button button-primary" href="https://github.com/tensorflow/tensorflow/tree/master/tensorflow/lite/experimental/micro/examples/hello_world">Hello
-World example</a>
+The example includes the following:
 
-The example code includes a Jupyter notebook that demonstrates how the model is
-trained and converted:
+*   A Jupyter notebook that demonstrates how the model is trained and converted
+*   A C++ 11 application that runs inference using the model, tested to work
+    with Arduino, SparkFun Edge, STM32F746G Discovery kit, and macOS
+*   A unit test that demonstrates the process of running inference
 
-<a class="button button-primary" href="https://github.com/tensorflow/tensorflow/tree/master/tensorflow/lite/experimental/micro/examples/hello_world/create_sine_model.ipynb">create_sine_model.ipynb</a>
+### Run the example
 
-The process of building and converting a model is also covered in the guide
-[Build and convert models](build_convert.md).
+To run the example on your device, walk through the instructions in the
+`README.md`:
 
-To see how inference is performed, take a look at
-[hello_world_test.cc](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/experimental/micro/examples/hello_world/hello_world_test.cc).
+<a class="button button-primary" href="https://github.com/tensorflow/tensorflow/tree/master/tensorflow/lite/experimental/micro/examples/hello_world/README.md">Hello
+World README.md</a>
 
-The example is tested on the following platforms:
+## How to run inference
 
--   [SparkFun Edge Powered by TensorFlow (Apollo3 Blue)](https://www.sparkfun.com/products/15170)
--   [Arduino MKRZERO](https://store.arduino.cc/usa/arduino-mkrzero)
--   [STM32F746G Discovery Board](https://www.st.com/en/evaluation-tools/32f746gdiscovery.html)
--   Mac OS X
+The following section walks through the *Hello World* example's
+[`hello_world_test.cc`](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/experimental/micro/examples/hello_world/hello_world_test.cc),
+which demonstrates how to run inference using TensorFlow Lite for
+Microcontrollers.
 
-### Micro Speech example
+The test loads the model and then uses it to run inference several times.
 
-This example uses a simple
-[audio recognition model](https://www.tensorflow.org/tutorials/sequences/audio_recognition)
-to identify keywords in speech. The sample code captures audio from a device's
-microphones. The model classifies this audio in real time, determining whether
-the word "yes" or "no" has been spoken.
+### Include the library headers
 
-<a class="button button-primary" href="https://github.com/tensorflow/tensorflow/tree/master/tensorflow/lite/experimental/micro/examples/micro_speech">Micro
-Speech example</a>
-
-The [Run inference](#run_inference) section walks through the code of the Micro
-Speech sample and explains how it works.
-
-The example is tested on the following platforms:
-
--   [SparkFun Edge Powered by TensorFlow (Apollo3 Blue)](https://www.sparkfun.com/products/15170)
--   [STM32F746G Discovery Board](https://www.st.com/en/evaluation-tools/32f746gdiscovery.html)
--   Mac OS X
-
-Note: To get started using the SparkFun Edge board, we recommend following
-[Machine learning on a microcontroller with SparkFun TensorFlow](https://codelabs.developers.google.com/codelabs/sparkfun-tensorflow),
-a codelab that introduces you to the development workflow using the Micro Speech
-example.
-
-### Micro Vision example
-
-This example shows how you can use TensorFlow Lite to run a 250 kilobyte neural
-network to recognize people in images captured by a camera. It is designed to
-run on systems with small amounts of memory such as microcontrollers and DSPs.
-
-<a class="button button-primary" href="https://github.com/tensorflow/tensorflow/tree/master/tensorflow/lite/experimental/micro/examples/person_detection">Person
-detection example</a>
-
-The example is tested on the following platforms:
-
--   [SparkFun Edge Powered by TensorFlow (Apollo3 Blue)](https://www.sparkfun.com/products/15170)
--   [STM32F746G Discovery Board](https://www.st.com/en/evaluation-tools/32f746gdiscovery.html)
--   Mac OS X
-
-## Run inference
-
-The following section walks through the [Micro Speech](#micro_speech) sample's
-[main.cc](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/experimental/micro/examples/micro_speech/main.cc)
-and explains how it used TensorFlow Lite for Microcontrollers to run inference.
-
-### Includes
-
-To use the library, we must include the following header files:
+To use the TensorFlow Lite for Microcontrollers library, we must include the
+following header files:
 
 ```C++
 #include "tensorflow/lite/experimental/micro/kernels/all_ops_resolver.h"
@@ -113,30 +99,43 @@
 -   [`micro_error_reporter.h`](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/experimental/micro/micro_error_reporter.h)
     outputs debug information.
 -   [`micro_interpreter.h`](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/experimental/micro/micro_interpreter.h)
-    contains code to handle and run models.
+    contains code to load and run models.
 -   [`schema_generated.h`](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/schema/schema_generated.h)
     contains the schema for the TensorFlow Lite
     [`FlatBuffer`](https://google.github.io/flatbuffers/) model file format.
 -   [`version.h`](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/version.h)
     provides versioning information for the TensorFlow Lite schema.
 
-The sample also includes some other files. These are the most significant:
+### Include the model
+
+The TensorFlow Lite for Microcontrollers interpreter expects the model to be
+provided as a C++ array. In the *Hello World* example, the model is defined in
+`sine_model_data.h` and `sine_model_data.cc`. The header is included with the
+following line:
 
 ```C++
-#include "tensorflow/lite/experimental/micro/examples/micro_speech/feature_provider.h"
-#include "tensorflow/lite/experimental/micro/examples/micro_speech/micro_features/micro_model_settings.h"
-#include "tensorflow/lite/experimental/micro/examples/micro_speech/micro_features/tiny_conv_micro_features_model_data.h"
+#include "tensorflow/lite/experimental/micro/examples/hello_world/sine_model_data.h"
 ```
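+
+For reference, the header essentially declares the array and its length; a
+sketch of what it contains (the exact declarations may differ) is:
+
+```C++
+extern const unsigned char g_sine_model_data[];
+extern const int g_sine_model_data_len;
+```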
 
--   [`feature_provider.h`](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/experimental/micro/examples/micro_speech/micro_features/feature_provider.h)
-    contains code to extract features from the audio stream to input to the
-    model.
--   [`tiny_conv_micro_features_model_data.h`](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/experimental/micro/examples/micro_speech/micro_features/tiny_conv_micro_features_model_data.h)
-    contains the model stored as a `char` array. Read
-    [Build and convert models](build_convert.md) to learn how to convert a
-    TensorFlow Lite model into this format.
--   [`micro_model_settings.h`](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/experimental/micro/examples/micro_speech/micro_features/micro_model_settings.h)
-    defines various constants related to the model.
+### Set up the unit test
+
+The code we are walking through is a unit test that uses the TensorFlow Lite for
+Microcontrollers unit test framework. To load the framework, we include the
+following file:
+
+```C++
+#include "tensorflow/lite/experimental/micro/testing/micro_test.h"
+```
+
+The test is defined using the following macros:
+
+```C++
+TF_LITE_MICRO_TESTS_BEGIN
+
+TF_LITE_MICRO_TEST(LoadModelAndPerformInference) {
+```
+
+The remainder of the code demonstrates how to load the model and run inference.
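+
+Putting the macros together, the overall skeleton of the test looks like this
+(a sketch with the body elided; the suite is closed by the matching
+`TF_LITE_MICRO_TESTS_END` macro):
+
+```C++
+#include "tensorflow/lite/experimental/micro/testing/micro_test.h"
+
+TF_LITE_MICRO_TESTS_BEGIN
+
+TF_LITE_MICRO_TEST(LoadModelAndPerformInference) {
+  // The logging, model loading, and inference code described below goes here.
+}
+
+TF_LITE_MICRO_TESTS_END
+```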
 
 ### Set up logging
 
@@ -155,20 +154,17 @@
 
 ### Load a model
 
-In the following code, the model is instantiated from a `char` array,
-`g_tiny_conv_micro_features_model_data` (to learn how this is created, see
-[Build and convert models](build_convert.md)). We then check the model to ensure
-its schema version is compatible with the version we are using:
+In the following code, the model is instantiated using data from a `char` array,
+`g_sine_model_data`, which is declared in `sine_model_data.h`. We then check the
+model to ensure its schema version is compatible with the version we are using:
 
 ```C++
-const tflite::Model* model =
-    ::tflite::GetModel(g_tiny_conv_micro_features_model_data);
+const tflite::Model* model = ::tflite::GetModel(g_sine_model_data);
 if (model->version() != TFLITE_SCHEMA_VERSION) {
   error_reporter->Report(
       "Model provided is schema version %d not equal "
       "to supported version %d.\n",
       model->version(), TFLITE_SCHEMA_VERSION);
-  return 1;
 }
 ```
 
@@ -176,29 +172,35 @@
 
 An
 [`AllOpsResolver`](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/experimental/micro/kernels/all_ops_resolver.h)
-instance is required by the interpreter to access TensorFlow operations. This
-class can be extended to add custom operations to your project:
+instance is declared. This will be used by the interpreter to access the
+operations that are used by the model:
 
 ```C++
 tflite::ops::micro::AllOpsResolver resolver;
 ```
 
+The `AllOpsResolver` loads all of the operations available in TensorFlow Lite
+for Microcontrollers, which uses a lot of memory. Since a given model will only
+use a subset of these operations, it's recommended that real-world applications
+load only the operations that are needed.
+
+This is done using a different class, `MicroMutableOpResolver`. You can see how
+to use it in the *Micro speech* example's
+[`micro_speech_test.cc`](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/experimental/micro/examples/micro_speech/micro_speech_test.cc).
+
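+As a rough sketch of that approach (the registration calls below are the ones
+the micro speech test uses at the time of writing; the operator list for your
+own model will differ):
+
+```C++
+#include "tensorflow/lite/experimental/micro/micro_mutable_op_resolver.h"
+
+tflite::MicroMutableOpResolver micro_mutable_op_resolver;
+micro_mutable_op_resolver.AddBuiltin(
+    tflite::BuiltinOperator_DEPTHWISE_CONV_2D,
+    tflite::ops::micro::Register_DEPTHWISE_CONV_2D());
+micro_mutable_op_resolver.AddBuiltin(
+    tflite::BuiltinOperator_FULLY_CONNECTED,
+    tflite::ops::micro::Register_FULLY_CONNECTED());
+micro_mutable_op_resolver.AddBuiltin(tflite::BuiltinOperator_SOFTMAX,
+                                     tflite::ops::micro::Register_SOFTMAX());
+```
+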
 ### Allocate memory
 
 We need to preallocate a certain amount of memory for input, output, and
 intermediate arrays. This is provided as a `uint8_t` array of size
-`tensor_arena_size`, which is passed into a `tflite::SimpleTensorAllocator`
-instance:
+`tensor_arena_size`:
 
 ```C++
-const int tensor_arena_size = 10 * 1024;
+const int tensor_arena_size = 2 * 1024;
 uint8_t tensor_arena[tensor_arena_size];
-tflite::SimpleTensorAllocator tensor_allocator(tensor_arena,
-                                               tensor_arena_size);
 ```
 
-Note: The size required will depend on the model you are using, and may need to
-be determined by experimentation.
+The size required will depend on the model you are using, and may need to be
+determined by experimentation.
 
 ### Instantiate interpreter
 
@@ -206,64 +208,63 @@
 created earlier:
 
 ```C++
-tflite::MicroInterpreter interpreter(model, resolver, &tensor_allocator,
-                                     error_reporter);
+tflite::MicroInterpreter interpreter(model, resolver, tensor_arena,
+                                     tensor_arena_size, error_reporter);
+```
+
+### Allocate tensors
+
+We tell the interpreter to allocate memory from the `tensor_arena` for the
+model's tensors:
+
+```C++
+interpreter.AllocateTensors();
 ```
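+
+In a real application it's worth checking the returned status. As a minimal
+sketch, assuming `AllocateTensors()` reports failures via a `TfLiteStatus` in
+the same way as `Invoke()`:
+
+```C++
+TfLiteStatus allocate_status = interpreter.AllocateTensors();
+if (allocate_status != kTfLiteOk) {
+  error_reporter->Report("AllocateTensors() failed\n");
+}
+```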
 
 ### Validate input shape
 
 The `MicroInterpreter` instance can provide us with a pointer to the model's
 input tensor by calling `.input(0)`, where `0` represents the first (and only)
-input tensor. We inspect this tensor to confirm that its shape and type are what
-we are expecting:
+input tensor:
 
 ```C++
-TfLiteTensor* model_input = interpreter.input(0);
-if ((model_input->dims->size != 4) || (model_input->dims->data[0] != 1) ||
-    (model_input->dims->data[1] != kFeatureSliceCount) ||
-    (model_input->dims->data[2] != kFeatureSliceSize) ||
-    (model_input->type != kTfLiteUInt8)) {
-  error_reporter->Report("Bad input tensor parameters in model");
-  return 1;
-}
+// Obtain a pointer to the model's input tensor
+TfLiteTensor* input = interpreter.input(0);
 ```
 
-In this snippet, the variables `kFeatureSliceCount` and `kFeatureSliceSize`
-relate to properties of the input and are defined in
-[`micro_model_settings.h`](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/experimental/micro/examples/micro_speech/micro_features/micro_model_settings.h).
-The enum value `kTfLiteUInt8` is a reference to one of the TensorFlow Lite data
-types, and is defined in
+We then inspect this tensor to confirm that its shape and type are what we are
+expecting:
+
+```C++
+// Make sure the input has the properties we expect
+TF_LITE_MICRO_EXPECT_NE(nullptr, input);
+// The property "dims" tells us the tensor's shape. It has one element for
+// each dimension. Our input is a 2D tensor containing 1 element, so "dims"
+// should have size 2.
+TF_LITE_MICRO_EXPECT_EQ(2, input->dims->size);
+// The value of each element gives the length of the corresponding tensor.
+// We should expect two single element tensors (one is contained within the
+// other).
+TF_LITE_MICRO_EXPECT_EQ(1, input->dims->data[0]);
+TF_LITE_MICRO_EXPECT_EQ(1, input->dims->data[1]);
+// The input is a 32 bit floating point value
+TF_LITE_MICRO_EXPECT_EQ(kTfLiteFloat32, input->type);
+```
+
+The enum value `kTfLiteFloat32` is a reference to one of the TensorFlow Lite
+data types, and is defined in
 [`c_api_internal.h`](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/c/c_api_internal.h).
 
-### Generate features
+### Provide an input value
 
-The data we input to our model must be generated from the microcontroller's
-audio input. The `FeatureProvider` class defined in
-[`feature_provider.h`](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/experimental/micro/examples/micro_speech/micro_features/feature_provider.h)
-captures audio and converts it into a set of features that will be passed into
-the model. When it is instantiated, we use the `TfLiteTensor` obtained earlier
-to pass in a pointer to the input array. This is used by the `FeatureProvider`
-to populate the input data that will be passed into the model:
+To provide an input to the model, we set the contents of the input tensor, as
+follows:
 
 ```C++
-  FeatureProvider feature_provider(kFeatureElementCount,
-                                   model_input->data.uint8);
+input->data.f[0] = 0.;
 ```
 
-The following code causes the `FeatureProvider` to generate a set of features
-from the most recent second of audio and populate the input tensor:
-
-```C++
-TfLiteStatus feature_status = feature_provider.PopulateFeatureData(
-    error_reporter, previous_time, current_time, &how_many_new_slices);
-```
-
-In the sample, feature generation and inference happens in a loop, so the device
-is constantly capturing and processing new audio.
-
-If you are writing your own program, you will likely generate features in a
-different way, but you will always populate the input tensor with data before
-running the model.
+In this case, we input a floating point value representing `0`.
 
 ### Run the model
 
@@ -273,8 +274,7 @@
 ```C++
 TfLiteStatus invoke_status = interpreter.Invoke();
 if (invoke_status != kTfLiteOk) {
-  error_reporter->Report("Invoke failed");
-  return 1;
+  error_reporter->Report("Invoke failed\n");
 }
 ```
 
@@ -283,42 +283,83 @@
 [`c_api_internal.h`](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/c/c_api_internal.h),
 are `kTfLiteOk` and `kTfLiteError`.
 
+The following code asserts that the value is `kTfLiteOk`, meaning inference was
+successfully run.
+
+```C++
+TF_LITE_MICRO_EXPECT_EQ(kTfLiteOk, invoke_status);
+```
+
 ### Obtain the output
 
 The model's output tensor can be obtained by calling `output(0)` on the
 `tflite::MicroInterpreter`, where `0` represents the first (and only) output
 tensor.
 
-In the sample, the output is an array representing the probability of the input
-belonging to various classes (representing "yes", "no", "unknown", and
-"silence"). Since they are in a set order, we can use simple logic to determine
-which class has the highest probability:
+In the example, the model's output is a single floating point value contained
+within a 2D tensor:
 
 ```C++
-    TfLiteTensor* output = interpreter.output(0);
-    uint8_t top_category_score = 0;
-    int top_category_index;
-    for (int category_index = 0; category_index < kCategoryCount;
-         ++category_index) {
-      const uint8_t category_score = output->data.uint8[category_index];
-      if (category_score > top_category_score) {
-        top_category_score = category_score;
-        top_category_index = category_index;
-      }
-    }
+TfLiteTensor* output = interpreter.output(0);
+TF_LITE_MICRO_EXPECT_EQ(2, output->dims->size);
+TF_LITE_MICRO_EXPECT_EQ(1, output->dims->data[0]);
+TF_LITE_MICRO_EXPECT_EQ(1, output->dims->data[1]);
+TF_LITE_MICRO_EXPECT_EQ(kTfLiteFloat32, output->type);
 ```
 
-Elsewhere in the sample, a more sophisticated algorithm is used to smooth
-recognition results across a number of frames. This is defined in
-[recognize_commands.h](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/experimental/micro/examples/micro_speech/recognize_commands.h).
-The same technique can be used to improve reliability when processing any
-continuous stream of data.
+We can read the value directly from the output tensor and assert that it is what
+we expect:
+
+```C++
+// Obtain the output value from the tensor
+float value = output->data.f[0];
+// Check that the output value is within 0.05 of the expected value
+TF_LITE_MICRO_EXPECT_NEAR(0., value, 0.05);
+```
+
+### Run inference again
+
+The remainder of the code runs inference several more times. In each instance,
+we assign a value to the input tensor, invoke the interpreter, and read the
+result from the output tensor:
+
+```C++
+input->data.f[0] = 1.;
+interpreter.Invoke();
+value = output->data.f[0];
+TF_LITE_MICRO_EXPECT_NEAR(0.841, value, 0.05);
+
+input->data.f[0] = 3.;
+interpreter.Invoke();
+value = output->data.f[0];
+TF_LITE_MICRO_EXPECT_NEAR(0.141, value, 0.05);
+
+input->data.f[0] = 5.;
+interpreter.Invoke();
+value = output->data.f[0];
+TF_LITE_MICRO_EXPECT_NEAR(-0.959, value, 0.05);
+```
+
+### Read the application code
+
+Once you have walked through this unit test, you should be able to understand
+the example's application code, located in
+[`main_functions.cc`](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/experimental/micro/examples/hello_world/main_functions.cc).
+It follows a similar process, but generates an input value based on how many
+inferences have been run, and calls a device-specific function that displays the
+model's output to the user.
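+
+As a rough illustration of that pattern (this is a sketch, not the actual
+`main_functions.cc`; the constants and the `HandleOutput` hook are hypothetical
+names):
+
+```C++
+#include "tensorflow/lite/experimental/micro/micro_interpreter.h"
+
+// Hypothetical sketch of one inference step driven by a counter.
+void RunInferenceStep(tflite::MicroInterpreter* interpreter,
+                      int inference_count) {
+  // Sweep the input through the model's 0 to 2*pi range as the counter grows.
+  const float kXrange = 2.f * 3.14159265359f;
+  const int kInferencesPerCycle = 100;  // assumed value
+  float x = kXrange * (inference_count % kInferencesPerCycle) /
+            kInferencesPerCycle;
+  interpreter->input(0)->data.f[0] = x;
+  interpreter->Invoke();
+  float y = interpreter->output(0)->data.f[0];
+  // Hand x and y to a device-specific display function here, for example:
+  // HandleOutput(x, y);  // hypothetical name
+  (void)y;
+}
+```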
 
 ## Next steps
 
-Once you have built and run the samples, read the following documents:
+To understand how the library can be used with a variety of models and
+applications, we recommend deploying the other examples and walking through
+their code.
 
-*   Learn how to work with models in
-    [Build and convert models](build_convert.md).
-*   Learn more about the C++ library in
-    [Understand the C++ library](library.md).
+<a class="button button-primary" href="https://github.com/tensorflow/tensorflow/tree/master/tensorflow/lite/experimental/micro/examples">Example
+applications on GitHub</a>
+
+To learn how to use the library in your own project, read
+[Understand the C++ library](library.md).
+
+For information about training and converting models for deployment on
+microcontrollers, read [Build and convert models](build_convert.md).
diff --git a/tensorflow/lite/g3doc/microcontrollers/index.md b/tensorflow/lite/g3doc/microcontrollers/index.md
new file mode 100644
index 0000000..1ed1399
--- /dev/null
+++ b/tensorflow/lite/g3doc/microcontrollers/index.md
@@ -0,0 +1,113 @@
+# TensorFlow Lite for Microcontrollers
+
+TensorFlow Lite for Microcontrollers is an experimental port of TensorFlow Lite
+designed to run machine learning models on microcontrollers and other devices
+with only kilobytes of memory.
+
+It doesn't require operating system support, any standard C or C++ libraries, or
+dynamic memory allocation. The core runtime fits in 16 KB on an Arm Cortex M3,
+and with enough operators to run a speech keyword detection model, takes up a
+total of 22 KB.
+
+There are example applications demonstrating the use of the framework for
+tasks including wake word detection, gesture classification from accelerometer
+data, and image classification using camera data.
+
+## Get started
+
+To try the example applications and learn how to use the API, read
+[Get started with microcontrollers](get_started.md).
+
+## Supported platforms
+
+TensorFlow Lite for Microcontrollers is written in C++ 11 and requires a 32-bit
+platform. It has been tested extensively with many processors based on the
+[Arm Cortex-M Series](https://developer.arm.com/ip-products/processors/cortex-m)
+architecture, and has been ported to other architectures including
+[ESP32](https://www.espressif.com/en/products/hardware/esp32/overview).
+
+The framework is available as an Arduino library. It can also generate projects
+for development environments such as Mbed. It is open source and can be included
+in any C++ 11 project.
+
+There are example applications available for the following development boards:
+
+*   [Arduino Nano 33 BLE Sense](https://store.arduino.cc/usa/nano-33-ble-sense-with-headers)
+*   [SparkFun Edge](https://www.sparkfun.com/products/15170)
+*   [STM32F746 Discovery kit](https://www.st.com/en/evaluation-tools/32f746gdiscovery.html)
+
+To learn more about the libraries and examples, see
+[Get started with microcontrollers](get_started.md).
+
+## Why microcontrollers are important
+
+Microcontrollers are typically small, low-powered computing devices that are
+often embedded within hardware that requires basic computation, including
+household appliances and Internet of Things devices. Billions of
+microcontrollers are manufactured each year.
+
+Microcontrollers are often optimized for low energy consumption and small size,
+at the cost of reduced processing power, memory, and storage. Some
+microcontrollers have features designed to optimize performance on machine
+learning tasks.
+
+By running machine learning inference on microcontrollers, developers can add AI
+to a vast range of hardware devices without relying on network connectivity,
+which is often subject to bandwidth and power constraints and results in high
+latency. Running inference on-device can also help preserve privacy, since no
+data has to leave the device.
+
+## Developer workflow
+
+To deploy a TensorFlow model to a microcontroller, you will need to follow this
+process:
+
+1.  **Create or obtain a TensorFlow model**
+
+    The model must be small enough to fit on your target device after
+    conversion, and it can only use
+    [supported operations](build_convert.md#operation-support). If you want to
+    use operations that are not currently supported, you can provide your own
+    implementations.
+
+2.  **Convert the model to a TensorFlow Lite FlatBuffer**
+
+    You will convert your model into the standard TensorFlow Lite format using
+    the [TensorFlow Lite converter](build_convert.md#model-conversion). You may
+    wish to output a quantized model, since these are smaller in size and more
+    efficient to execute.
+
+3.  **Convert the FlatBuffer to a C byte array**
+
+    Models are kept in read-only program memory and provided in the form of a
+    simple C file. Standard tools can be used to
+    [convert the FlatBuffer into a C array](build_convert.md#convert-to-a-c-array);
+    a sketch of such a file appears after this list.
+
+4.  **Integrate the TensorFlow Lite for Microcontrollers C++ library**
+
+    Write your microcontroller code to collect data, perform inference using the
+    [C++ library](library.md), and make use of the results.
+
+5.  **Deploy to your device**
+
+    Build and deploy the program to your device.
+
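+As a minimal sketch of the kind of file step 3 produces (the identifier names
+and byte values here are placeholders, not a real model):
+
+```C++
+// Hypothetical output of a FlatBuffer-to-C-array conversion tool.
+alignas(8) const unsigned char g_model_data[] = {
+    0x1c, 0x00, 0x00, 0x00, 0x54, 0x46, 0x4c, 0x33,  // placeholder bytes
+};
+const unsigned int g_model_data_len = sizeof(g_model_data);
+```
+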
+## Limitations
+
+TensorFlow Lite for Microcontrollers is designed for the specific constraints of
+microcontroller development. If you are working on more powerful devices (for
+example, an embedded Linux device like the Raspberry Pi), the standard
+TensorFlow Lite framework might be easier to integrate.
+
+The following limitations should be considered:
+
+*   Support for a [limited subset](build_convert.md#operation-support) of
+    TensorFlow operations
+*   Support for a limited set of devices
+*   Low-level C++ API requiring manual memory management
+*   Training is not supported
+
+## Next steps
+
+Read [Get started with microcontrollers](get_started.md) to try the example
+applications and learn how to use the API.
diff --git a/tensorflow/lite/g3doc/microcontrollers/library.md b/tensorflow/lite/g3doc/microcontrollers/library.md
index 6dc7261..17b7b69 100644
--- a/tensorflow/lite/g3doc/microcontrollers/library.md
+++ b/tensorflow/lite/g3doc/microcontrollers/library.md
@@ -5,13 +5,8 @@
 It is designed to be readable, easy to modify, well-tested, easy to integrate,
 and compatible with regular TensorFlow Lite.
 
-The following document will outline the basic structure of the C++ library,
-provide the commands required for compilation, and give an overview of how to
-port to new devices.
-
-The
-[README.md](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/experimental/micro/README.md#how-to-port-tensorflow-lite-micro-to-a-new-platform)
-contains more in-depth information on all of these topics.
+The following document outlines the basic structure of the C++ library and
+provides information about creating your own project.
 
 ## File structure
 
@@ -20,8 +15,7 @@
 root directory has a relatively simple structure. However, since it is located
 inside of the extensive TensorFlow repository, we have created scripts and
 pre-generated project files that provide the relevant source files in isolation
-within various embedded development environments such as Arduino, Keil, Make,
-and Mbed.
+within various embedded development environments.
 
 ### Key files
 
@@ -29,7 +23,13 @@
 interpreter are located in the root of the project, accompanied by tests:
 
 -   [`all_ops_resolver.h`](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/experimental/micro/kernels/all_ops_resolver.h)
-    provides the operations used by the interpreter to run the model.
+    or
+    [`micro_mutable_op_resolver.h`](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/experimental/micro/micro_mutable_op_resolver.h)
+    can be used to provide the operations used by the interpreter to run the
+    model. Since `all_ops_resolver.h` pulls in every available operation, it
+    uses a lot of memory. In production applications, you should use
+    `micro_mutable_op_resolver.h` to pull in only the operations your model
+    needs (a sketch of this appears below).
 -   [`micro_error_reporter.h`](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/experimental/micro/micro_error_reporter.h)
     outputs debug information.
 -   [`micro_interpreter.h`](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/experimental/micro/micro_interpreter.h)
@@ -51,17 +51,32 @@
 -   [`examples`](https://github.com/tensorflow/tensorflow/tree/master/tensorflow/lite/experimental/micro/examples),
     which contains sample code.
 
-### Generate project files
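+As noted above, production applications should register only the operations
+their model uses. The following is a minimal sketch of doing so with
+`MicroMutableOpResolver`; the three operations shown are placeholders, and the
+set you register must match your model:
+
+```c++
+#include "tensorflow/lite/experimental/micro/micro_mutable_op_resolver.h"
+
+// Forward-declare the registration functions for the kernels the model
+// needs. These three are example placeholders.
+namespace tflite {
+namespace ops {
+namespace micro {
+TfLiteRegistration* Register_DEPTHWISE_CONV_2D();
+TfLiteRegistration* Register_FULLY_CONNECTED();
+TfLiteRegistration* Register_SOFTMAX();
+}  // namespace micro
+}  // namespace ops
+}  // namespace tflite
+
+void RegisterOps(tflite::MicroMutableOpResolver* resolver) {
+  resolver->AddBuiltin(tflite::BuiltinOperator_DEPTHWISE_CONV_2D,
+                       tflite::ops::micro::Register_DEPTHWISE_CONV_2D());
+  resolver->AddBuiltin(tflite::BuiltinOperator_FULLY_CONNECTED,
+                       tflite::ops::micro::Register_FULLY_CONNECTED());
+  resolver->AddBuiltin(tflite::BuiltinOperator_SOFTMAX,
+                       tflite::ops::micro::Register_SOFTMAX());
+}
+```
+
+The resolver is then passed to the `MicroInterpreter` constructor in place of
+an `AllOpsResolver` instance.
+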
+## Start a new project
 
-The project's `Makefile` is able to generate standalone projects containing all
-necessary source files that can be imported into embedded development
-environments. The current supported environments are Arduino, Keil, Make, and
-Mbed.
+We recommend using the *Hello World* example as a template for new projects. You
+can obtain a version of it for your platform of choice by following the
+instructions in this section.
 
-Note: We host prebuilt projects for some of these environments. See
-[Supported platforms](overview.md#supported-platforms) to download.
+### Use the Arduino library
 
-To generate these projects with Make, use the following command:
+If you are using Arduino, the *Hello World* example is included in the
+`Arduino_TensorFlowLite` Arduino library, which you can install from the
+Arduino IDE's library manager or use in
+[Arduino Create](https://create.arduino.cc/).
+
+Once the library has been added, go to `File -> Examples`. Near the bottom of
+the list, you should see an example named `TensorFlowLite:hello_world`. Select
+it and click `hello_world` to load the example. You can then save a copy and
+use it as the basis of your own project.
+
+### Generate projects for other platforms
+
+TensorFlow Lite for Microcontrollers uses a `Makefile` to generate standalone
+projects that contain all of the necessary source files. The currently
+supported environments are Keil, Make, and Mbed.
+
+To generate these projects with Make, clone the
+[TensorFlow repository](https://github.com/tensorflow/tensorflow) and run the
+following command:
 
 ```bash
 make -f tensorflow/lite/experimental/micro/tools/make/Makefile generate_projects
@@ -72,39 +87,104 @@
 inside a path like
 `tensorflow/lite/experimental/micro/tools/make/gen/linux_x86_64/prj/` (the exact
 path depends on your host operating system). These folders contain the generated
-project and source files. For example,
-`tensorflow/lite/experimental/micro/tools/make/gen/linux_x86_64/prj/keil`
-contains the Keil uVision targets.
+project and source files.
 
-## Build the library
+After running the command, you'll be able to find the *Hello World* projects in
+`tensorflow/lite/experimental/micro/tools/make/gen/linux_x86_64/prj/hello_world`.
+For example, `hello_world/keil` will contain the Keil project.
 
-If you are using a generated project, see its included README for build
-instructions.
+## Run the tests
 
-To build the library and run tests from the main TensorFlow repository, run the
-following commands:
+To build the library and run all of its unit tests, use the following command:
 
-1.  Clone the TensorFlow repository from GitHub to a convenient place.
+```bash
+make -f tensorflow/lite/experimental/micro/tools/make/Makefile test
+```
 
-    ```bash
-    git clone --depth 1 https://github.com/tensorflow/tensorflow.git
-    ```
+To run an individual test, use the following command, replacing `<test_name>`
+with the name of the test:
 
-1.  Enter the directory that was created in the previous step.
+```bash
+make -f tensorflow/lite/experimental/micro/tools/make/Makefile test_<test_name>
+```
 
-    ```bash
-    cd tensorflow
-    ```
+You can find the test names in the project's Makefiles. For example,
+`examples/hello_world/Makefile.inc` specifies the test names for the *Hello
+World* example.
 
-1.  Invoke the `Makefile` to build the project and run tests. Note that this
-    will download all required dependencies:
+## Build binaries
 
-    ```bash
-    make -f tensorflow/lite/experimental/micro/tools/make/Makefile test
-    ```
+To build a runnable binary for a given project (such as an example application),
+use the following command, replacing `<project_name>` with the project you wish
+to build:
+
+```bash
+make -f tensorflow/lite/experimental/micro/tools/make/Makefile <project_name>_bin
+```
+
+For example, the following command will build a binary for the *Hello World*
+application:
+
+```bash
+make -f tensorflow/lite/experimental/micro/tools/make/Makefile hello_world_bin
+```
+
+By default, the project will be compiled for the host operating system. To
+specify a different target architecture, use `TARGET=`. The following example
+shows how to build the *Hello World* example for the SparkFun Edge:
+
+```bash
+make -f tensorflow/lite/experimental/micro/tools/make/Makefile TARGET=sparkfun_edge hello_world_bin
+```
+
+When a target is specified, any available target-specific source files will be
+used in place of the original code. For example, the subdirectory
+`examples/hello_world/sparkfun_edge` contains SparkFun Edge implementations of
+the files `constants.cc` and `output_handler.cc`, which will be used when the
+target `sparkfun_edge` is specified.
+
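+To illustrate, here is a hedged sketch of such a target-specific override. The
+portable `output_handler.cc` logs the inference result, while a version in a
+target subdirectory can drive hardware instead; the signature below follows
+the *Hello World* example, but the body is illustrative only.
+
+```c++
+#include "tensorflow/lite/experimental/micro/examples/hello_world/output_handler.h"
+
+// Portable reference implementation: just log the values. When
+// TARGET=sparkfun_edge is specified, the Makefile compiles the file of the
+// same name from examples/hello_world/sparkfun_edge instead, which might,
+// for example, adjust the board's LEDs based on y_value.
+void HandleOutput(tflite::ErrorReporter* error_reporter, float x_value,
+                  float y_value) {
+  error_reporter->Report("x_value: %f, y_value: %f",
+                         static_cast<double>(x_value),
+                         static_cast<double>(y_value));
+}
+```
+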
+You can find the project names in the project's Makefiles. For example,
+`examples/hello_world/Makefile.inc` specifies the binary names for the *Hello
+World* example.
+
+## Optimized kernels
+
+The reference kernels in the root of
+`tensorflow/lite/experimental/micro/kernels` are implemented in pure C/C++, and
+do not include platform-specific hardware optimizations.
+
+Optimized versions of kernels are provided in subdirectories. For example,
+`kernels/cmsis-nn` contains several optimized kernels that make use of Arm's
+CMSIS-NN library.
+
+To generate projects using optimized kernels, use the following command,
+replacing `<subdirectory_name>` with the name of the subdirectory that contains
+the optimizations (for example, `cmsis-nn`):
+
+```bash
+make -f tensorflow/lite/experimental/micro/tools/make/Makefile TAGS=<subdirectory_name> generate_projects
+```
+
+You can add your own optimizations by creating a new subfolder for them. We
+encourage pull requests for new optimized implementations.
+
+## Generate the Arduino library
+
+A nightly build of the Arduino library is available via the Arduino IDE's
+library manager.
+
+If you need to generate a new build of the library, you can run the following
+script from the TensorFlow repository:
+
+```bash
+./tensorflow/lite/experimental/micro/tools/ci_build/test_arduino.sh
+```
+
+The resulting library can be found in
+`tensorflow/lite/experimental/micro/tools/make/gen/arduino_x86_64/prj/tensorflow_lite.zip`.
 
 ## Port to new devices
 
 Guidance on porting TensorFlow Lite for Microcontrollers to new platforms and
 devices can be found in
-[README.md](https://github.com/tensorflow/tensorflow/tree/master/tensorflow/lite/experimental/micro#how-to-port-tensorflow-lite-micro-to-a-new-platform).
+[`micro/README.md`](https://github.com/tensorflow/tensorflow/tree/master/tensorflow/lite/experimental/micro/README.md).
diff --git a/tensorflow/lite/g3doc/microcontrollers/overview.md b/tensorflow/lite/g3doc/microcontrollers/overview.md
deleted file mode 100644
index b9a16bd..0000000
--- a/tensorflow/lite/g3doc/microcontrollers/overview.md
+++ /dev/null
@@ -1,136 +0,0 @@
-# TensorFlow Lite for Microcontrollers
-
-TensorFlow Lite for Microcontrollers is an experimental port of TensorFlow Lite
-aimed at microcontrollers and other devices with only kilobytes of memory.
-
-It is designed to be portable even to "bare metal" systems, so it doesn't
-require operating system support, any standard C or C++ libraries, or dynamic
-memory allocation. The core runtime fits in 16KB on a Cortex M3, and with enough
-operators to run a speech keyword detection model, takes up a total of 22KB.
-
-## Get started
-
-To quickly get up and running with TensorFlow Lite for Microcontrollers, read
-[Get started with microcontrollers](get_started.md).
-
-## Why microcontrollers are important
-
-Microcontrollers are typically small, low-powered computing devices that are
-often embedded within hardware that requires basic computation, including
-household appliances and Internet of Things devices. Billions of
-microcontrollers are manufactured each year.
-
-Microcontrollers are often optimized for low energy consumption and small size,
-at the cost of reduced processing power, memory, and storage. Some
-microcontrollers have features designed to optimize performance on machine
-learning tasks.
-
-By running machine learning inference on microcontrollers, developers can add AI
-to a vast range of hardware devices without relying on network connectivity,
-which is often subject to bandwidth and power constraints and results in high
-latency. Running inference on-device can also help preserve privacy, since no
-data has to leave the device.
-
-## Features and components
-
-*   C++ API, with runtime that fits in 16KB on a Cortex M3
-*   Uses standard TensorFlow Lite
-    [FlatBuffer](https://google.github.io/flatbuffers/) schema
-*   Pre-generated project files for popular embedded development platforms, such
-    as Arduino, Keil, and Mbed
-*   Optimizations for several embedded platforms
-*   [Sample code](https://github.com/tensorflow/tensorflow/tree/master/tensorflow/lite/experimental/micro/examples/micro_speech)
-    demonstrating spoken hotword detection
-
-## Developer workflow
-
-This is the process for deploying a TensorFlow model to a microcontroller:
-
-1.  **Create or obtain a TensorFlow model**
-
-    The model must be small enough to fit on your target device after
-    conversion, and it can only use
-    [supported operations](build_convert.md#operation-support). If you want to
-    use operations that are not currently supported, you can provide your own
-    implementations.
-
-2.  **Convert the model to a TensorFlow Lite FlatBuffer**
-
-    You will convert your model into the standard TensorFlow Lite format using
-    the [TensorFlow Lite converter](build_convert.md#model-conversion). You may
-    wish to output a quantized model, since these are smaller in size and more
-    efficient to execute.
-
-3.  **Convert the FlatBuffer to a C byte array**
-
-    Models are kept in read-only program memory and provided in the form of a
-    simple C file. Standard tools can be used to
-    [convert the FlatBuffer into a C array](build_convert.md#convert-to-a-c-array).
-
-4.  **Integrate the TensorFlow Lite for Microcontrollers C++ library**
-
-    Write your microcontroller code to perform inference using the
-    [C++ library](library.md).
-
-5.  **Deploy to your device**
-
-    Build and deploy the program to your device.
-
-## Supported platforms
-
-One of the challenges of embedded software development is that there are a lot
-of different architectures, devices, operating systems, and build systems. We
-aim to support as many of the popular combinations as we can, and make it as
-easy as possible to add support for others.
-
-If you're a product developer, we have build instructions or pre-generated
-project files that you can download for the following platforms:
-
-Device                                                                                         | Mbed                                                                           | Keil                                                                           | Make/GCC
----------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------ | ------------------------------------------------------------------------------ | --------
-[STM32F746G Discovery Board](https://www.st.com/en/evaluation-tools/32f746gdiscovery.html)     | [Download](https://drive.google.com/open?id=1OtgVkytQBrEYIpJPsE8F6GUKHPBS3Xeb) | -                                                                              | [Download](https://drive.google.com/open?id=1u46mTtAMZ7Y1aD-He1u3R8AE4ZyEpnOl)
-["Blue Pill" STM32F103-compatible development board](https://github.com/google/stm32_bare_lib) | -                                                                              | -                                                                              | [Instructions](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/experimental/micro/README.md#building-for-the-blue-pill-stm32f103-using-make)
-[Ambiq Micro Apollo3Blue EVB using Make](https://ambiqmicro.com/apollo-ultra-low-power-mcus/)  | -                                                                              | -                                                                              | [Instructions](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/experimental/micro/README.md#building-for-ambiq-micro-apollo3blue-evb-using-make)
-[Generic Keil uVision Projects](http://www2.keil.com/mdk5/uvision/)                            | -                                                                              | [Download](https://drive.google.com/open?id=1Lw9rsdquNKObozClLPoE5CTJLuhfh5mV) | -
-[Eta Compute ECM3531 EVB](https://etacompute.com/)                                             | -                                                                              | -                                                                              | [Instructions](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/experimental/micro/README.md#Building-for-the-Eta-Compute-ECM3531-EVB-using-Make)
-
-If your device is not yet supported, it may not be difficult add support. You
-can learn about that process in
-[README.md](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/experimental/micro/README.md#how-to-port-tensorflow-lite-micro-to-a-new-platform).
-
-### Portable reference code
-
-If you don't have a particular microcontroller platform in mind yet, or just
-want to try out the code before beginning porting, the easiest way to begin is
-by
-[downloading the platform-agnostic reference code](https://drive.google.com/open?id=1cawEQAkqquK_SO4crReDYqf_v7yAwOY8).
-
-There is a series of folders inside the archive, with each one containing just
-the source files you need to build one binary. There is a simple Makefile for
-each folder, but you should be able to load the files into almost any IDE and
-build them. There is also a [Visual Studio Code](https://code.visualstudio.com/)
-project file already set up, so you can easily explore the code in a
-cross-platform IDE.
-
-## Goals
-
-Our design goals are to make the framework readable, easy to modify,
-well-tested, easy to integrate, and fully compatible with TensorFlow Lite via a
-consistent file schema, interpreter, API, and kernel interface.
-
-You can read more about the design in
-[goals and tradeoffs](https://github.com/tensorflow/tensorflow/tree/master/tensorflow/lite/experimental/micro#goals).
-
-## Limitations
-
-TensorFlow Lite for Microcontrollers is designed for the specific constraints of
-microcontroller development. If you are working on more powerful devices (for
-example, an embedded Linux device like the Raspberry Pi), the standard
-TensorFlow Lite framework might be easier to integrate.
-
-The following limitations should be considered:
-
-*   Support for a [limited subset](build_convert.md#operation-support) of
-    TensorFlow operations
-*   Support for a limited set of devices
-*   Low-level C++ API requiring manual memory management