
# The TensorFlow Lite Optimizing Converter

The most typical use of the TensorFlow Lite Optimizing Converter is converting a TensorFlow GraphDef to the TensorFlow Lite format, but it supports much more than that.

## Usage documentation

Usage information is given in these documents:

## Design documentation

Coming soon!

## Where the converter fits in the TensorFlow landscape

In the typical case, an application developer uses TensorFlow to design and train a model, then runs TensorFlow's freeze_graph.py to produce a frozen inference graph. The converter then turns that frozen GraphDef into a TensorFlow Lite flatbuffer file, which is shipped to client devices, where the TensorFlow Lite interpreter runs it on-device. This is represented in the following diagram:
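As a rough sketch of the last two steps of this pipeline, the conversion can be driven from the command line. The file names, node/array names, and input shape below are placeholders for your own model's values, not part of any real model:

```shell
# Step 1: freeze the trained graph (fold checkpoint variables into constants).
python tensorflow/python/tools/freeze_graph.py \
  --input_graph=/tmp/my_model/graph.pbtxt \
  --input_checkpoint=/tmp/my_model/model.ckpt \
  --output_node_names=output \
  --output_graph=/tmp/frozen_graph.pb

# Step 2: convert the frozen GraphDef to a TensorFlow Lite flatbuffer.
bazel run //tensorflow/contrib/lite/toco:toco -- \
  --input_file=/tmp/frozen_graph.pb \
  --output_file=/tmp/model.tflite \
  --input_format=TENSORFLOW_GRAPHDEF \
  --output_format=TFLITE \
  --input_arrays=input \
  --output_arrays=output \
  --input_shapes=1,224,224,3
```

The resulting model.tflite file is what gets bundled into the client application for the TensorFlow Lite interpreter to load.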

*(Diagram: TensorFlow model → freeze_graph.py → frozen GraphDef → converter → TensorFlow Lite flatbuffer → on-device TensorFlow Lite interpreter)*