Update the documentation for the experimental converter
PiperOrigin-RevId: 296519090
Change-Id: I054b986e9aefd9d0cfa93e9a15d47dc3024dade8
diff --git a/tensorflow/lite/g3doc/convert/cmdline.md b/tensorflow/lite/g3doc/convert/cmdline.md
index 2d89c04..a6594d4 100644
--- a/tensorflow/lite/g3doc/convert/cmdline.md
+++ b/tensorflow/lite/g3doc/convert/cmdline.md
@@ -70,3 +70,47 @@
--saved_model_dir=/tmp/mobilenet_saved_model \
--output_file=/tmp/mobilenet.tflite
```
+
+### Custom ops in the new converter
+
+There is a behavior change in how models containing
+[custom ops](https://www.tensorflow.org/lite/guide/ops_custom) (those for which
+users previously set --allow\_custom\_ops) are handled by the
+[new converter](https://github.com/tensorflow/tensorflow/blob/917ebfe5fc1dfacf8eedcc746b7989bafc9588ef/tensorflow/lite/python/lite.py#L81).
+
+**Built-in TensorFlow op**
+
+If you are converting a model with a built-in TensorFlow op that does not exist
+in TensorFlow Lite, you should set the --allow\_custom\_ops argument (same as
+before), as explained [here](https://www.tensorflow.org/lite/guide/ops_custom).
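+
+For example, extending the conversion command shown above (the model paths are
+illustrative):
+
+```
+tflite_convert \
+  --saved_model_dir=/tmp/mobilenet_saved_model \
+  --output_file=/tmp/mobilenet.tflite \
+  --allow_custom_ops
+```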
+
+**Custom op in TensorFlow**
+
+If you are converting a model with a custom TensorFlow op, it is recommended
+that you write a [TensorFlow kernel](https://www.tensorflow.org/guide/create_op)
+and [TensorFlow Lite kernel](https://www.tensorflow.org/lite/guide/ops_custom).
+This ensures that the model works end-to-end, in both TensorFlow and
+TensorFlow Lite. This also requires setting the --allow\_custom\_ops argument.
+
+**Advanced custom op usage (not recommended)**
+
+If the above is not possible, you can still convert a TensorFlow model
+containing a custom op without a corresponding kernel. To do so, you need to
+have the
+[OpDef](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/core/framework/op_def.proto)
+of the custom op registered in the TensorFlow global op registry. This ensures
+that the TensorFlow model is valid (i.e. loadable by the TensorFlow runtime).
+
+If the custom op is not part of the global TensorFlow op registry, then the
+corresponding OpDef needs to be specified via the --custom\_opdefs flag. This is
+a list of OpDef protos, serialized as strings, that need to be additionally
+registered. Below is an example of a TFLiteAwesomeCustomOp with 2 inputs, 1
+output, and 2 attributes:
+
+```
+--custom_opdefs="name: 'TFLiteAwesomeCustomOp' input_arg: { name: 'InputA'
+type: DT_FLOAT } input_arg: { name: 'InputB' type: DT_FLOAT }
+output_arg: { name: 'Output' type: DT_FLOAT } attr : { name: 'Attr1' type:
+'float'} attr : { name: 'Attr2' type: 'list(float)'}"
+```
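+
+Putting it together, a full conversion command might look as follows (the model
+paths are illustrative):
+
+```
+tflite_convert \
+  --saved_model_dir=/tmp/awesome_saved_model \
+  --output_file=/tmp/awesome.tflite \
+  --allow_custom_ops \
+  --custom_opdefs="name: 'TFLiteAwesomeCustomOp' \
+input_arg: { name: 'InputA' type: DT_FLOAT } \
+input_arg: { name: 'InputB' type: DT_FLOAT } \
+output_arg: { name: 'Output' type: DT_FLOAT } \
+attr : { name: 'Attr1' type: 'float'} attr : { name: 'Attr2' type: 'list(float)'}"
+```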
diff --git a/tensorflow/lite/g3doc/convert/python_api.md b/tensorflow/lite/g3doc/convert/python_api.md
index f9f79fb..4c22d6a 100644
--- a/tensorflow/lite/g3doc/convert/python_api.md
+++ b/tensorflow/lite/g3doc/convert/python_api.md
@@ -180,3 +180,47 @@
[pip](https://www.tensorflow.org/install/pip) (recommended) or
[Docker](https://www.tensorflow.org/install/docker), or
[build the pip package from source](https://www.tensorflow.org/install/source).
+
+### Custom ops in the experimental new converter
+
+There is a behavior change in how models containing
+[custom ops](https://www.tensorflow.org/lite/guide/ops_custom) (those for which
+users previously set allow\_custom\_ops) are handled by the
+[new converter](https://github.com/tensorflow/tensorflow/blob/917ebfe5fc1dfacf8eedcc746b7989bafc9588ef/tensorflow/lite/python/lite.py#L81).
+
+**Built-in TensorFlow op**
+
+If you are converting a model with a built-in TensorFlow op that does not exist
+in TensorFlow Lite, you should set the allow\_custom\_ops attribute (same as
+before), as explained [here](https://www.tensorflow.org/lite/guide/ops_custom).
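+
+For example, a minimal sketch (the SavedModel path is illustrative):
+
+```
+import tensorflow as tf
+
+converter = tf.lite.TFLiteConverter.from_saved_model("/tmp/mobilenet_saved_model")
+# Allow ops without a TensorFlow Lite implementation to be emitted as custom ops.
+converter.allow_custom_ops = True
+tflite_model = converter.convert()
+```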
+
+**Custom op in TensorFlow**
+
+If you are converting a model with a custom TensorFlow op, it is recommended
+that you write a [TensorFlow kernel](https://www.tensorflow.org/guide/create_op)
+and [TensorFlow Lite kernel](https://www.tensorflow.org/lite/guide/ops_custom).
+This ensures that the model works end-to-end, in both TensorFlow and
+TensorFlow Lite. This also requires setting the allow\_custom\_ops attribute.
+
+**Advanced custom op usage (not recommended)**
+
+If the above is not possible, you can still convert a TensorFlow model
+containing a custom op without a corresponding kernel. To do so, you need to
+have the
+[OpDef](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/core/framework/op_def.proto)
+of the custom op registered in the TensorFlow global op registry. This ensures
+that the TensorFlow model is valid (i.e. loadable by the TensorFlow runtime).
+
+If the custom op is not part of the global TensorFlow op registry, then the
+corresponding OpDef needs to be specified via the custom\_opdefs attribute. This
+is a list of OpDef protos, serialized as strings, that need to be additionally
+registered. Below is an example of a TFLiteAwesomeCustomOp with 2 inputs, 1
+output, and 2 attributes:
+
+```
+converter.custom_opdefs = "name: 'TFLiteAwesomeCustomOp' input_arg: { name: 'InputA'
+type: DT_FLOAT } input_arg: { name: 'InputB' type: DT_FLOAT }
+output_arg: { name: 'Output' type: DT_FLOAT } attr : { name: 'Attr1' type:
+'float'} attr : { name: 'Attr2' type: 'list(float)'}"
+```
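+
+Putting it together, a sketch of the full Python flow (the model path and the
+op definition are illustrative):
+
+```
+import tensorflow as tf
+
+converter = tf.lite.TFLiteConverter.from_saved_model("/tmp/awesome_saved_model")
+converter.allow_custom_ops = True
+converter.custom_opdefs = (
+    "name: 'TFLiteAwesomeCustomOp' "
+    "input_arg: { name: 'InputA' type: DT_FLOAT } "
+    "input_arg: { name: 'InputB' type: DT_FLOAT } "
+    "output_arg: { name: 'Output' type: DT_FLOAT } "
+    "attr : { name: 'Attr1' type: 'float'} "
+    "attr : { name: 'Attr2' type: 'list(float)'}")
+tflite_model = converter.convert()
+```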