Unify %define-lines and %section directives

Also add %insert-indented for the specification file.

Inserting an indented section inside another section is convenient when
working with nested data structures, for example:

    types.spec:
        %section OperandLifeTime
        enum OperandLifeTime { ... };
        %/section

        %section Operand
        struct Operand {
        %kind canonical
        %insert-indented 4 OperandLifeTime
        %/kind
            ...
        };
        %/section

    1.0/types.t:
        %insert OperandLifeTime
        %insert Operand

    Types.t (canonical):
        %insert Operand
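
With the canonical kind selected, %insert-indented prefixes every non-empty
inserted line with 4 spaces, so Types.t is expected to expand to roughly the
following (a sketch; the elided members are shown as "..."):

    Types.t output (canonical, sketch):
        struct Operand {
            enum OperandLifeTime { ... };
            ...
        };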

After this change, the following constructs are allowed:
- section within conditional
- conditional within section

The following constructs are still disallowed:
- section within conditional within section
- conditional within section within conditional
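
Both newly allowed forms are exercised by this change: the Operand example
above nests a %kind inside a %section, and types.spec now defines sections
such as AVAIL1 inside %kind blocks. A minimal sketch of each (SectionName is
a placeholder name):

    Section within conditional:
        %kind canonical
        %section SectionName
        ...
        %/section
        %/kind

    Conditional within section:
        %section SectionName
        %kind canonical
        ...
        %/kind
        %/section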

Bug: 160667417
Test: ./generate_api.sh
Change-Id: I6e7f548e583ae302477503764ac1efb63bb71f60
diff --git a/tools/api/README.md b/tools/api/README.md
index a1b31c1..94157c0 100644
--- a/tools/api/README.md
+++ b/tools/api/README.md
@@ -76,15 +76,14 @@
 
 Certain regions can enclose certain other regions, but this is very limited:
 
-* A conditional region can enclose a definition region.
-* A section region can enclose a conditional region or a definition region.
+* A conditional region can enclose a section region.
+* A section region can enclose a conditional region.
 
 Equivalently:
 
 * A conditional region can be enclosed by a section region.
-* A definition region can be enclosed by a conditional region or a section
-  region.
-  
+* A section region can be enclosed by a conditional region.
+
 #### null region
 
 A *null region* is a sequence of lines that is not part of any other region.
@@ -106,23 +105,15 @@
 directive are ignored *except* that even ignored directives undergo some level
 of syntactic and semantic checking.
 
-#### definition region
-
-A *definition region* is a sequence of lines immediately preceded by the
-`%define-lines *name*` directive and immediately followed by the
-`%/define-lines` directive.  Every non-comment line in the sequence undergoes
-macro substitution, and the resulting lines are associated with the region name.
-They can later be added to a section region with the `%insert-lines` directive.
-
-This can be thought of as a multi-line macro facility.
-
 #### section region
 
 A *section region* is a sequence of lines immediately preceded by the `%section
 *name*` directive and immediately followed by the `%/section` directive.  Every
-non-comment line in the sequence undergoes macro substitution, and the resulting
-lines are associated with the section name.  They can be inserted into the
-generated output file as directed by the template file's `%insert` and
+line in the sequence that does not begin with `%` undergoes macro substitution,
+and the resulting lines are associated with the section name.  They can be
+inserted into the generated output file as directed by the template file's
+`%insert` and `%insert-indented` directives.  They can be added to another
+section region with the specification file's `%insert` and
 `%insert-indented` directives.
 
 This is the mechanism by which a specification file contributes text to the
@@ -138,10 +129,10 @@
 
   %define test  this body begins and ends with a space character 
 
-Macro substitution occurs within a definition region or a section region: a
-substring `%{*name*}` is replaced with the corresponding *body*.  Macro
-substitution is *not* recursive: A substring `%{*name2*}` in *body* will not
-undergo macro substitution, except as discussed for *macro arguments* below.
+Macro substitution occurs within a section region: a substring `%{*name*}` is
+replaced with the corresponding *body*.  Macro substitution is *not* recursive:
+A substring `%{*name2*}` in *body* will not undergo macro substitution, except
+as discussed for *macro arguments* below.
 
 Permitted in regions: null, conditional, section
 
@@ -152,31 +143,37 @@
 substring of the form `%{argnum}` will be replaced by the corresponding argument
 from *arglist*.  For example, if the definition is
 
-  %define test second is %{2}, first is %{1}
-  
+```
+%define test second is %{2}, first is %{1}
+```
+
 then the macro invocation
 
-  %{test alpha beta}
-  
+```
+%{test alpha beta}
+```
+
 is expanded to
 
-  second is beta, first is alpha
+```
+second is beta, first is alpha
+```
 
 The only check on the number of arguments supplied at macro invocation time is
 that there must be at least as many arguments as the highest `%{argnum}`
 reference in the macro body.  In the example above, `%{test alpha}` would be an
 error, but `%{test alpha beta gamma}` would not.
 
-#### `%define-lines *name*`, `%/define-lines`
+#### `%insert *name*`
 
-`%define-lines *name*` creates a *definition region* terminated by
-`%/define-lines`.
+Adds all lines from the named section region to the current section region.
 
-Permitted in regions: null, conditional, section
+Permitted in regions: section
 
-#### `%insert-lines *name*`
+#### `%insert-indented *count* *name*`
 
-Adds all lines from the named definition region to the current section region.
+Similar to `%insert *name*`, but each non-empty added line is prefixed
+with *count* space characters.  *count* must be a non-negative integer.
 
 Permitted in regions: section
 
@@ -218,4 +215,4 @@
 
 `%section *name*` creates a *section region* terminated by `%/section`.
 
-Permitted in regions: null
+Permitted in regions: null, conditional
diff --git a/tools/api/generate_api.py b/tools/api/generate_api.py
index e04abbb..21fe25f 100755
--- a/tools/api/generate_api.py
+++ b/tools/api/generate_api.py
@@ -48,18 +48,16 @@
     super(Specification, self).__init__(filename)
     self.sections = dict() # key is section name, value is array of strings (lines) in the section
     self.section = None # name of current %section
+    self.section_start = None # first line number of current %section
     self.defmacro = dict() # key is macro name, value is string (body of macro)
-    self.deflines = dict() # key is definition name, value is array of strings (lines) in the definition
-    self.deflines_key = None # name of current %define-lines
     self.kind = kind
     self.kinds = None # remember %define-kinds
     self.conditional = self.UNCONDITIONAL
+    self.conditional_start = None # first line number of current %kind
 
   def finish(self):
     assert self.section is None, "\"%section " + self.section + \
       "\" not terminated by end of specification file"
-    assert self.deflines_key is None, "\"%define-lines " + self.deflines_key + \
-      "\" not terminated by end of specification file"
     assert self.conditional is self.UNCONDITIONAL, "%kind not terminated by end of specification file"
 
   def macro_substitution(self):
@@ -137,9 +135,10 @@
         definition, etc.
     """
 
-    DIRECTIVES = ["%define", "%define-kinds", "%define-lines", "%/define-lines",
-                  "%else", "%insert-lines", "%kind", "%/kind", "%section",
-                  "%/section"]
+    DIRECTIVES = [
+        "%define", "%define-kinds", "%else", "%insert", "%insert-indented",
+        "%kind", "%/kind", "%section", "%/section"
+    ]
 
     # Common typos: /%directive, \%directive
     matchbad = re.search("^[/\\\]%(\S*)", self.line)
@@ -158,49 +157,32 @@
       if not directive in DIRECTIVES:
         assert False, "Unknown directive \"" + directive + "\" on " + self.context()
 
-      # Check for end of multiline macro
-      match = re.search("^%/define-lines\s*(\S*)", self.line)
-      if match:
-        assert match[1] == "", "Malformed directive \"%/define-lines\" on " + self.context()
-        assert not self.deflines_key is None, "%/define-lines with no matching %define-lines on " + \
-          self.context()
-        self.deflines_key = None
-        return
-
-      # Directives are forbidden within multiline macros
-      assert self.deflines_key is None, "Directive is not permitted in definition of \"" + \
-        self.deflines_key + "\" at " + self.context()
-
-      # Check for define (multi line)
-      match = re.search("^%define-lines\s+(\S+)\s*$", self.line)
-      if match:
-        key = match[1]
-        if self.conditional is self.CONDITIONAL_OFF:
-          self.deflines_key = ""
-          return
-        assert not key in self.deflines, "Duplicate definition of \"" + key + "\" on " + self.context()
-        self.deflines[key] = []
-        self.deflines_key = key
-        # Non-directive lines will be added to self.deflines[key] as they are read
-        # until we see %/define-lines
-        return
-
       # Check for insert
-      match = re.search("^%insert-lines\s+(\S+)\s*$", self.line)
+      match = re.search("^%insert(?:-indented\s+(\S+))?\s+(\S+)\s*$", self.line)
       if match:
-        assert not self.section is None, "%insert-lines outside %section at " + self.context()
-        key = match[1]
-        assert key in self.deflines, "Missing definition of lines \"" + key + "\" at " + self.context()
+        directive = self.line.split(" ", 1)[0]
+        assert not self.section is None, directive + " outside %section at " + self.context()
+        count = match[1] or "0"
+        key = match[2]
+        assert re.match("^\d+$", count), "Bad count \"" + count + "\" on " + self.context()
+        assert key in self.sections, "Unknown section \"" + key + "\" on " + self.context()
+        assert key != self.section, "Cannot insert section \"" + key + "\" into itself on " + self.context()
         if self.conditional is self.CONDITIONAL_OFF:
           return
-        self.sections[self.section].extend(self.deflines[key]);
+        indent = " " * int(count)
+        self.sections[self.section].extend(
+            (indent + line if line.rstrip("\n") else line)
+            for line in self.sections[key])
         return
 
       # Check for start of section
       match = re.search("^%section\s+(\S+)\s*$", self.line)
       if match:
         assert self.section is None, "Nested %section is forbidden at " + self.context()
-        assert self.conditional is self.UNCONDITIONAL, "%section within %kind is forbidden at " + self.context()
+        self.section_start = self.lineno
+        if self.conditional is self.CONDITIONAL_OFF:
+          self.section = ""
+          return
         key = match[1]
         assert not key in self.sections, "Duplicate definition of \"" + key + "\" on " + self.context()
         self.sections[key] = []
@@ -212,24 +194,30 @@
       # Check for end of section
       if re.search("^%/section\s*$", self.line):
         assert not self.section is None, "%/section with no matching %section on " + self.context()
-        assert self.conditional is self.UNCONDITIONAL # can't actually happen
+        assert self.conditional_start is None or self.conditional_start < self.section_start, \
+            "%kind not terminated by end of %section on " + self.context()
         self.section = None
+        self.section_start = None
         return
 
       # Check for start of kind
       match = re.search("^%kind\s+((\S+)(\s+\S+)*)\s*$", self.line)
       if match:
-        assert self.conditional is self.UNCONDITIONAL, "%kind is nested at " + self.context()
+        assert self.conditional is self.UNCONDITIONAL, \
+            "Nested %kind is forbidden at " + self.context()
         patterns = match[1]
         if self.match_kind(patterns):
           self.conditional = self.CONDITIONAL_ON
         else:
           self.conditional = self.CONDITIONAL_OFF
+        self.conditional_start = self.lineno
         return
 
       # Check for complement of kind (else)
       if re.search("^%else\s*$", self.line):
         assert not self.conditional is self.UNCONDITIONAL, "%else without matching %kind on " + self.context()
+        assert self.section_start is None or self.section_start < self.conditional_start, \
+            "%section not terminated by %else on " + self.context()
         if self.conditional == self.CONDITIONAL_ON:
           self.conditional = self.CONDITIONAL_OFF
         else:
@@ -256,7 +244,10 @@
       # Check for end of kind
       if re.search("^%/kind\s*$", self.line):
         assert not self.conditional is self.UNCONDITIONAL, "%/kind without matching %kind on " + self.context()
+        assert self.section_start is None or self.section_start < self.conditional_start, \
+            "%section not terminated by end of %kind on " + self.context()
         self.conditional = self.UNCONDITIONAL
+        self.conditional_start = None
         return
 
       # Check for kinds definition
@@ -290,8 +281,6 @@
 
     if self.conditional is self.CONDITIONAL_OFF:
       pass
-    elif not self.deflines_key is None:
-      self.deflines[self.deflines_key].append(self.macro_substitution())
     elif self.section is None:
       # Treat as comment
       pass
@@ -323,7 +312,7 @@
       if match:
         count = match[1] or "0"
         key = match[2]
-        assert re.match("\d+", count), "Bad count \"" + count + "\" on " + self.context()
+        assert re.match("^\d+$", count), "Bad count \"" + count + "\" on " + self.context()
         assert key in specification.sections, "Unknown section \"" + key + "\" on " + self.context()
         indent = " " * int(count)
         for line in specification.sections[key]:
@@ -362,7 +351,6 @@
   specification.read()
   if (args.verbose):
     print(specification.defmacro)
-    print(specification.deflines)
 
   # Read the template
   template = Template(args.template, specification)
@@ -375,6 +363,5 @@
 # TODO: Write test cases for malformed specification and template files
 # TODO: Find a cleaner way to handle conditionals (%kind) or nesting in general;
 #       maybe add support for more nesting
-# TODO: Unify section/define-lines, rather than having two kinds of text regions?
-#       Could we take this further and do away with the distinction between a
-#       specification file and a template file, and add a %include directive?
+# TODO: Could we do away with the distinction between a specification file and a
+#       template file and add a %include directive?
diff --git a/tools/api/types.spec b/tools/api/types.spec
index 8c93dc3..03409ec 100644
--- a/tools/api/types.spec
+++ b/tools/api/types.spec
@@ -21,35 +21,35 @@
 %define or_1.2 or {@link ANEURALNETWORKS_%{1}}
 %define NDK_if_specified  (if specified)
 %define otherOperandParameters other operand parameters
-%define-lines AVAIL1
+%section AVAIL1
      *
      * Available since NNAPI feature level 1.
-%/define-lines
-%define-lines AVAIL1Short
+%/section
+%section AVAIL1Short
  *
  * Available since NNAPI feature level 1.
-%/define-lines
-%define-lines AVAIL2
+%/section
+%section AVAIL2
      *
      * Available since NNAPI feature level 2.
-%/define-lines
-%define-lines AVAIL3
+%/section
+%section AVAIL3
      *
      * Available since NNAPI feature level 3.
-%/define-lines
-%define-lines AVAIL4
+%/section
+%section AVAIL4
      *
      * Available since NNAPI feature level 4.
-%/define-lines
-%define-lines OutputState
+%/section
+%section OutputState
      *
      * Important: As of NNAPI feature level 3, there is no way to get the output state tensors out
      * and NNAPI does not maintain internal states. This operator does not support the usage pattern
      * in which multiple cells are chained and state tensors are propagated.
-%/define-lines
-%define-lines PaddingCodeValues
+%/section
+%section PaddingCodeValues
      *      {@link PaddingCode} values.
-%/define-lines
+%/section
 %/kind
 
 %kind hal*
@@ -66,35 +66,35 @@
 %define NNAPILevel4 HAL version 1.3
 %define NDK_if_specified
 %define otherOperandParameters extraParams
-%define-lines AVAIL1
-%/define-lines
-%define-lines AVAIL1Short
-%/define-lines
-%define-lines AVAIL2
-%/define-lines
-%define-lines AVAIL3
-%/define-lines
-%define-lines AVAIL4
-%/define-lines
-%define-lines PaddingCodeValues
+%section AVAIL1
+%/section
+%section AVAIL1Short
+%/section
+%section AVAIL2
+%/section
+%section AVAIL3
+%/section
+%section AVAIL4
+%/section
+%section PaddingCodeValues
      *      following values: {0 (NONE), 1 (SAME), 2 (VALID)}.
-%/define-lines
-%define-lines OutputState
-%/define-lines
+%/section
+%section OutputState
+%/section
 %/kind
 
 %kind hal_1.0 hal_1.1
 %define DeclareOperation %{1} = %{2}
 %define BeforeNNAPILevel3For For
 %define or_1.2
-%define-lines NHWC_NCHW
+%section NHWC_NCHW
      * Supported tensor rank: 4, with "NHWC" (i.e., Num_samples, Height, Width,
      * and Channels) data layout.
-%/define-lines
-%define-lines GenericZero
-%/define-lines
-%define-lines ZeroBatchesNNAPILevel3
-%/define-lines
+%/section
+%section GenericZero
+%/section
+%section ZeroBatchesNNAPILevel3
+%/section
 %define DeclareOperation_1.2 @@@NOT_DEFINED@@@
 %define DeclareOperation_1.3 @@@NOT_DEFINED@@@
 %/kind
@@ -117,22 +117,22 @@
 %/kind
 
 %kind ndk hal_1.2 hal_1.3
-%define-lines NHWC_NCHW
+%section NHWC_NCHW
      * Supported tensor rank: 4, with "NHWC" or "NCHW" data layout.
      * With the default data layout NHWC, the data is stored in the order of:
      * [batch, height, width, channels]. Alternatively, the data layout could
      * be NCHW, the data storage order of: [batch, channels, height, width].
      * NCHW is supported since %{NNAPILevel3}.
-%/define-lines
-%define-lines GenericZero
+%/section
+%section GenericZero
      * Since %{NNAPILevel3}, generic zero-sized input tensor is supported. Zero
      * dimension is only compatible with 0 or 1. The size of the output
      * dimension is zero if either of corresponding input dimension is zero.
      *
-%/define-lines
-%define-lines ZeroBatchesNNAPILevel3
+%/section
+%section ZeroBatchesNNAPILevel3
      *      Since %{NNAPILevel3}, zero batches is supported for this tensor.
-%/define-lines
+%/section
 %/kind
 
 %kind ndk hal_1.3
@@ -186,7 +186,7 @@
  * types. Most used are {@link %{OperandTypeLinkPfx}TENSOR_FLOAT32},
  * {@link %{OperandTypeLinkPfx}TENSOR_QUANT8_ASYMM},
  * and {@link %{OperandTypeLinkPfx}INT32}.
-%insert-lines AVAIL1Short
+%insert AVAIL1Short
  */
 %/section
 
@@ -225,7 +225,7 @@
  * Operation types.
  *
  * The type of an operation in a model.
-%insert-lines AVAIL1Short
+%insert AVAIL1Short
  */
 %/section
 
@@ -251,7 +251,7 @@
      *     input2.dimension = {5, 4, 3, 1}
      *     output.dimension = {5, 4, 3, 2}
      *
-%insert-lines GenericZero
+%insert GenericZero
      * Supported tensor {@link %{OperandType}}:
 %kind ndk hal_1.2+
      * * {@link %{OperandTypeLinkPfx}TENSOR_FLOAT16} (since %{NNAPILevel3})
@@ -295,7 +295,7 @@
      *      For a {@link %{OperandTypeLinkPfx}TENSOR_QUANT8_ASYMM} tensor,
      *      the scale and zeroPoint can be different from inputs' scale and zeroPoint.
 %/kind
-%insert-lines AVAIL1
+%insert AVAIL1
      */
     %{DeclareOperation ADD 0},
 
@@ -322,14 +322,14 @@
      * * {@link %{OperandTypeLinkPfx}TENSOR_QUANT8_ASYMM_SIGNED} (since %{NNAPILevel4})
 %/kind
      *
-%insert-lines NHWC_NCHW
+%insert NHWC_NCHW
      *
      * Both explicit padding and implicit padding are supported.
      *
      * Inputs (explicit padding):
      * * 0: A 4-D tensor, of shape [batches, height, width, depth], specifying
      *      the input.
-%insert-lines ZeroBatchesNNAPILevel3
+%insert ZeroBatchesNNAPILevel3
      * * 1: An {@link %{OperandTypeLinkPfx}INT32} scalar, specifying the padding on
      *      the left, in the ‘width’ dimension.
      * * 2: An {@link %{OperandTypeLinkPfx}INT32} scalar, specifying the padding on
@@ -358,10 +358,10 @@
      * Inputs (implicit padding):
      * * 0: A 4-D tensor, of shape [batches, height, width, depth], specifying
      *      the input.
-%insert-lines ZeroBatchesNNAPILevel3
+%insert ZeroBatchesNNAPILevel3
      * * 1: An {@link %{OperandTypeLinkPfx}INT32} scalar, specifying the implicit
      *      padding scheme, has to be one of the
-%insert-lines PaddingCodeValues
+%insert PaddingCodeValues
      * * 2: An {@link %{OperandTypeLinkPfx}INT32} scalar, specifying the stride when
      *      walking through input in the ‘width’ dimension.
      * * 3: An {@link %{OperandTypeLinkPfx}INT32} scalar, specifying the stride when
@@ -390,7 +390,7 @@
      *      For a {@link %{OperandTypeLinkPfx}TENSOR_QUANT8_ASYMM} tensor,
      *      the scale and zeroPoint must be the same as input0.
 %/kind
-%insert-lines AVAIL1
+%insert AVAIL1
      */
     %{DeclareOperation AVERAGE_POOL_2D 1},
 
@@ -452,7 +452,7 @@
      *      For a {@link %{OperandTypeLinkPfx}TENSOR_QUANT8_ASYMM_SIGNED} tensor,
      *      the scale and zeroPoint values can be different from input tensors.
 %/kind
-%insert-lines AVAIL1
+%insert AVAIL1
      */
     %{DeclareOperation CONCATENATION 2},
 
@@ -510,14 +510,14 @@
      * * * each value scaling is separate and equal to input.scale * filter.scales[channel]).
      *
 %/kind
-%insert-lines NHWC_NCHW
+%insert NHWC_NCHW
      *
      * Both explicit padding and implicit padding are supported.
      *
      * Inputs (explicit padding):
      * * 0: A 4-D tensor, of shape [batches, height, width, depth_in],
      *      specifying the input.
-%insert-lines ZeroBatchesNNAPILevel3
+%insert ZeroBatchesNNAPILevel3
      * * 1: A 4-D tensor, of shape
      *      [depth_out, filter_height, filter_width, depth_in], specifying the
      *      filter.
@@ -577,7 +577,7 @@
      * Inputs (implicit padding):
      * * 0: A 4-D tensor, of shape [batches, height, width, depth_in],
      *      specifying the input.
-%insert-lines ZeroBatchesNNAPILevel3
+%insert ZeroBatchesNNAPILevel3
      * * 1: A 4-D tensor, of shape
      *      [depth_out, filter_height, filter_width, depth_in], specifying the
      *      filter.
@@ -606,7 +606,7 @@
 %/kind
      * * 3: An {@link %{OperandTypeLinkPfx}INT32} scalar, specifying the implicit
      *      padding scheme, has to be one of the
-%insert-lines PaddingCodeValues
+%insert PaddingCodeValues
      * * 4: An {@link %{OperandTypeLinkPfx}INT32} scalar, specifying the stride when
      *      walking through input in the ‘width’ dimension.
      * * 5: An {@link %{OperandTypeLinkPfx}INT32} scalar, specifying the stride when
@@ -636,7 +636,7 @@
      *      %{BeforeNNAPILevel3For} output tensor of
      *      {@link %{OperandTypeLinkPfx}TENSOR_QUANT8_ASYMM}, the following condition must
      *      be satisfied: output_scale > input_scale * filter_scale
-%insert-lines AVAIL1
+%insert AVAIL1
      */
     %{DeclareOperation CONV_2D 3},
 
@@ -698,7 +698,7 @@
      * * * each value scaling is separate and equal to input.scale * filter.scales[channel]).
      *
 %/kind
-%insert-lines NHWC_NCHW
+%insert NHWC_NCHW
      *
      * Both explicit padding and implicit padding are supported.
      *
@@ -786,7 +786,7 @@
 %/kind
      * * 3: An {@link %{OperandTypeLinkPfx}INT32} scalar, specifying the implicit
      *      padding scheme, has to be one of the
-%insert-lines PaddingCodeValues
+%insert PaddingCodeValues
      * * 4: An {@link %{OperandTypeLinkPfx}INT32} scalar, specifying the stride when
      *      walking through input in the ‘width’ dimension.
      * * 5: An {@link %{OperandTypeLinkPfx}INT32} scalar, specifying the stride when
@@ -818,7 +818,7 @@
      *      output tensor of {@link %{OperandTypeLinkPfx}TENSOR_QUANT8_ASYMM},
      *      the following condition must be satisfied:
      *      output_scale > input_scale * filter_scale
-%insert-lines AVAIL1
+%insert AVAIL1
      */
     %{DeclareOperation DEPTHWISE_CONV_2D 4},
 
@@ -847,7 +847,7 @@
      * * {@link %{OperandTypeLinkPfx}TENSOR_QUANT8_ASYMM_SIGNED} (since %{NNAPILevel4})
 %/kind
      *
-%insert-lines NHWC_NCHW
+%insert NHWC_NCHW
      *
      * Inputs:
      * * 0: A 4-D tensor, of shape [batches, height, width, depth_in],
@@ -872,7 +872,7 @@
      *      For a {@link %{OperandTypeLinkPfx}TENSOR_QUANT8_ASYMM} tensor,
      *      the scale and zeroPoint must be the same as input0.
 %/kind
-%insert-lines AVAIL1
+%insert AVAIL1
      */
     %{DeclareOperation DEPTH_TO_SPACE 5},
 
@@ -909,7 +909,7 @@
      *
      * Outputs:
      * * 0: A tensor with the same shape as input0.
-%insert-lines AVAIL1
+%insert AVAIL1
      */
     %{DeclareOperation DEQUANTIZE 6},
 
@@ -965,7 +965,7 @@
      *      For a {@link %{OperandTypeLinkPfx}TENSOR_QUANT8_ASYMM} tensor,
 %/kind
      *      the scale and zeroPoint must be the same as input1.
-%insert-lines AVAIL1
+%insert AVAIL1
      */
     %{DeclareOperation EMBEDDING_LOOKUP 7},
 
@@ -986,7 +986,7 @@
      * Outputs:
      * * 0: The output tensor, of the same {@link %{OperandType}} and dimensions as
      *      the input tensor.
-%insert-lines AVAIL1
+%insert AVAIL1
      */
     %{DeclareOperation FLOOR 8},
 
@@ -1043,7 +1043,7 @@
      * * 0: The output tensor, of shape [batch_size, num_units]. %{BeforeNNAPILevel3For}
      *      output tensor of {@link %{OperandTypeLinkPfx}TENSOR_QUANT8_ASYMM}, the following
      *      condition must be satisfied: output_scale > input_scale * filter_scale.
-%insert-lines AVAIL1
+%insert AVAIL1
      */
     %{DeclareOperation FULLY_CONNECTED 9},
 
@@ -1101,7 +1101,7 @@
      *      Stored as {@link %{OperandTypeLinkPfx}TENSOR_QUANT8_ASYMM} with offset 0
      *      and scale 1.0f.
      *      A non-zero byte represents True, a hit. A zero indicates otherwise.
-%insert-lines AVAIL1
+%insert AVAIL1
      */
     %{DeclareOperation HASHTABLE_LOOKUP 10},
 
@@ -1166,7 +1166,7 @@
      *      the result is undefined. Since %{NNAPILevel4}, if the elements along an axis
      *      are all zeros, the result is logical zero.
 %/kind
-%insert-lines AVAIL1
+%insert AVAIL1
      */
     %{DeclareOperation L2_NORMALIZATION 11},
 
@@ -1188,14 +1188,14 @@
 %/kind
      * * {@link %{OperandTypeLinkPfx}TENSOR_FLOAT32}
      *
-%insert-lines NHWC_NCHW
+%insert NHWC_NCHW
      *
      * Both explicit padding and implicit padding are supported.
      *
      * Inputs (explicit padding):
      * * 0: A 4-D tensor, of shape [batches, height, width, depth], specifying
      *      the input.
-%insert-lines ZeroBatchesNNAPILevel3
+%insert ZeroBatchesNNAPILevel3
      * * 1: An {@link %{OperandTypeLinkPfx}INT32} scalar, specifying the padding on
      *      the left, in the ‘width’ dimension.
      * * 2: An {@link %{OperandTypeLinkPfx}INT32} scalar, specifying the padding on
@@ -1224,10 +1224,10 @@
      * Inputs (implicit padding):
      * * 0: A 4-D tensor, of shape [batches, height, width, depth], specifying
      *      the input.
-%insert-lines ZeroBatchesNNAPILevel3
+%insert ZeroBatchesNNAPILevel3
      * * 1: An {@link %{OperandTypeLinkPfx}INT32} scalar, specifying the implicit
      *      padding scheme, has to be one of the
-%insert-lines PaddingCodeValues
+%insert PaddingCodeValues
      * * 2: An {@link %{OperandTypeLinkPfx}INT32} scalar, specifying the stride when
      *      walking through input in the ‘width’ dimension.
      * * 3: An {@link %{OperandTypeLinkPfx}INT32} scalar, specifying the stride when
@@ -1248,7 +1248,7 @@
      * Outputs:
      * * 0: The output 4-D tensor, of shape
      *      [batches, out_height, out_width, depth].
-%insert-lines AVAIL1
+%insert AVAIL1
      */
     %{DeclareOperation L2_POOL_2D 12},
 
@@ -1320,7 +1320,7 @@
      *
      * Outputs:
      * * 0: The output tensor of same shape as input0.
-%insert-lines AVAIL1
+%insert AVAIL1
      */
     %{DeclareOperation LOCAL_RESPONSE_NORMALIZATION 13},
 
@@ -1357,7 +1357,7 @@
      *      For {@link %{OperandTypeLinkPfx}TENSOR_QUANT8_ASYMM_SIGNED},
      *      the scale must be 1.f / 256 and the zeroPoint must be -128.
 %/kind
-%insert-lines AVAIL1
+%insert AVAIL1
      */
     %{DeclareOperation LOGISTIC 14},
 
@@ -1421,7 +1421,7 @@
      *      If the projection type is Dense:
      *      Output.Dim == { Tensor[0].Dim[0] * Tensor[0].Dim[1] }
      *      A flattened tensor that represents projected bit vectors.
-%insert-lines AVAIL1
+%insert AVAIL1
 %kind ndk hal_1.2+
      * The offset value for sparse projections was added in %{NNAPILevel3}.
 %/kind
@@ -1655,7 +1655,7 @@
      * * 3: The output (\f$o_t\f$).
      *      A 2-D tensor of shape [batch_size, output_size]. This is effectively
      *      the same as the current “output state (out)” value.
-%insert-lines AVAIL1
+%insert AVAIL1
      */
     %{DeclareOperation LSTM 16},
 
@@ -1682,14 +1682,14 @@
      * * {@link %{OperandTypeLinkPfx}TENSOR_QUANT8_ASYMM_SIGNED} (since %{NNAPILevel4})
 %/kind
      *
-%insert-lines NHWC_NCHW
+%insert NHWC_NCHW
      *
      * Both explicit padding and implicit padding are supported.
      *
      * Inputs (explicit padding):
      * * 0: A 4-D tensor, of shape [batches, height, width, depth], specifying
      *      the input.
-%insert-lines ZeroBatchesNNAPILevel3
+%insert ZeroBatchesNNAPILevel3
      * * 1: An {@link %{OperandTypeLinkPfx}INT32} scalar, specifying the padding on
      *      the left, in the ‘width’ dimension.
      * * 2: An {@link %{OperandTypeLinkPfx}INT32} scalar, specifying the padding on
@@ -1718,10 +1718,10 @@
      * Inputs (implicit padding):
      * * 0: A 4-D tensor, of shape [batches, height, width, depth], specifying
      *      the input.
-%insert-lines ZeroBatchesNNAPILevel3
+%insert ZeroBatchesNNAPILevel3
      * * 1: An {@link %{OperandTypeLinkPfx}INT32} scalar, specifying the implicit
      *      padding scheme, has to be one of the
-%insert-lines PaddingCodeValues
+%insert PaddingCodeValues
      * * 2: An {@link %{OperandTypeLinkPfx}INT32} scalar, specifying the stride when
      *      walking through input in the ‘width’ dimension.
      * * 3: An {@link %{OperandTypeLinkPfx}INT32} scalar, specifying the stride when
@@ -1750,7 +1750,7 @@
      *      For a {@link %{OperandTypeLinkPfx}TENSOR_QUANT8_ASYMM} tensor,
      *      the scale and zeroPoint must be the same as input0.
 %/kind
-%insert-lines AVAIL1
+%insert AVAIL1
      */
     %{DeclareOperation MAX_POOL_2D 17},
 
@@ -1769,7 +1769,7 @@
      * of the input operands. It starts with the trailing dimensions, and works
      * its way forward.
      *
-%insert-lines GenericZero
+%insert GenericZero
      * Supported tensor {@link %{OperandType}}:
 %kind ndk hal_1.2+
      * * {@link %{OperandTypeLinkPfx}TENSOR_FLOAT16} (since %{NNAPILevel3})
@@ -1807,7 +1807,7 @@
      *      the following condition must be satisfied:
      *      output_scale > input1_scale * input2_scale.
 %/kind
-%insert-lines AVAIL1
+%insert AVAIL1
      */
     %{DeclareOperation MUL 18},
 
@@ -1846,7 +1846,7 @@
      *      For a {@link %{OperandTypeLinkPfx}TENSOR_QUANT8_ASYMM} tensor,
      *      the scale and zeroPoint must be the same as input0.
 %/kind
-%insert-lines AVAIL1
+%insert AVAIL1
      */
     %{DeclareOperation RELU 19},
 
@@ -1885,7 +1885,7 @@
      *      For a {@link %{OperandTypeLinkPfx}TENSOR_QUANT8_ASYMM} tensor,
      *      the scale and zeroPoint must be the same as input0.
 %/kind
-%insert-lines AVAIL1
+%insert AVAIL1
      */
     %{DeclareOperation RELU1 20},
 
@@ -1924,7 +1924,7 @@
      *      For a {@link %{OperandTypeLinkPfx}TENSOR_QUANT8_ASYMM} tensor,
      *      the scale and zeroPoint must be the same as input0.
 %/kind
-%insert-lines AVAIL1
+%insert AVAIL1
      */
     %{DeclareOperation RELU6 21},
 
@@ -1967,7 +1967,7 @@
      *      For a {@link %{OperandTypeLinkPfx}TENSOR_QUANT8_ASYMM} tensor,
      *      the scale and zeroPoint must be the same as input0.
 %/kind
-%insert-lines AVAIL1
+%insert AVAIL1
      */
     %{DeclareOperation RESHAPE 22},
 
@@ -1990,7 +1990,7 @@
      * * {@link %{OperandTypeLinkPfx}TENSOR_QUANT8_ASYMM_SIGNED} (since %{NNAPILevel4})
 %/kind
      *
-%insert-lines NHWC_NCHW
+%insert NHWC_NCHW
      *
 %kind ndk hal_1.2+
      * Both resizing by shape and resizing by scale are supported.
@@ -1999,7 +1999,7 @@
      * Inputs (resizing by shape):
      * * 0: A 4-D tensor, of shape [batches, height, width, depth], specifying
      *      the input.
-%insert-lines ZeroBatchesNNAPILevel3
+%insert ZeroBatchesNNAPILevel3
      * * 1: An {@link %{OperandTypeLinkPfx}INT32} scalar, specifying the output
      *      width of the output tensor.
      * * 2: An {@link %{OperandTypeLinkPfx}INT32} scalar, specifying the output
@@ -2068,7 +2068,7 @@
      *      For a {@link %{OperandTypeLinkPfx}TENSOR_QUANT8_ASYMM} tensor,
      *      the scale and zeroPoint must be the same as input0.
 %/kind
-%insert-lines AVAIL1
+%insert AVAIL1
      */
     %{DeclareOperation RESIZE_BILINEAR 23},
 
@@ -2123,7 +2123,7 @@
      * * 1: output.
      *      A 2-D tensor of shape [batch_size, num_units]. This is effectively
      *      the same as the current state value.
-%insert-lines AVAIL1
+%insert AVAIL1
      */
     %{DeclareOperation RNN 24},
 
@@ -2196,7 +2196,7 @@
      *      For {@link %{OperandTypeLinkPfx}TENSOR_QUANT8_ASYMM_SIGNED},
      *      the scale must be 1.f / 256 and the zeroPoint must be -128.
 %/kind
-%insert-lines AVAIL1
+%insert AVAIL1
      */
     %{DeclareOperation SOFTMAX 25},
 
@@ -2224,7 +2224,7 @@
      * * {@link %{OperandTypeLinkPfx}TENSOR_QUANT8_ASYMM_SIGNED} (since %{NNAPILevel4})
 %/kind
      *
-%insert-lines NHWC_NCHW
+%insert NHWC_NCHW
      *
      * Inputs:
      * * 0: A 4-D tensor, of shape [batches, height, width, depth_in],
@@ -2249,7 +2249,7 @@
      *      For a {@link %{OperandTypeLinkPfx}TENSOR_QUANT8_ASYMM} tensor,
      *      the scale and zeroPoint must be the same as input0.
 %/kind
-%insert-lines AVAIL1
+%insert AVAIL1
      */
     %{DeclareOperation SPACE_TO_DEPTH 26},
 
@@ -2329,7 +2329,7 @@
      * * 1: output.
      *      A 2-D tensor of the same {@link %{OperandType}} as the inputs, with shape
      *      [batch_size, num_units].
-%insert-lines AVAIL1
+%insert AVAIL1
      */
     %{DeclareOperation SVDF 27},
 
@@ -2370,7 +2370,7 @@
      *      For {@link %{OperandTypeLinkPfx}TENSOR_QUANT8_ASYMM_SIGNED},
      *      the scale must be 1.f / 128 and the zeroPoint must be 0.
 %/kind
-%insert-lines AVAIL1
+%insert AVAIL1
      */
     %{DeclareOperation TANH 28},
 %/section
@@ -2401,7 +2401,7 @@
      * * {@link %{OperandTypeLinkPfx}TENSOR_QUANT8_ASYMM_SIGNED} (since %{NNAPILevel4})
 %/kind
      *
-%insert-lines NHWC_NCHW
+%insert NHWC_NCHW
      *
      * Inputs:
      * * 0: An n-D tensor, specifying the tensor to be reshaped
@@ -2424,7 +2424,7 @@
      *      For a {@link %{OperandTypeLinkPfx}TENSOR_QUANT8_ASYMM} tensor,
      *      the scale and zeroPoint must be the same as input0.
 %/kind
-%insert-lines AVAIL2
+%insert AVAIL2
      */
     %{DeclareOperation BATCH_TO_SPACE_ND 29},
 
@@ -2455,7 +2455,7 @@
      *     input2.dimension = {5, 4, 3, 1}
      *     output.dimension = {5, 4, 3, 2}
      *
-%insert-lines GenericZero
+%insert GenericZero
      * Supported tensor {@link %{OperandType}}:
 %kind ndk hal_1.2+
      * * {@link %{OperandTypeLinkPfx}TENSOR_FLOAT16} (since %{NNAPILevel3})
@@ -2481,7 +2481,7 @@
      *
      * Outputs:
      * * 0: A tensor of the same {@link %{OperandType}} as input0.
-%insert-lines AVAIL2
+%insert AVAIL2
      */
     %{DeclareOperation DIV 30},
 
@@ -2531,7 +2531,7 @@
 %/kind
      *      If all dimensions are reduced and keep_dims is false, the output
      *      shape is [1].
-%insert-lines AVAIL2
+%insert AVAIL2
      */
     %{DeclareOperation MEAN 31},
 
@@ -2589,7 +2589,7 @@
      *      {@link %{OperandTypeLinkPfx}TENSOR_QUANT8_ASYMM} is undefined.
      *      Since %{NNAPILevel3}, the pad value is always the logical zero.
 %/kind
-%insert-lines AVAIL2
+%insert AVAIL2
      */
     %{DeclareOperation PAD 32},
 
@@ -2619,7 +2619,7 @@
      *   (the pad value is undefined)
 %/kind
      *
-%insert-lines NHWC_NCHW
+%insert NHWC_NCHW
      *
      * Inputs:
      * * 0: An n-D tensor, specifying the input.
@@ -2656,7 +2656,7 @@
      *      {@link %{OperandTypeLinkPfx}TENSOR_QUANT8_ASYMM} is undefined.
      *      Since %{NNAPILevel3}, the pad value is always the logical zero.
 %/kind
-%insert-lines AVAIL2
+%insert AVAIL2
      */
     %{DeclareOperation SPACE_TO_BATCH_ND 33},
 
@@ -2702,7 +2702,7 @@
 %/kind
      *      If all input dimensions are equal to 1 and are to be squeezed, the
      *      output shape is [1].
-%insert-lines AVAIL2
+%insert AVAIL2
      */
     %{DeclareOperation SQUEEZE 34},
 
@@ -2763,7 +2763,7 @@
 %/kind
      *      If shrink_axis_mask is true for all input dimensions, the output
      *      shape is [1].
-%insert-lines AVAIL2
+%insert AVAIL2
      */
     %{DeclareOperation STRIDED_SLICE 35},
 
@@ -2787,7 +2787,7 @@
      *     input2.dimension = {5, 4, 3, 1}
      *     output.dimension = {5, 4, 3, 2}
      *
-%insert-lines GenericZero
+%insert GenericZero
      * Supported tensor {@link %{OperandType}}:
 %kind ndk hal_1.2+
      * * {@link %{OperandTypeLinkPfx}TENSOR_FLOAT16} (since %{NNAPILevel3})
@@ -2826,7 +2826,7 @@
      *      {@link %{OperandTypeLinkPfx}TENSOR_QUANT8_ASYMM_SIGNED} tensor,
      *      the scale and zeroPoint can be different from inputs' scale and zeroPoint.
 %/kind
-%insert-lines AVAIL2
+%insert AVAIL2
      */
     %{DeclareOperation SUB 36},
 
@@ -2869,7 +2869,7 @@
      *      For a {@link %{OperandTypeLinkPfx}TENSOR_QUANT8_ASYMM} tensor,
      *      the scale and zeroPoint must be the same as input0.
 %/kind
-%insert-lines AVAIL2
+%insert AVAIL2
      */
     %{DeclareOperation TRANSPOSE 37},
 %/section
@@ -2885,7 +2885,7 @@
      *
      * Values of this operand type are either true or false. A zero value
      * represents false; any other value represents true.
-%insert-lines AVAIL3
+%insert AVAIL3
      */
     %{ANN}BOOL = 6,
     /**
@@ -2896,12 +2896,12 @@
      * realValue = integerValue * scale.
      *
      * scale is a 32 bit floating point with value greater than zero.
-%insert-lines AVAIL3
+%insert AVAIL3
      */
     %{ANN}TENSOR_QUANT16_SYMM = 7,
     /**
      * A tensor of IEEE 754 16 bit floating point values.
-%insert-lines AVAIL3
+%insert AVAIL3
      */
     %{ANN}TENSOR_FLOAT16 = 8,
     /**
@@ -2909,12 +2909,12 @@
      *
      * Values of this operand type are either true or false. A zero value
      * represents false; any other value represents true.
-%insert-lines AVAIL3
+%insert AVAIL3
      */
     %{ANN}TENSOR_BOOL8 = 9,
     /**
      * An IEEE 754 16 bit floating point scalar value.
-%insert-lines AVAIL3
+%insert AVAIL3
      */
     %{ANN}FLOAT16 = 10,
     /**
@@ -2941,7 +2941,7 @@
      * realValue[..., C, ...] =
      *     integerValue[..., C, ...] * scales[C]
      * where C is an index in the Channel dimension.
-%insert-lines AVAIL3
+%insert AVAIL3
      */
     %{ANN}TENSOR_QUANT8_SYMM_PER_CHANNEL = 11,
     /**
@@ -2954,7 +2954,7 @@
      *
      * The formula is:
      * real_value = (integer_value - zeroPoint) * scale.
-%insert-lines AVAIL3
+%insert AVAIL3
      */
     %{ANN}TENSOR_QUANT16_ASYMM = 12,
     /**
@@ -2965,7 +2965,7 @@
      * realValue = integerValue * scale.
      *
      * scale is a 32 bit floating point with value greater than zero.
-%insert-lines AVAIL3
+%insert AVAIL3
      */
     %{ANN}TENSOR_QUANT8_SYMM = 13,
 %/section
@@ -2997,7 +2997,7 @@
      *
      * Outputs:
      * * 0: The output tensor of same shape as input0.
-%insert-lines AVAIL3
+%insert AVAIL3
      */
     %{DeclareOperation_1.2 ABS 38},
 
@@ -3024,7 +3024,7 @@
      * Outputs:
      * * 0: An (n - 1)-D {@link %{OperandTypeLinkPfx}TENSOR_INT32} tensor.
      *      If input is 1-dimensional, the output shape is [1].
-%insert-lines AVAIL3
+%insert AVAIL3
      */
     // There is no underscore in ARG_MAX to avoid name conflict with
     // the macro defined in libc/kernel/uapi/linux/limits.h.
@@ -3053,7 +3053,7 @@
      * Outputs:
      * * 0: An (n - 1)-D {@link %{OperandTypeLinkPfx}TENSOR_INT32} tensor.
      *      If input is 1-dimensional, the output shape is [1].
-%insert-lines AVAIL3
+%insert AVAIL3
      */
     %{DeclareOperation_1.2 ARGMIN 40},  // See ARGMAX for naming discussion.
 
@@ -3105,7 +3105,7 @@
      *      output bounding box for each class, with format [x1, y1, x2, y2].
      *      For type of {@link %{OperandTypeLinkPfx}TENSOR_QUANT16_ASYMM}, the
      *      scale must be 0.125 and the zero point must be 0.
-%insert-lines AVAIL3
+%insert AVAIL3
      */
     %{DeclareOperation_1.2 AXIS_ALIGNED_BBOX_TRANSFORM 41},
 
@@ -3408,8 +3408,8 @@
      *      then outputs 2-4 must be present as well.
      *      Available since %{NNAPILevel4}.
 %/kind
-%insert-lines AVAIL3
-%insert-lines OutputState
+%insert AVAIL3
+%insert OutputState
      */
     %{DeclareOperation_1.2 BIDIRECTIONAL_SEQUENCE_LSTM 42},
 
@@ -3575,8 +3575,8 @@
      *      2 must be present as well.
      *      Available since %{NNAPILevel4}.
 %/kind
-%insert-lines AVAIL3
-%insert-lines OutputState
+%insert AVAIL3
+%insert OutputState
      */
     %{DeclareOperation_1.2 BIDIRECTIONAL_SEQUENCE_RNN 43},
 
@@ -3669,7 +3669,7 @@
      * * 3: A 1-D {@link %{OperandTypeLinkPfx}TENSOR_INT32} tensor, of shape
      *      [num_output_rois], specifying the batch index of each box. Boxes
      *      with the same batch index are grouped together.
-%insert-lines AVAIL3
+%insert AVAIL3
      */
     %{DeclareOperation_1.2 BOX_WITH_NMS_LIMIT 44},
 
@@ -3703,7 +3703,7 @@
      *
      * Outputs:
      * * 0: A tensor with the same shape as input0.
-%insert-lines AVAIL3
+%insert AVAIL3
      */
     %{DeclareOperation_1.2 CAST 45},
 
@@ -3751,7 +3751,7 @@
      *      For a {@link %{OperandTypeLinkPfx}TENSOR_QUANT8_ASYMM} tensor,
      *      the scale and zeroPoint must be the same as input0.
 %/kind
-%insert-lines AVAIL3
+%insert AVAIL3
      */
     %{DeclareOperation_1.2 CHANNEL_SHUFFLE 46},
 
@@ -3832,7 +3832,7 @@
      *      output detection.
      * * 3: An 1-D {@link %{OperandTypeLinkPfx}TENSOR_INT32} tensor, of shape [batches],
      *      specifying the number of valid output detections for each batch.
-%insert-lines AVAIL3
+%insert AVAIL3
      */
     %{DeclareOperation_1.2 DETECTION_POSTPROCESSING 47},
 
@@ -3860,7 +3860,7 @@
      *
      * Outputs:
      * * 0: A tensor of {@link %{OperandTypeLinkPfx}TENSOR_BOOL8}.
-%insert-lines AVAIL3
+%insert AVAIL3
      */
     %{DeclareOperation_1.2 EQUAL 48},
 
@@ -3878,7 +3878,7 @@
      *
      * Outputs:
      * * 0: The output tensor of same shape as input0.
-%insert-lines AVAIL3
+%insert AVAIL3
      */
     %{DeclareOperation_1.2 EXP 49},
 
@@ -3916,7 +3916,7 @@
      *      For a {@link %{OperandTypeLinkPfx}TENSOR_QUANT8_ASYMM} tensor,
 %/kind
      *      the scale and zeroPoint must be the same as input0.
-%insert-lines AVAIL3
+%insert AVAIL3
      */
     %{DeclareOperation_1.2 EXPAND_DIMS 50},
 
@@ -3963,7 +3963,7 @@
      *      For a {@link %{OperandTypeLinkPfx}TENSOR_QUANT8_ASYMM} tensor,
 %/kind
      *      the scale and zeroPoint must be the same as input0.
-%insert-lines AVAIL3
+%insert AVAIL3
      */
     %{DeclareOperation_1.2 GATHER 51},
 
@@ -4060,7 +4060,7 @@
      * * 2: A 1-D {@link %{OperandTypeLinkPfx}TENSOR_INT32} tensor, of shape
      *      [num_output_rois], specifying the batch index of each box. Boxes
      *      with the same batch index are grouped together.
-%insert-lines AVAIL3
+%insert AVAIL3
      */
     %{DeclareOperation_1.2 GENERATE_PROPOSALS 52},
 
@@ -4088,7 +4088,7 @@
      *
      * Outputs:
      * * 0: A tensor of {@link %{OperandTypeLinkPfx}TENSOR_BOOL8}.
-%insert-lines AVAIL3
+%insert AVAIL3
      */
     %{DeclareOperation_1.2 GREATER 53},
     /**
@@ -4115,7 +4115,7 @@
      *
      * Outputs:
      * * 0: A tensor of {@link %{OperandTypeLinkPfx}TENSOR_BOOL8}.
-%insert-lines AVAIL3
+%insert AVAIL3
      */
     %{DeclareOperation_1.2 GREATER_EQUAL 54},
 
@@ -4260,7 +4260,7 @@
      *      bias_scale[i] = input_scale * filter_scale[i].
      * * 3: An {@link %{OperandTypeLinkPfx}INT32} scalar, specifying the implicit
      *      padding scheme, has to be one of the
-%insert-lines PaddingCodeValues
+%insert PaddingCodeValues
      * * 4: An {@link %{OperandTypeLinkPfx}INT32} scalar, specifying the stride when
      *      walking through input in the ‘width’ dimension.
      * * 5: An {@link %{OperandTypeLinkPfx}INT32} scalar, specifying the stride when
@@ -4284,7 +4284,7 @@
      *      For a {@link %{OperandTypeLinkPfx}TENSOR_QUANT8_ASYMM} tensor,
      *      the scale and zeroPoint can be different from inputs' scale and zeroPoint.
 %/kind
-%insert-lines AVAIL3
+%insert AVAIL3
      */
     %{DeclareOperation_1.2 GROUPED_CONV_2D 55},
 
@@ -4348,7 +4348,7 @@
      *      [keypoint_x, keypoint_y].
      *      For type of {@link %{OperandTypeLinkPfx}TENSOR_QUANT16_ASYMM}, the
      *      scale must be 0.125 and the zero point must be 0.
-%insert-lines AVAIL3
+%insert AVAIL3
      */
     %{DeclareOperation_1.2 HEATMAP_MAX_KEYPOINT 56},
 
@@ -4400,7 +4400,7 @@
      *
      * Outputs:
      * * 0: A tensor of the same {@link %{OperandType}} and same shape as input0.
-%insert-lines AVAIL3
+%insert AVAIL3
      */
     %{DeclareOperation_1.2 INSTANCE_NORMALIZATION 57},
 
@@ -4428,7 +4428,7 @@
      *
      * Outputs:
      * * 0: A tensor of {@link %{OperandTypeLinkPfx}TENSOR_BOOL8}.
-%insert-lines AVAIL3
+%insert AVAIL3
      */
     %{DeclareOperation_1.2 LESS 58},
 
@@ -4456,7 +4456,7 @@
      *
      * Outputs:
      * * 0: A tensor of {@link %{OperandTypeLinkPfx}TENSOR_BOOL8}.
-%insert-lines AVAIL3
+%insert AVAIL3
      */
     %{DeclareOperation_1.2 LESS_EQUAL 59},
 
@@ -4474,7 +4474,7 @@
      *
      * Outputs:
      * * 0: The output tensor of same shape as input0.
-%insert-lines AVAIL3
+%insert AVAIL3
      */
     %{DeclareOperation_1.2 LOG 60},
 
@@ -4495,7 +4495,7 @@
      *
      * Outputs:
      * * 0: A tensor of {@link %{OperandTypeLinkPfx}TENSOR_BOOL8}.
-%insert-lines AVAIL3
+%insert AVAIL3
      */
     %{DeclareOperation_1.2 LOGICAL_AND 61},
 
@@ -4512,7 +4512,7 @@
      *
      * Outputs:
      * * 0: The output tensor of same shape as input0.
-%insert-lines AVAIL3
+%insert AVAIL3
      */
     %{DeclareOperation_1.2 LOGICAL_NOT 62},
 
@@ -4533,7 +4533,7 @@
      *
      * Outputs:
      * * 0: A tensor of {@link %{OperandTypeLinkPfx}TENSOR_BOOL8}.
-%insert-lines AVAIL3
+%insert AVAIL3
      */
     %{DeclareOperation_1.2 LOGICAL_OR 63},
 
@@ -4565,7 +4565,7 @@
      * Outputs:
      * * 0: The output tensor of the same {@link %{OperandType}} and shape as
      *      input0.
-%insert-lines AVAIL3
+%insert AVAIL3
      */
     %{DeclareOperation_1.2 LOG_SOFTMAX 64},
 
@@ -4600,7 +4600,7 @@
      *      For a {@link %{OperandTypeLinkPfx}TENSOR_QUANT8_ASYMM} tensor,
      *      the scale and zeroPoint can be different from inputs' scale and zeroPoint.
 %/kind
-%insert-lines AVAIL3
+%insert AVAIL3
      */
     %{DeclareOperation_1.2 MAXIMUM 65},
 
@@ -4635,7 +4635,7 @@
      *      For a {@link %{OperandTypeLinkPfx}TENSOR_QUANT8_ASYMM} tensor,
      *      the scale and zeroPoint can be different from inputs' scale and zeroPoint.
 %/kind
-%insert-lines AVAIL3
+%insert AVAIL3
      */
     %{DeclareOperation_1.2 MINIMUM 66},
 
@@ -4654,7 +4654,7 @@
      *
      * Outputs:
      * * 0: The output tensor of same shape as input0.
-%insert-lines AVAIL3
+%insert AVAIL3
      */
     %{DeclareOperation_1.2 NEG 67},
 
@@ -4682,7 +4682,7 @@
      *
      * Outputs:
      * * 0: A tensor of {@link %{OperandTypeLinkPfx}TENSOR_BOOL8}.
-%insert-lines AVAIL3
+%insert AVAIL3
      */
     %{DeclareOperation_1.2 NOT_EQUAL 68},
 
@@ -4739,7 +4739,7 @@
      *      For a {@link %{OperandTypeLinkPfx}TENSOR_QUANT8_ASYMM} tensor,
      *      the scale and zeroPoint must be the same as input0.
 %/kind
-%insert-lines AVAIL3
+%insert AVAIL3
      */
     %{DeclareOperation_1.2 PAD_V2 69},
 
@@ -4770,7 +4770,7 @@
      *
      * Outputs:
      * * 0: An output tensor.
-%insert-lines AVAIL3
+%insert AVAIL3
      */
     %{DeclareOperation_1.2 POW 70},
 
@@ -4819,7 +4819,7 @@
      *      For a {@link %{OperandTypeLinkPfx}TENSOR_QUANT8_ASYMM} tensor,
      *      the scales and zeroPoint can be different from input0 scale and zeroPoint.
 %/kind
-%insert-lines AVAIL3
+%insert AVAIL3
      */
     %{DeclareOperation_1.2 PRELU 71},
 
@@ -4860,7 +4860,7 @@
 %else
      *      {@link %{OperandTypeLinkPfx}TENSOR_QUANT8_ASYMM}.
 %/kind
-%insert-lines AVAIL3
+%insert AVAIL3
      */
     %{DeclareOperation_1.2 QUANTIZE 72},
 
@@ -4987,7 +4987,7 @@
      * Outputs:
      * * 0: A 2-D {@link %{OperandTypeLinkPfx}TENSOR_INT32} tensor with shape
      *      [batches, samples], containing the drawn samples.
-%insert-lines AVAIL3
+%insert AVAIL3
      */
     %{DeclareOperation_1.2 RANDOM_MULTINOMIAL 74},
 
@@ -5015,7 +5015,7 @@
      * * 0: A tensor of the same {@link %{OperandType}} as input0.
      *      If all dimensions are reduced and keep_dims is false, the output
      *      shape is [1].
-%insert-lines AVAIL3
+%insert AVAIL3
      */
     %{DeclareOperation_1.2 REDUCE_ALL 75},
 
@@ -5043,7 +5043,7 @@
      * * 0: A tensor of the same {@link %{OperandType}} as input0.
      *      If all dimensions are reduced and keep_dims is false, the output
      *      shape is [1].
-%insert-lines AVAIL3
+%insert AVAIL3
      */
     %{DeclareOperation_1.2 REDUCE_ANY 76},
 
@@ -5084,7 +5084,7 @@
      *      For a {@link %{OperandTypeLinkPfx}TENSOR_QUANT8_ASYMM} tensor,
      *      the scale and zeroPoint must be the same as input0.
 %/kind
-%insert-lines AVAIL3
+%insert AVAIL3
      */
     %{DeclareOperation_1.2 REDUCE_MAX 77},
 
@@ -5125,7 +5125,7 @@
      *      For a {@link %{OperandTypeLinkPfx}TENSOR_QUANT8_ASYMM} tensor,
      *      the scale and zeroPoint must be the same as input0.
 %/kind
-%insert-lines AVAIL3
+%insert AVAIL3
      */
     %{DeclareOperation_1.2 REDUCE_MIN 78},
 
@@ -5153,7 +5153,7 @@
      * * 0: A tensor of the same {@link %{OperandType}} as input0.
      *      If all dimensions are reduced and keep_dims is false, the output
      *      shape is [1].
-%insert-lines AVAIL3
+%insert AVAIL3
      */
     %{DeclareOperation_1.2 REDUCE_PROD 79},
 
@@ -5181,7 +5181,7 @@
      * * 0: A tensor of the same {@link %{OperandType}} as input0.
      *      If all dimensions are reduced and keep_dims is false, the output
      *      shape is [1].
-%insert-lines AVAIL3
+%insert AVAIL3
      */
     %{DeclareOperation_1.2 REDUCE_SUM 80},
 
@@ -5250,7 +5250,7 @@
      *      For a {@link %{OperandTypeLinkPfx}TENSOR_QUANT8_ASYMM} tensor,
 %/kind
      *      the scale and zeroPoint can be different from the input0 scale and zeroPoint.
-%insert-lines AVAIL3
+%insert AVAIL3
      */
     %{DeclareOperation_1.2 ROI_ALIGN 81},
 
@@ -5315,7 +5315,7 @@
      *      For a {@link %{OperandTypeLinkPfx}TENSOR_QUANT8_ASYMM} tensor,
 %/kind
      *      the scale and zeroPoint must be the same as input0.
-%insert-lines AVAIL3
+%insert AVAIL3
      */
     %{DeclareOperation_1.2 ROI_POOLING 82},
 
@@ -5333,7 +5333,7 @@
      *
      * Outputs:
      * * 0: The output tensor of same shape as input0.
-%insert-lines AVAIL3
+%insert AVAIL3
      */
     %{DeclareOperation_1.2 RSQRT 83},
 
@@ -5375,7 +5375,7 @@
      * * 0: A tensor of the same type and shape as input1 and input2.
      *      For a {@link %{OperandTypeLinkPfx}TENSOR_QUANT8_ASYMM} tensor,
      *      the scale and zeroPoint can be different from inputs' scale and zeroPoint.
-%insert-lines AVAIL3
+%insert AVAIL3
      */
     %{DeclareOperation_1.2 SELECT 84},
 
@@ -5393,7 +5393,7 @@
      *
      * Outputs:
      * * 0: The output tensor of same shape as input0.
-%insert-lines AVAIL3
+%insert AVAIL3
      */
     %{DeclareOperation_1.2 SIN 85},
 
@@ -5436,7 +5436,7 @@
      *      For a {@link %{OperandTypeLinkPfx}TENSOR_QUANT8_ASYMM} tensor,
 %/kind
      *      its scale and zeroPoint has to be same as the input0 scale and zeroPoint.
-%insert-lines AVAIL3
+%insert AVAIL3
      */
     %{DeclareOperation_1.2 SLICE 86},
 
@@ -5471,7 +5471,7 @@
      *      For a {@link %{OperandTypeLinkPfx}TENSOR_QUANT8_ASYMM} tensor,
      *      the scale and zeroPoint must be the same as input0.
 %/kind
-%insert-lines AVAIL3
+%insert AVAIL3
      */
     %{DeclareOperation_1.2 SPLIT 87},
 
@@ -5489,7 +5489,7 @@
      *
      * Outputs:
      * * 0: The output tensor of same shape as input0.
-%insert-lines AVAIL3
+%insert AVAIL3
      */
     %{DeclareOperation_1.2 SQRT 88},
 
@@ -5528,7 +5528,7 @@
      *      For a {@link %{OperandTypeLinkPfx}TENSOR_QUANT8_ASYMM} tensor,
      *      the scale and zeroPoint must be the same as input0.
 %/kind
-%insert-lines AVAIL3
+%insert AVAIL3
      */
     %{DeclareOperation_1.2 TILE 89},
 
@@ -5567,7 +5567,7 @@
 %/kind
      * * 1: An n-D tensor of type {@link %{OperandTypeLinkPfx}TENSOR_INT32}
      *      containing the indices of values within the last dimension of input.
-%insert-lines AVAIL3
+%insert AVAIL3
      */
     %{DeclareOperation_1.2 TOPK_V2 90},
 
@@ -5697,7 +5697,7 @@
      *      tensor shape.
      * * 4: An {@link %{OperandTypeLinkPfx}INT32} scalar, specifying the implicit
      *      padding scheme, has to be one of the
-%insert-lines PaddingCodeValues
+%insert PaddingCodeValues
      * * 5: An {@link %{OperandTypeLinkPfx}INT32} scalar, specifying the stride when
      *      walking through input in the ‘width’ dimension.
      * * 6: An {@link %{OperandTypeLinkPfx}INT32} scalar, specifying the stride when
@@ -5718,7 +5718,7 @@
      *      For a {@link %{OperandTypeLinkPfx}TENSOR_QUANT8_ASYMM} tensor,
 %/kind
      *      the scale and zeroPoint can be different from inputs' scale and zeroPoint.
-%insert-lines AVAIL3
+%insert AVAIL3
      */
     %{DeclareOperation_1.2 TRANSPOSE_CONV_2D 91},
 
@@ -5839,8 +5839,8 @@
      *      and can be omitted.
      *      Available since %{NNAPILevel4}.
 %/kind
-%insert-lines AVAIL3
-%insert-lines OutputState
+%insert AVAIL3
+%insert OutputState
      */
     %{DeclareOperation_1.2 UNIDIRECTIONAL_SEQUENCE_LSTM 92},
 
@@ -5902,8 +5902,8 @@
      *      and can be omitted.
      *      Available since %{NNAPILevel4}.
 %/kind
-%insert-lines AVAIL3
-%insert-lines OutputState
+%insert AVAIL3
+%insert OutputState
      */
     %{DeclareOperation_1.2 UNIDIRECTIONAL_SEQUENCE_RNN 93},
 
@@ -5994,7 +5994,7 @@
      *      For a {@link %{OperandTypeLinkPfx}TENSOR_QUANT8_ASYMM} tensor,
      *      the scale and zeroPoint must be the same as input0.
 %/kind
-%insert-lines AVAIL3
+%insert AVAIL3
      */
     %{DeclareOperation_1.2 RESIZE_NEAREST_NEIGHBOR 94},
 %/section
@@ -6019,7 +6019,7 @@
      *
      * The formula is:
      * real_value = (integer_value - zeroPoint) * scale.
-%insert-lines AVAIL4
+%insert AVAIL4
      */
     %{ANN}TENSOR_QUANT8_ASYMM_SIGNED = 14,
 
@@ -6034,7 +6034,7 @@
      *
      * Must have the lifetime {@link OperandLifeTime::SUBGRAPH}.
 %/kind
-%insert-lines AVAIL4
+%insert AVAIL4
      */
     %{ANN}%{MODEL_or_SUBGRAPH} = 15,
 %/section
@@ -6175,7 +6175,7 @@
      *      "output state (out)" value.
      *      Type: {@link %{OperandTypeLinkPfx}TENSOR_QUANT8_ASYMM_SIGNED}
      *      Shape: [batchSize, outputSize]
-%insert-lines AVAIL4
+%insert AVAIL4
      */
     %{DeclareOperation_1.3 QUANTIZED_LSTM 95},
 
@@ -6205,7 +6205,7 @@
      *
      * Outputs:
      * * 0 ~ (m - 1): Outputs produced by the selected %{model_or_subgraph}.
-%insert-lines AVAIL4
+%insert AVAIL4
      */
     %{DeclareOperation_1.3 IF 96},
 
@@ -6286,7 +6286,7 @@
      *
      * Outputs:
      * * 0 ~ (m - 1): Outputs produced by the loop.
-%insert-lines AVAIL4
+%insert AVAIL4
      */
     %{DeclareOperation_1.3 WHILE 97},
 
@@ -6313,7 +6313,7 @@
      *
      * Outputs:
      * * 0: The output tensor of same shape and type as input0.
-%insert-lines AVAIL4
+%insert AVAIL4
      */
     %{DeclareOperation_1.3 ELU 98},
 
@@ -6342,7 +6342,7 @@
      * * 0: The output tensor of same shape and type as input0.
      *      Scale and zero point of this tensor may be different from the input
      *      tensor's parameters.
-%insert-lines AVAIL4
+%insert AVAIL4
      */
     %{DeclareOperation_1.3 HARD_SWISH 99},
 
@@ -6368,7 +6368,7 @@
      *
      * Outputs:
      * * 0: The output tensor.
-%insert-lines AVAIL4
+%insert AVAIL4
      */
     %{DeclareOperation_1.3 FILL 100},
 
@@ -6398,7 +6398,7 @@
      * Outputs:
      * * 0: A scalar of {@link %{OperandTypeLinkPfx}INT32}, specifying the rank
      *      of the input tensor.
-%insert-lines AVAIL4
+%insert AVAIL4
      */
     %{DeclareOperation_1.3 RANK 101},
 %/section