Update docs in control_flow_ops.py for consistent Markdown rendering
diff --git a/tensorflow/python/ops/parallel_for/control_flow_ops.py b/tensorflow/python/ops/parallel_for/control_flow_ops.py
index a764977..d145a7b 100644
--- a/tensorflow/python/ops/parallel_for/control_flow_ops.py
+++ b/tensorflow/python/ops/parallel_for/control_flow_ops.py
@@ -51,8 +51,8 @@
     loop_fn: A function that takes an int32 scalar tf.Tensor object representing
       the iteration number, and returns a possibly nested structure of tensor
       objects. The shape of these outputs should not depend on the input.
-    loop_fn_dtypes: dtypes for the outputs of loop_fn.
-    iters: Number of iterations for which to run loop_fn.
+    loop_fn_dtypes: dtypes for the outputs of `loop_fn`.
+    iters: Number of iterations for which to run `loop_fn`.
     parallel_iterations: The number of iterations that can be dispatched in
       parallel. This knob can be used to control the total memory usage.
 
@@ -137,7 +137,7 @@
 
   `pfor` has functionality similar to `for_loop`, i.e. running `loop_fn` `iters`
   times, with input from 0 to `iters - 1`, and stacking corresponding output of
-  each iteration. However the implementation does not use a tf.while_loop.
+  each iteration. However, the implementation does not use a `tf.while_loop`.
   Instead it adds new operations to the graph that collectively compute the same
   value as what running `loop_fn` in a loop would compute.
 
@@ -152,7 +152,7 @@
       reads, etc).
     - Conversion works only on a limited set of kernels for which a converter
       has been registered.
-    - loop_fn has limited support for control flow operations. tf.cond in
+    - `loop_fn` has limited support for control flow operations. `tf.cond` in
       particular is not supported.
     - `loop_fn` should return nested structure of Tensors or Operations. However
       if an Operation is returned, it should have zero outputs.
@@ -166,9 +166,9 @@
       or Operation objects. Note that if setting `parallel_iterations` argument
       to something other than None, `loop_fn` may be called more than once
       during graph construction. So it may need to avoid mutating global state.
-    iters: Number of iterations for which to run loop_fn.
+    iters: Number of iterations for which to run `loop_fn`.
     fallback_to_while_loop: If true, on failing to vectorize an operation, pfor
-      fallbacks to using a tf.while_loop to dispatch the iterations.
+      falls back to using a `tf.while_loop` to dispatch the iterations.
     parallel_iterations: A knob to control how many iterations are vectorized
       and dispatched in parallel. The default value of None corresponds to
       vectorizing all the iterations.  If `parallel_iterations` is smaller than
@@ -337,7 +337,7 @@
   """Parallel map on the list of tensors unpacked from `elems` on dimension 0.
 
 
-  This method works similar to tf.map_fn but is optimized to run much faster,
+  This method works similarly to `tf.map_fn` but is optimized to run much faster,
   possibly with a much larger memory footprint. The speedups are obtained by
   vectorization (see https://arxiv.org/pdf/1903.04243.pdf). The idea behind
   vectorization is to semantically launch all the invocations of `fn` in
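The docstrings above describe two closely related contracts: `for_loop`/`pfor` run `loop_fn` for iteration indices `0` to `iters - 1` and stack corresponding outputs, while the map-style API unpacks `elems` on dimension 0 and restacks the per-slice results. A minimal NumPy sketch of just these stacking semantics (not the TensorFlow implementation, and using the illustrative names `for_loop_semantics` and `map_semantics`, which are not part of the TF API):

```python
import numpy as np

def for_loop_semantics(loop_fn, iters):
    # Conceptual model of for_loop/pfor: call loop_fn with each
    # iteration index 0..iters-1 and stack the outputs on dimension 0.
    return np.stack([loop_fn(i) for i in range(iters)], axis=0)

def map_semantics(fn, elems):
    # Conceptual model of the map-style API: unpack `elems` on
    # dimension 0, apply fn to each slice, and restack the results.
    return np.stack([fn(e) for e in elems], axis=0)

result = for_loop_semantics(lambda i: np.array([i, i * i]), 4)
# result has shape (4, 2); row i is [i, i*i]

mapped = map_semantics(lambda x: x * 2, np.array([1, 2, 3]))
# mapped is [2, 4, 6]
```

The real `pfor` achieves the same value not by looping but by rewriting the graph so all iterations are computed at once, which is where the speedup (and the larger memory footprint) comes from.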