update API doc page: tf.data.Dataset
- fix markdown list formatting in the `tf.data.Dataset.list_files` example
- change `NOTE:` to `Note:` to keep a consistent style in API doc
diff --git a/tensorflow/python/data/ops/dataset_ops.py b/tensorflow/python/data/ops/dataset_ops.py
index 3e10479..90e01cb 100644
--- a/tensorflow/python/data/ops/dataset_ops.py
+++ b/tensorflow/python/data/ops/dataset_ops.py
@@ -687,7 +687,7 @@
>>> list(dataset.take(3).as_numpy_iterator())
[(1, array([1])), (2, array([1, 1])), (3, array([1, 1, 1]))]
- NOTE: The current implementation of `Dataset.from_generator()` uses
+ Note: The current implementation of `Dataset.from_generator()` uses
`tf.numpy_function` and inherits the same constraints. In particular, it
requires the `Dataset`- and `Iterator`-related operations to be placed
on a device in the same process as the Python program that called
@@ -695,7 +695,7 @@
serialized in a `GraphDef`, and you should not use this method if you
need to serialize your model and restore it in a different environment.
- NOTE: If `generator` depends on mutable global variables or other external
+ Note: If `generator` depends on mutable global variables or other external
state, be aware that the runtime may invoke `generator` multiple times
(in order to support repeating the `Dataset`) and at any time
between the call to `Dataset.from_generator()` and the production of the
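
The caveat above can be illustrated without TensorFlow. The sketch below (hypothetical names; not the actual `from_generator` machinery) shows why a `generator` that depends on mutable global state is risky: each time the runtime re-invokes it, it may observe different state and produce different elements.

```python
# Hypothetical mutable global state that the generator reads and mutates.
counter = {"calls": 0}

def generator():
    # Each invocation observes (and advances) the shared state, so two
    # passes over the "dataset" yield different elements.
    counter["calls"] += 1
    for i in range(3):
        yield counter["calls"] * 10 + i

# Simulate the runtime invoking the generator once per repetition.
first_pass = list(generator())
second_pass = list(generator())
print(first_pass)   # [10, 11, 12]
print(second_pass)  # [20, 21, 22]
```
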
@@ -1013,17 +1013,20 @@
filename with `list_files` may result in poor performance with remote
storage systems.
- NOTE: The default behavior of this method is to return filenames in
+ Note: The default behavior of this method is to return filenames in
a non-deterministic random shuffled order. Pass a `seed` or `shuffle=False`
to get results in a deterministic order.
Example:
If we had the following files on our filesystem:
+
- /path/to/dir/a.txt
- /path/to/dir/b.py
- /path/to/dir/c.py
+
If we pass "/path/to/dir/*.py" as the pattern, the dataset
would produce:
+
- /path/to/dir/b.py
- /path/to/dir/c.py
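
The shuffling behavior described in the note can be sketched in plain Python (an illustration of the semantics only, not the real `list_files` implementation): matched filenames come back in a non-deterministic shuffled order unless shuffling is disabled or a seed makes it reproducible.

```python
import fnmatch
import random

# Files standing in for the filesystem from the docstring example.
files = ["/path/to/dir/a.txt", "/path/to/dir/b.py", "/path/to/dir/c.py"]

def list_files_sketch(pattern, shuffle=True, seed=None):
    # Match the glob pattern, then optionally shuffle the result.
    matched = sorted(f for f in files if fnmatch.fnmatch(f, pattern))
    if shuffle:
        random.Random(seed).shuffle(matched)  # seeded => deterministic order
    return matched

# shuffle=False yields a deterministic, sorted order:
print(list_files_sketch("/path/to/dir/*.py", shuffle=False))
# ['/path/to/dir/b.py', '/path/to/dir/c.py']

# A fixed seed makes the shuffled order reproducible across calls:
assert (list_files_sketch("/path/to/dir/*.py", seed=42)
        == list_files_sketch("/path/to/dir/*.py", seed=42))
```
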
@@ -1077,7 +1080,7 @@
>>> list(dataset.as_numpy_iterator())
[1, 2, 3, 1, 2, 3, 1, 2, 3]
- NOTE: If this dataset is a function of global state (e.g. a random number
+ Note: If this dataset is a function of global state (e.g. a random number
generator), then different repetitions may produce different elements.
Args:
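
The note about global state can be demonstrated without TensorFlow: if each element is drawn from shared state such as a random number generator, repeating the pipeline advances that state, so repetitions differ (hypothetical names; seeded here only so the sketch is reproducible).

```python
import random

rng = random.Random(0)  # shared global state; seed 0 for reproducibility

def one_epoch(n=3):
    # Each pass draws fresh values from the shared RNG, advancing its state.
    return [rng.randrange(1000) for _ in range(n)]

# Simulate `repeat(2)`: two passes over a stateful source.
first, second = one_epoch(), one_epoch()
# The RNG state advances between passes, so the repetitions differ;
# re-seeding replays the whole two-pass sequence.
replay = random.Random(0)
assert [replay.randrange(1000) for _ in range(6)] == first + second
```
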
@@ -1331,6 +1334,7 @@
Raises:
InvalidArgumentError: if `num_shards` or `index` are illegal values.
+
Note: error checking is done on a best-effort basis, and errors aren't
guaranteed to be caught upon dataset creation. (e.g. passing in a
placeholder tensor bypasses the early checking, and will instead result
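
`Dataset.shard(num_shards, index)` keeps every element whose position satisfies `position % num_shards == index`. A plain-Python sketch of that selection rule, including the kind of eager argument checking the note says is only best-effort (`shard_sketch` is a hypothetical name):

```python
def shard_sketch(elements, num_shards, index):
    # Eager checks mirroring the InvalidArgumentError conditions; the real
    # op can only do this on a best-effort basis (e.g. not for placeholders).
    if num_shards <= 0:
        raise ValueError("num_shards must be positive")
    if not 0 <= index < num_shards:
        raise ValueError("index must be in [0, num_shards)")
    # Keep every num_shards-th element, starting at position `index`.
    return [e for pos, e in enumerate(elements) if pos % num_shards == index]

print(shard_sketch(range(10), num_shards=3, index=0))  # [0, 3, 6, 9]
print(shard_sketch(range(10), num_shards=3, index=1))  # [1, 4, 7]
```
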
@@ -1688,7 +1692,7 @@
5, 5, 5, 5,
5, 5]
- NOTE: The order of elements yielded by this transformation is
+ Note: The order of elements yielded by this transformation is
deterministic, as long as `map_func` is a pure function and
`deterministic=True`. If `map_func` contains any stateful operations, the
order in which that state is accessed is undefined.
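
The purity requirement can be illustrated in plain Python (a sketch of the semantics, not the `tf.data` runtime): a pure `map_func` gives a deterministic output order, while one that mutates shared state produces values that depend on invocation order, which is undefined under a parallel `map`.

```python
def pure_square(x):
    # Pure: output depends only on the input, so order is deterministic.
    return x * x

state = {"count": 0}

def stateful(x):
    # Stateful: output depends on how many calls preceded this one, so
    # under parallel execution the results would be nondeterministic.
    state["count"] += 1
    return x + state["count"]

print(list(map(pure_square, [1, 2, 3])))  # [1, 4, 9]
print(list(map(stateful, [1, 2, 3])))     # sequentially: [2, 4, 6]
```
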
@@ -2352,7 +2356,7 @@
deterministic=None):
"""Maps `map_func` across the elements of this dataset.
- NOTE: This is an escape hatch for existing uses of `map` that do not work
+ Note: This is an escape hatch for existing uses of `map` that do not work
with V2 functions. New uses are strongly discouraged and existing uses
should migrate to `map` as this method will be removed in V2.
@@ -2415,7 +2419,7 @@
def filter_with_legacy_function(self, predicate):
"""Filters this dataset according to `predicate`.
- NOTE: This is an escape hatch for existing uses of `filter` that do not work
+ Note: This is an escape hatch for existing uses of `filter` that do not work
with V2 functions. New uses are strongly discouraged and existing uses
should migrate to `filter` as this method will be removed in V2.