Docs: Updated links in docstrings for TF2
PiperOrigin-RevId: 272905147
diff --git a/tensorflow/api_template_v1.__init__.py b/tensorflow/api_template_v1.__init__.py
index deee1c5..97478a1 100644
--- a/tensorflow/api_template_v1.__init__.py
+++ b/tensorflow/api_template_v1.__init__.py
@@ -39,7 +39,7 @@
TensorFlow's `tf-nightly` package will soon be updated to TensorFlow 2.0.
Please upgrade your code to TensorFlow 2.0:
- * https://www.tensorflow.org/beta/guide/migration_guide
+ * https://www.tensorflow.org/guide/migrate
Or install the latest stable TensorFlow 1.X release:
* `pip install -U "tensorflow==1.*"`
diff --git a/tensorflow/examples/tutorials/deepdream/README.md b/tensorflow/examples/tutorials/deepdream/README.md
index 5fcbd7c..7486476 100644
--- a/tensorflow/examples/tutorials/deepdream/README.md
+++ b/tensorflow/examples/tutorials/deepdream/README.md
@@ -2,5 +2,5 @@
This example has moved.
-[A TensorFlow 2 version is available](https://tensorflow.org/en/beta/tutorials/generative/deepdream.ipynb)
+[A TensorFlow 2 version is available](https://tensorflow.org/tutorials/generative/deepdream)
[The original is in the TensorFlow examples Repository](https://github.com/tensorflow/examples/tree/master/community/en/r1/deepdream.ipynb)
diff --git a/tensorflow/examples/tutorials/deepdream/deepdream.ipynb b/tensorflow/examples/tutorials/deepdream/deepdream.ipynb
index 0588dbc..f0dd720 100644
--- a/tensorflow/examples/tutorials/deepdream/deepdream.ipynb
+++ b/tensorflow/examples/tutorials/deepdream/deepdream.ipynb
@@ -19,7 +19,7 @@
"source": [
"This example has moved.\n",
"\n",
- "* [TensorFlow 2.0 version](https://tensorflow.org/en/beta/tutorials/generative/deepdream.ipynb)\n",
+ "* [TensorFlow 2.0 version](https://tensorflow.org/tutorials/generative/deepdream)\n",
"* [The Original](https://github.com/tensorflow/examples/tree/master/community/en/r1/deepdream.ipynb)"
]
}
diff --git a/tensorflow/python/autograph/g3doc/reference/index.md b/tensorflow/python/autograph/g3doc/reference/index.md
index 6fb7ab6..e94fccc 100644
--- a/tensorflow/python/autograph/g3doc/reference/index.md
+++ b/tensorflow/python/autograph/g3doc/reference/index.md
@@ -16,7 +16,7 @@
For more information on AutoGraph, see the following articles:
-* [AutoGraph tutorial](https://www.tensorflow.org/alpha/beta/autograph)
-* [Eager tutorial](https://www.tensorflow.org/alpha/guide/eager)
-* [TensorFlow 2.0 Alpha](https://www.tensorflow.org/alpha)
+* [AutoGraph guide](https://www.tensorflow.org/guide/function)
+* [tf.function tutorial](https://www.tensorflow.org/tutorials/customization/performance)
+* [Eager guide](https://www.tensorflow.org/guide/eager)
* [AutoGraph blog post](https://medium.com/tensorflow/autograph-converts-python-into-tensorflow-graphs-b2a871f87ec7)
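For reviewers skimming the new link targets: AutoGraph's core behavior — rewriting data-dependent Python control flow inside a `tf.function` into graph ops — amounts to a sketch like this (TF 2.x assumed; values illustrative):

```python
import tensorflow as tf

@tf.function  # AutoGraph converts the Python `if` below into a graph conditional
def relu_or_zero(x):
  if tf.reduce_sum(x) > 0:  # condition depends on a tensor value
    return x
  return tf.zeros_like(x)

print(relu_or_zero(tf.constant([1.0, -2.0])))  # runs as a traced graph
```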
diff --git a/tensorflow/python/autograph/impl/api.py b/tensorflow/python/autograph/impl/api.py
index ed29d55..8a6ea7e 100644
--- a/tensorflow/python/autograph/impl/api.py
+++ b/tensorflow/python/autograph/impl/api.py
@@ -601,7 +601,7 @@
argument called `self`.
For a tutorial, see the
- [tf.function and AutoGraph guide](https://www.tensorflow.org/beta/guide/autograph).
+ [tf.function and AutoGraph guide](https://www.tensorflow.org/guide/function).
For more detailed information, see the
[AutoGraph reference documentation](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/autograph/g3doc/reference/index.md).
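As an illustrative aside (not part of the patch): the `self` note above exists because `tf.function` also applies to methods; a minimal sketch, assuming TF 2.x:

```python
import tensorflow as tf

class Counter(tf.Module):

  def __init__(self):
    self.total = tf.Variable(0.0)

  @tf.function  # the bound `self` argument is handled like any Python object
  def add(self, x):
    self.total.assign_add(x)
    return self.total

counter = Counter()
print(counter.add(tf.constant(2.0)))  # tf.Tensor(2.0, ...)
```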
diff --git a/tensorflow/python/distribute/distribute_lib.py b/tensorflow/python/distribute/distribute_lib.py
index e8a2dce..a13c7da 100644
--- a/tensorflow/python/distribute/distribute_lib.py
+++ b/tensorflow/python/distribute/distribute_lib.py
@@ -15,8 +15,8 @@
"""Library for running a computation across multiple devices.
See the guide for overview and examples:
-[TensorFlow v1.x](https://www.tensorflow.org/guide/distribute_strategy),
-[TensorFlow v2.x](https://www.tensorflow.org/alpha/guide/distribute_strategy).
+[TensorFlow v2.x](https://www.tensorflow.org/guide/distributed_training),
+[TensorFlow v1.x](https://github.com/tensorflow/docs/blob/master/site/en/r1/guide/distribute_strategy.ipynb). # pylint: disable=line-too-long
The intent of this library is that you can write an algorithm in a stylized way
and it will be usable with a variety of different `tf.distribute.Strategy`
@@ -416,18 +416,18 @@
class Strategy(object):
"""A state & compute distribution policy on a list of devices.
- See [the guide](https://www.tensorflow.org/alpha/guide/distribute_strategy)
+ See [the guide](https://www.tensorflow.org/guide/distributed_training)
for overview and examples.
In short:
* To use it with Keras `compile`/`fit`,
[please
- read](https://www.tensorflow.org/alpha/guide/distribute_strategy#using_tfdistributestrategy_with_keras).
+ read](https://www.tensorflow.org/guide/distributed_training#using_tfdistributestrategy_with_keras).
* You may pass descendant of `tf.distribute.Strategy` to
`tf.estimator.RunConfig` to specify how a `tf.estimator.Estimator`
should distribute its computation. See
- [guide](https://www.tensorflow.org/alpha/guide/distribute_strategy#using_tfdistributestrategy_with_estimator).
+ [guide](https://www.tensorflow.org/guide/distributed_training#using_tfdistributestrategy_with_estimator_limited_support).
* Otherwise, use `tf.distribute.Strategy.scope` to specify that a
strategy should be used when building and executing your model.
(This puts you in the "cross-replica context" for this strategy, which
@@ -435,7 +435,7 @@
* If you are writing a custom training loop, you will need to call a few more
methods,
[see the
- guide](https://www.tensorflow.org/alpha/guide/distribute_strategy#using_tfdistributestrategy_with_custom_training_loops):
+ guide](https://www.tensorflow.org/guide/distributed_training#using_tfdistributestrategy_with_custom_training_loops):
* Start by either creating a `tf.data.Dataset` normally or using
`tf.distribute.experimental_make_numpy_dataset` to make a dataset out of
@@ -491,7 +491,7 @@
See the
[custom training loop
- tutorial](https://www.tensorflow.org/alpha/tutorials/distribute/training_loops)
+ tutorial](https://www.tensorflow.org/tutorials/distribute/custom_training)
for a more detailed example.
Note: `tf.distribute.Strategy` currently does not support TensorFlow's
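For context on the Keras `compile`/`fit` path these docstrings reference, a minimal sketch with `MirroredStrategy` (TF 2.x assumed; shapes and hyperparameters are illustrative):

```python
import tensorflow as tf

strategy = tf.distribute.MirroredStrategy()
with strategy.scope():  # variables created here are mirrored across replicas
  model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(4,))])
  model.compile(optimizer="sgd", loss="mse")

features = tf.random.normal([64, 4])
labels = tf.random.normal([64, 1])
model.fit(features, labels, batch_size=16, epochs=1)  # fit handles distribution
```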
diff --git a/tensorflow/python/eager/def_function.py b/tensorflow/python/eager/def_function.py
index 3a0da77..d353474 100644
--- a/tensorflow/python/eager/def_function.py
+++ b/tensorflow/python/eager/def_function.py
@@ -565,7 +565,7 @@
"due to passing python objects instead of tensors. Also, tf.function "
"has experimental_relax_shapes=True option that relaxes argument "
"shapes that can avoid unnecessary retracing. Please refer to "
- "https://www.tensorflow.org/beta/tutorials/eager/tf_function#python_or_tensor_args"
+ "https://www.tensorflow.org/tutorials/customization/performance#python_or_tensor_args"
" and https://www.tensorflow.org/api_docs/python/tf/function for more "
"details.".format(recent_tracing_count, self._call_counter.call_count,
self._python_function))
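The retracing warning above fires in situations like this sketch (TF 2.x assumed):

```python
import tensorflow as tf

@tf.function
def square(x):
  print("tracing")  # printed only when a new concrete function is traced
  return x * x

square(tf.constant(1.0))  # traces once for float32 scalars
square(tf.constant(2.0))  # reuses the existing trace
square(3.0)               # Python float: one trace per distinct value
square(4.0)               # traces again -- the pattern the warning flags
```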
@@ -1112,7 +1112,7 @@
autograph: Whether autograph should be applied on `func` before tracing a
graph. Data-dependent control flow requires `autograph=True`. For more
information, see the [tf.function and AutoGraph guide](
- https://www.tensorflow.org/beta/guide/autograph).
+ https://www.tensorflow.org/guide/function).
experimental_implements: If provided, contains a name of a "known" function
this implements. For example "mycompany.my_recurrent_cell".
This is stored as an attribute in inference function,
diff --git a/tensorflow/python/keras/engine/network.py b/tensorflow/python/keras/engine/network.py
index 1076844..4845980 100644
--- a/tensorflow/python/keras/engine/network.py
+++ b/tensorflow/python/keras/engine/network.py
@@ -1025,7 +1025,7 @@
means saving a `tf.keras.Model` using `save_weights` and loading into a
`tf.train.Checkpoint` with a `Model` attached (or vice versa) will not match
the `Model`'s variables. See the [guide to training
- checkpoints](https://www.tensorflow.org/alpha/guide/checkpoints) for details
+ checkpoints](https://www.tensorflow.org/guide/checkpoint) for details
on the TensorFlow format.
Arguments:
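For the TensorFlow-format checkpoints discussed above, a hedged round-trip sketch (the path prefix is arbitrary; TF 2.x assumed):

```python
import tensorflow as tf

model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(3,))])
model.save_weights("/tmp/tf_ckpt/weights", save_format="tf")  # TF checkpoint format
model.load_weights("/tmp/tf_ckpt/weights")
```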
diff --git a/tensorflow/python/keras/layers/core.py b/tensorflow/python/keras/layers/core.py
index 44f92e5..c66c87f 100644
--- a/tensorflow/python/keras/layers/core.py
+++ b/tensorflow/python/keras/layers/core.py
@@ -89,7 +89,7 @@
```
See [the masking and padding
- guide](https://www.tensorflow.org/beta/guide/keras/masking_and_padding)
+ guide](https://www.tensorflow.org/guide/keras/masking_and_padding)
for more details.
"""
diff --git a/tensorflow/python/keras/layers/recurrent.py b/tensorflow/python/keras/layers/recurrent.py
index 875cfee..37ac80d 100644
--- a/tensorflow/python/keras/layers/recurrent.py
+++ b/tensorflow/python/keras/layers/recurrent.py
@@ -189,7 +189,7 @@
class RNN(Layer):
"""Base class for recurrent layers.
- See [the Keras RNN API guide](https://www.tensorflow.org/beta/guide/keras/rnn)
+ See [the Keras RNN API guide](https://www.tensorflow.org/guide/keras/rnn)
for details about the usage of the RNN API.
Arguments:
@@ -982,7 +982,7 @@
class AbstractRNNCell(Layer):
"""Abstract object representing an RNN cell.
- See [the Keras RNN API guide](https://www.tensorflow.org/beta/guide/keras/rnn)
+ See [the Keras RNN API guide](https://www.tensorflow.org/guide/keras/rnn)
for details about the usage of the RNN API.
This is the base class for implementing RNN cells with custom behavior.
@@ -1202,7 +1202,7 @@
class SimpleRNNCell(DropoutRNNCellMixin, Layer):
"""Cell class for SimpleRNN.
- See [the Keras RNN API guide](https://www.tensorflow.org/beta/guide/keras/rnn)
+ See [the Keras RNN API guide](https://www.tensorflow.org/guide/keras/rnn)
for details about the usage of the RNN API.
This class processes one step within the whole time sequence input, whereas
@@ -1393,7 +1393,7 @@
class SimpleRNN(RNN):
"""Fully-connected RNN where the output is to be fed back to input.
- See [the Keras RNN API guide](https://www.tensorflow.org/beta/guide/keras/rnn)
+ See [the Keras RNN API guide](https://www.tensorflow.org/guide/keras/rnn)
for details about the usage of the RNN API.
Arguments:
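A quick usage sketch for the RNN API these docstrings point at (TF 2.x assumed; shapes illustrative):

```python
import tensorflow as tf

x = tf.random.normal([32, 10, 8])  # batch of 32 sequences, 10 steps, 8 features
layer = tf.keras.layers.SimpleRNN(4, return_sequences=True)
print(layer(x).shape)  # (32, 10, 4)
```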
diff --git a/tensorflow/python/keras/layers/recurrent_v2.py b/tensorflow/python/keras/layers/recurrent_v2.py
index 1d18fcc..5ead09d 100644
--- a/tensorflow/python/keras/layers/recurrent_v2.py
+++ b/tensorflow/python/keras/layers/recurrent_v2.py
@@ -57,7 +57,7 @@
class GRUCell(recurrent.GRUCell):
"""Cell class for the GRU layer.
- See [the Keras RNN API guide](https://www.tensorflow.org/beta/guide/keras/rnn)
+ See [the Keras RNN API guide](https://www.tensorflow.org/guide/keras/rnn)
for details about the usage of the RNN API.
This class processes one step within the whole time sequence input, whereas
@@ -177,7 +177,7 @@
class GRU(recurrent.DropoutRNNCellMixin, recurrent.GRU):
"""Gated Recurrent Unit - Cho et al. 2014.
- See [the Keras RNN API guide](https://www.tensorflow.org/beta/guide/keras/rnn)
+ See [the Keras RNN API guide](https://www.tensorflow.org/guide/keras/rnn)
for details about the usage of the RNN API.
Based on available runtime hardware and constraints, this layer
@@ -763,7 +763,7 @@
class LSTMCell(recurrent.LSTMCell):
"""Cell class for the LSTM layer.
- See [the Keras RNN API guide](https://www.tensorflow.org/beta/guide/keras/rnn)
+ See [the Keras RNN API guide](https://www.tensorflow.org/guide/keras/rnn)
for details about the usage of the RNN API.
This class processes one step within the whole time sequence input, whereas
@@ -884,7 +884,7 @@
class LSTM(recurrent.DropoutRNNCellMixin, recurrent.LSTM):
"""Long Short-Term Memory layer - Hochreiter 1997.
- See [the Keras RNN API guide](https://www.tensorflow.org/beta/guide/keras/rnn)
+ See [the Keras RNN API guide](https://www.tensorflow.org/guide/keras/rnn)
for details about the usage of the RNN API.
Based on available runtime hardware and constraints, this layer
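For the v2 `LSTM`/`GRU` layers touched here, which select a cuDNN or generic kernel at runtime, a small sketch (TF 2.x assumed):

```python
import tensorflow as tf

x = tf.random.normal([32, 10, 8])
lstm = tf.keras.layers.LSTM(4, return_sequences=True, return_state=True)
outputs, state_h, state_c = lstm(x)
print(outputs.shape, state_h.shape, state_c.shape)  # (32, 10, 4) (32, 4) (32, 4)
```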
diff --git a/tensorflow/python/keras/losses.py b/tensorflow/python/keras/losses.py
index 3c0c511..cab25e6 100644
--- a/tensorflow/python/keras/losses.py
+++ b/tensorflow/python/keras/losses.py
@@ -62,7 +62,7 @@
'SUM_OVER_BATCH_SIZE' will raise an error.
Please see
- https://www.tensorflow.org/alpha/tutorials/distribute/training_loops for more
+ https://www.tensorflow.org/tutorials/distribute/custom_training for more
details on this.
You can implement 'SUM_OVER_BATCH_SIZE' using global batch size like:
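The snippet elided by this hunk follows the pattern quoted in the error-message text later in this diff; reconstructed as a sketch (`global_batch_size` is illustrative):

```python
import tensorflow as tf

global_batch_size = 64  # illustrative
loss_obj = tf.keras.losses.MeanSquaredError(
    reduction=tf.keras.losses.Reduction.NONE)  # keep per-example losses

def compute_loss(labels, predictions):
  per_example = loss_obj(labels, predictions)
  return tf.reduce_sum(per_example) * (1.0 / global_batch_size)
```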
@@ -83,7 +83,7 @@
When used with `tf.distribute.Strategy`, outside of built-in training
loops such as `tf.keras` `compile` and `fit`, using `AUTO` or
`SUM_OVER_BATCH_SIZE` will raise an error. Please see
- https://www.tensorflow.org/alpha/tutorials/distribute/training_loops
+ https://www.tensorflow.org/tutorials/distribute/custom_training
for more details on this.
name: Optional name for the op.
"""
@@ -169,7 +169,7 @@
'reduction=tf.keras.losses.Reduction.NONE)\n....\n'
' loss = tf.reduce_sum(loss_obj(labels, predictions)) * '
'(1. / global_batch_size)\n```\nPlease see '
- 'https://www.tensorflow.org/alpha/tutorials/distribute/training_loops'
+ 'https://www.tensorflow.org/tutorials/distribute/custom_training'
' for more details.')
if self.reduction == losses_utils.ReductionV2.AUTO:
@@ -190,7 +190,7 @@
When used with `tf.distribute.Strategy`, outside of built-in training
loops such as `tf.keras` `compile` and `fit`, using `AUTO` or
`SUM_OVER_BATCH_SIZE` will raise an error. Please see
- https://www.tensorflow.org/alpha/tutorials/distribute/training_loops
+ https://www.tensorflow.org/tutorials/distribute/custom_training
for more details on this.
name: (Optional) name for the loss.
**kwargs: The keyword arguments that are passed on to `fn`.
@@ -387,7 +387,7 @@
When used with `tf.distribute.Strategy`, outside of built-in training
loops such as `tf.keras` `compile` and `fit`, using `AUTO` or
`SUM_OVER_BATCH_SIZE` will raise an error. Please see
- https://www.tensorflow.org/alpha/tutorials/distribute/training_loops
+ https://www.tensorflow.org/tutorials/distribute/custom_training
for more details on this.
name: (Optional) Name for the op.
"""
@@ -451,7 +451,7 @@
When used with `tf.distribute.Strategy`, outside of built-in training
loops such as `tf.keras` `compile` and `fit`, using `AUTO` or
`SUM_OVER_BATCH_SIZE` will raise an error. Please see
- https://www.tensorflow.org/alpha/tutorials/distribute/training_loops
+ https://www.tensorflow.org/tutorials/distribute/custom_training
for more details on this.
name: Optional name for the op.
"""
@@ -512,7 +512,7 @@
When used with `tf.distribute.Strategy`, outside of built-in training
loops such as `tf.keras` `compile` and `fit`, using `AUTO` or
`SUM_OVER_BATCH_SIZE` will raise an error. Please see
- https://www.tensorflow.org/alpha/tutorials/distribute/training_loops
+ https://www.tensorflow.org/tutorials/distribute/custom_training
for more details on this.
name: Optional name for the op.
"""
@@ -746,7 +746,7 @@
When used with `tf.distribute.Strategy`, outside of built-in training
loops such as `tf.keras` `compile` and `fit`, using `AUTO` or
`SUM_OVER_BATCH_SIZE` will raise an error. Please see
- https://www.tensorflow.org/alpha/tutorials/distribute/training_loops
+ https://www.tensorflow.org/tutorials/distribute/custom_training
for more details on this.
name: Optional name for the op.
"""
@@ -1128,7 +1128,7 @@
When used with `tf.distribute.Strategy`, outside of built-in training
loops such as `tf.keras` `compile` and `fit`, using `AUTO` or
`SUM_OVER_BATCH_SIZE` will raise an error. Please see
- https://www.tensorflow.org/alpha/tutorials/distribute/training_loops
+ https://www.tensorflow.org/tutorials/distribute/custom_training
for more details on this.
name: Optional name for the op.
"""
diff --git a/tensorflow/python/keras/saving/saved_model/README.md b/tensorflow/python/keras/saving/saved_model/README.md
index 0c4a602..b0bf81c 100644
--- a/tensorflow/python/keras/saving/saved_model/README.md
+++ b/tensorflow/python/keras/saving/saved_model/README.md
@@ -15,8 +15,8 @@
Please see the links below for more details:
-- [Saved Model Guide](https://www.tensorflow.org/beta/guide/saved_model)
-- [Checkpoint Guide](https://www.tensorflow.org/beta/guide/checkpoints)
+- [Saved Model Guide](https://www.tensorflow.org/guide/saved_model)
+- [Checkpoint Guide](https://www.tensorflow.org/guide/checkpoint)
## Keras SavedModel implementation
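For context on the two guides linked above, a minimal save/restore round trip (TF 2.x assumed; the export path is arbitrary):

```python
import tensorflow as tf

model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(3,))])
tf.saved_model.save(model, "/tmp/exported_model")
restored = tf.saved_model.load("/tmp/exported_model")
```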
diff --git a/tensorflow/python/ops/losses/loss_reduction.py b/tensorflow/python/ops/losses/loss_reduction.py
index 7fdc791..829bc2f 100644
--- a/tensorflow/python/ops/losses/loss_reduction.py
+++ b/tensorflow/python/ops/losses/loss_reduction.py
@@ -48,9 +48,9 @@
(1. / global_batch_size)
```
- Please see
- https://www.tensorflow.org/alpha/tutorials/distribute/training_loops for
- more details on this.
+ Please see the
+ [custom training guide](https://www.tensorflow.org/tutorials/distribute/custom_training) # pylint: disable=line-too-long
+ for more details on this.
"""
AUTO = 'auto'
diff --git a/tensorflow/python/ops/variables.py b/tensorflow/python/ops/variables.py
index b24cf24..011e93e 100644
--- a/tensorflow/python/ops/variables.py
+++ b/tensorflow/python/ops/variables.py
@@ -264,7 +264,7 @@
@tf_export("Variable", v1=[])
class Variable(six.with_metaclass(VariableMetaclass, trackable.Trackable)):
- """See the [Variables Guide](https://tensorflow.org/beta/guide/variables).
+ """See the [variable guide](https://tensorflow.org/guide/variable).
A variable maintains shared, persistent state manipulated by a program.
@@ -322,9 +322,9 @@
>>> m.trainable_variables
(<tf.Variable ... shape=(1,) ... numpy=array([1.], dtype=float32)>,)
- This tracking then allows saving variable values to [training
- checkpoints](https://www.tensorflow.org/beta/guide/checkpoints), or to
- [SavedModels](https://www.tensorflow.org/beta/guide/saved_model) which include
+ This tracking then allows saving variable values to
+ [training checkpoints](https://www.tensorflow.org/guide/checkpoint), or to
+ [SavedModels](https://www.tensorflow.org/guide/saved_model) which include
serialized TensorFlow graphs.
Variables are often captured and manipulated by `tf.function`s. This works the
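To make the tracking and capture behavior above concrete, a small sketch (TF 2.x assumed):

```python
import tensorflow as tf

v = tf.Variable(1.0)  # shared, persistent state

@tf.function
def scale(x):
  return v * x  # `v` is captured by reference in the traced graph

v.assign_add(2.0)               # updates are visible to the captured reference
print(scale(tf.constant(3.0)))  # tf.Tensor(9.0, shape=(), dtype=float32)
```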
diff --git a/tensorflow/python/training/tracking/util.py b/tensorflow/python/training/tracking/util.py
index 5790d57..822b3dd 100644
--- a/tensorflow/python/training/tracking/util.py
+++ b/tensorflow/python/training/tracking/util.py
@@ -155,7 +155,7 @@
"load status object, e.g. "
"tf.train.Checkpoint.restore(...).expect_partial(), to silence these "
"warnings, or use assert_consumed() to make the check explicit. See "
- "https://www.tensorflow.org/alpha/guide/checkpoints#loading_mechanics"
+ "https://www.tensorflow.org/guide/checkpoint#loading_mechanics"
" for details.")
@@ -1412,7 +1412,7 @@
`save_weights` and loading into a `tf.train.Checkpoint` with a `Model`
attached (or vice versa) will not match the `Model`'s variables. See the
[guide to training
- checkpoints](https://www.tensorflow.org/alpha/guide/checkpoints) for
+ checkpoints](https://www.tensorflow.org/guide/checkpoint) for
details. Prefer `tf.train.Checkpoint` over `tf.keras.Model.save_weights` for
training checkpoints.
@@ -1749,7 +1749,7 @@
`save_weights` and loading into a `tf.train.Checkpoint` with a `Model`
attached (or vice versa) will not match the `Model`'s variables. See the
[guide to training
- checkpoints](https://www.tensorflow.org/alpha/guide/checkpoints) for
+ checkpoints](https://www.tensorflow.org/guide/checkpoint) for
details. Prefer `tf.train.Checkpoint` over `tf.keras.Model.save_weights` for
training checkpoints.
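The loading-mechanics warning above concerns deferred restores; a sketch of the explicit status checks (paths illustrative, TF 2.x assumed):

```python
import tensorflow as tf

ckpt = tf.train.Checkpoint(v=tf.Variable(1.0))
path = ckpt.save("/tmp/ckpt/demo")  # returns the prefixed save path

restored = tf.train.Checkpoint(v=tf.Variable(0.0))
status = restored.restore(path)
status.expect_partial()  # silence warnings about values left unrestored
# or status.assert_consumed() to require that every saved value matched
```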
diff --git a/tensorflow/tools/docs/generate2.py b/tensorflow/tools/docs/generate2.py
index 6982081..acbec29 100644
--- a/tensorflow/tools/docs/generate2.py
+++ b/tensorflow/tools/docs/generate2.py
@@ -162,7 +162,7 @@
other projects like [`tensorflow_io`](https://github.com/tensorflow/io), or
[`tensorflow_addons`](https://github.com/tensorflow/addons). For instructions
on how to upgrade see the
- [Migration guide](https://www.tensorflow.org/beta/guide/migration_guide).
+ [Migration guide](https://www.tensorflow.org/guide/migrate).
"""
else:
tf.raw_ops.__doc__ += _raw_ops_doc