use free ports in tf.data service docstring examples
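
Binding to port 0 lets the OS pick any free port, so the docstring examples
no longer depend on ports 5000-5002 being available when the doctests run.
A minimal sketch of the resulting pattern (assuming the API shown in the
diff below; the `processing_mode` value follows that release's docs and the
printed port is illustrative, varying from run to run):

    import tensorflow as tf

    # Port 0 asks the kernel for a free port; the resolved address is
    # reflected in dispatcher.target, e.g. "grpc://localhost:41717".
    dispatcher = tf.data.experimental.service.DispatchServer(port=0)
    dispatcher_address = dispatcher.target.split("://")[1]

    # Workers also bind to free ports and register with the dispatcher
    # using the resolved address rather than a hard-coded one.
    worker = tf.data.experimental.service.WorkerServer(
        port=0, dispatcher_address=dispatcher_address)

    dataset = tf.data.Dataset.range(5)
    dataset = dataset.map(lambda x: x * x)
    dataset = dataset.apply(
        tf.data.experimental.service.distribute(
            processing_mode="parallel_epochs", service=dispatcher.target))
    print(sorted(dataset.as_numpy_iterator()))  # [0, 1, 4, 9, 16]

Random free ports keep the doctests hermetic; the trade-off is that the
bound address must be read back from dispatcher.target instead of being
known up front.
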
diff --git a/tensorflow/python/data/experimental/ops/data_service_ops.py b/tensorflow/python/data/experimental/ops/data_service_ops.py
index 35554f0..205ed4f 100644
--- a/tensorflow/python/data/experimental/ops/data_service_ops.py
+++ b/tensorflow/python/data/experimental/ops/data_service_ops.py
@@ -321,14 +321,14 @@
To see the distributed operations in action, the `DispatchServer` should be
started first so that tf.data workers can register to it.
- >>> dispatcher = tf.data.experimental.service.DispatchServer(port=5000)
- >>> print(dispatcher.target) # prints grpc://localhost:5000
+ >>> dispatcher = tf.data.experimental.service.DispatchServer(port=0)
+ >>> # dispatcher.target now holds "grpc://localhost:<randomly chosen port>"
>>> dispatcher_address = dispatcher.target.split("://")[1]
>>> worker1 = tf.data.experimental.service.WorkerServer(
- ... port=5001, dispatcher_address=dispatcher_address)
+ ... port=0, dispatcher_address=dispatcher_address)
>>> worker2 = tf.data.experimental.service.WorkerServer(
- ... port=5002, dispatcher_address=dispatcher_address)
+ ... port=0, dispatcher_address=dispatcher_address)
>>> dataset = tf.data.Dataset.range(5)
>>> dataset = dataset.map(lambda x: x*x)
>>> dataset = dataset.apply(
@@ -342,7 +342,9 @@
In the above example, the dataset operations (before applying the `distribute`
function on the elements) will be executed on the tf.data workers,
and the elements are provided over RPC. The remaining transformations
- (after the call to `distribute`) will be executed locally.
+ (after the call to `distribute`) will be executed locally. Setting the port
+ to 0 when creating the dispatcher and the workers makes them bind to unused
+ ports chosen at random.
The `job_name` argument allows jobs to be shared across multiple
datasets. Instead of each dataset creating its own job, all
@@ -379,12 +381,6 @@
[0, 2, 3, 4]
```
- NOTE: The dispatcher server in the above mentioned examples is hosted at
- `grpc://localhost:5000` for demonstration purposes. However, the hostname
- and port can be modified as per the configuration and availability of
- resources. Also, to use protocols other than "grpc", they have to be
- registered by dynamically linking them into the tensorflow binary.
-
Job names must not be re-used across different training jobs within the
lifetime of the tf.data service. In general, the tf.data service is expected
to live for the duration of a single training job.