[tf.data service] Comment why auto-sharding does not use num_replicas.

The `num_replicas` parameter is used internally by
`experimental_distribute_dataset` to rebatch the dataset:

https://github.com/tensorflow/tensorflow/blob/919f693420e35d00c8d0a42100837ae3718f7927/tensorflow/python/distribute/input_lib.py#L1246-L1250
https://github.com/tensorflow/tensorflow/blob/919f693420e35d00c8d0a42100837ae3718f7927/tensorflow/core/grappler/optimizers/data/auto_shard.cc#L577-L595

`rebatch` is not called outside `experimental_distribute_dataset`, so
the parameter is unused in the tf.data service auto-shard rewriter,
which simply pins it to 1.
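
A minimal sketch of the distinction, for illustration only (the two-GPU
device list and the service address are placeholders, not part of this
change):

    import tensorflow as tf

    # experimental_distribute_dataset rebatches: with 2 replicas in sync,
    # a global batch of 4 becomes per-replica batches of 2. The auto-shard
    # rewrite it triggers receives num_replicas = num_replicas_in_sync.
    strategy = tf.distribute.MirroredStrategy(devices=["/gpu:0", "/gpu:1"])
    dataset = tf.data.Dataset.range(8).batch(4)
    dist_dataset = strategy.experimental_distribute_dataset(dataset)

    # tf.data service shards across workers but never rebatches, so its
    # auto-shard rewrite pins num_replicas to 1; batch sizes are unchanged.
    dataset = tf.data.Dataset.range(8).batch(4)
    dataset = dataset.apply(tf.data.experimental.service.distribute(
        processing_mode="parallel_epochs",
        service="grpc://localhost:5000"))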

PiperOrigin-RevId: 396437959
Change-Id: Ib4bb7443db002e09c38992b61fb666dd8bef2a7b
diff --git a/tensorflow/core/data/service/auto_shard_rewriter.cc b/tensorflow/core/data/service/auto_shard_rewriter.cc
index 482dc19..e08fb8e 100644
--- a/tensorflow/core/data/service/auto_shard_rewriter.cc
+++ b/tensorflow/core/data/service/auto_shard_rewriter.cc
@@ -135,6 +135,8 @@
       worker_index_);
   (*config.mutable_parameter_map())[AutoShardDatasetOp::kAutoShardPolicy].set_i(
       auto_shard_policy_);
+  // This parameter is used internally by tf.distribute to rebatch the dataset.
+  // It is not used outside the context of `experimental_distribute_dataset`.
   (*config.mutable_parameter_map())[AutoShardDatasetOp::kNumReplicas].set_i(1);
   return config;
 }