Reference pre-trained embedder models in Task Library documentation

PiperOrigin-RevId: 447830671
diff --git a/tensorflow/lite/g3doc/inference_with_metadata/task_library/image_searcher.md b/tensorflow/lite/g3doc/inference_with_metadata/task_library/image_searcher.md
index dac3dd7..74b9ee5 100644
--- a/tensorflow/lite/g3doc/inference_with_metadata/task_library/image_searcher.md
+++ b/tensorflow/lite/g3doc/inference_with_metadata/task_library/image_searcher.md
@@ -30,12 +30,14 @@
 
 Before using the `ImageSearcher` API, an index needs to be built based on the
 custom corpus of images to search into. This can be achieved using
-[Model Maker](https://www.tensorflow.org/lite/guide/model_maker).
+[Model Maker ImageSearcher API](https://www.tensorflow.org/lite/api_docs/python/tflite_model_maker/searcher).
 
 For this you will need:
 
 *   a TFLite image embedder model such as
-    [mobilenet v3](https://tfhub.dev/google/lite-model/imagenet/mobilenet_v3_small_100_224/feature_vector/5/metadata/1),
+    [MobileNet V3](https://tfhub.dev/google/lite-model/imagenet/mobilenet_v3_small_100_224/feature_vector/5/metadata/1).
+    See more pretrained embedder models (a.k.a. feature vector models) in the
+    [Google Image Modules collection on TensorFlow Hub](https://tfhub.dev/google/collections/image/1).
 *   your corpus of images.
 
 After this step, you should have a standalone TFLite searcher model (e.g.
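
For a concrete picture of this index-building step, here is a minimal sketch
using the Model Maker `searcher` package. The embedder filename, corpus path,
and ScaNN options are placeholder assumptions rather than documented defaults;
the linked ImageSearcher API docs remain the authoritative reference.

```python
from tflite_model_maker import searcher

# Embed every image in the corpus with the TFLite image embedder
# (e.g. the MobileNet V3 feature vector model referenced above).
# Both file paths below are hypothetical placeholders.
data_loader = searcher.ImageDataLoader.create("mobilenet_v3_embedder.tflite")
data_loader.load_from_folder("path/to/image_corpus/")

# Build the on-device ScaNN index and bundle it with the embedder into a
# standalone searcher model consumable by the Task Library ImageSearcher API.
model = searcher.Searcher.create_from_data(
    data_loader, searcher.ScaNNOptions(distance_measure="dot_product"))
model.export(
    export_filename="image_searcher.tflite",
    userinfo="",
    export_format=searcher.ExportFormat.TFLITE)
```
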
diff --git a/tensorflow/lite/g3doc/inference_with_metadata/task_library/text_searcher.md b/tensorflow/lite/g3doc/inference_with_metadata/task_library/text_searcher.md
index 2ef1e64..faa7ad2 100644
--- a/tensorflow/lite/g3doc/inference_with_metadata/task_library/text_searcher.md
+++ b/tensorflow/lite/g3doc/inference_with_metadata/task_library/text_searcher.md
@@ -31,13 +31,21 @@
 
 Before using the `TextSearcher` API, an index needs to be built based on the
 custom corpus of text to search into. This can be achieved using
-[Model Maker](https://www.tensorflow.org/lite/guide/model_maker).
+[Model Maker TextSearcher API](https://www.tensorflow.org/lite/tutorials/model_maker_text_searcher).
 
 For this you will need:
 
-*   a TFLite text embedder model such as the
-    [universal sentence encoder](https://tfhub.dev/google/lite-model/universal-sentence-encoder-qa-ondevice/1)
-    model,
+*   a TFLite text embedder model, such as the Universal Sentence Encoder. For
+    example:
+    *   the
+        [one](https://storage.googleapis.com/download.tensorflow.org/models/tflite_support/searcher/text_to_image_blogpost/text_embedder.tflite)
+        retrained in this
+        [Colab](https://github.com/tensorflow/tflite-support/blob/master/tensorflow_lite_support/examples/colab/on_device_text_to_image_search_tflite.ipynb),
+        which is optimized for on-device inference. It takes only 6ms to embed
+        a text query on a Pixel 6.
+    *   the
+        [quantized](https://tfhub.dev/google/lite-model/universal-sentence-encoder-qa-ondevice/1)
+        one, which is smaller than the model above but takes 38ms per embedding.
 *   your corpus of text.
 
 After this step, you should have a standalone TFLite searcher model (e.g.
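
Analogously, the TextSearcher index-building step can be sketched with the
Model Maker `searcher` package. The CSV filename, column names, and ScaNN
options below are illustrative assumptions; the linked TextSearcher tutorial
remains the authoritative reference.

```python
from tflite_model_maker import searcher

# Embed each row of the text corpus with the TFLite text embedder
# (e.g. one of the Universal Sentence Encoder variants referenced above).
# The CSV path and column names are hypothetical placeholders.
data_loader = searcher.TextDataLoader.create(
    "text_embedder.tflite", l2_normalize=True)
data_loader.load_from_csv(
    "corpus.csv", text_column="text", metadata_column="metadata")

# Build the ScaNN index and export a standalone searcher model for the
# Task Library TextSearcher API.
model = searcher.Searcher.create_from_data(
    data_loader, searcher.ScaNNOptions(distance_measure="dot_product"))
model.export(
    export_filename="text_searcher.tflite",
    userinfo="",
    export_format=searcher.ExportFormat.TFLITE)
```
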