|  | <html><body> | 
|  | <style> | 
|  |  | 
|  | body, h1, h2, h3, div, span, p, pre, a { | 
|  | margin: 0; | 
|  | padding: 0; | 
|  | border: 0; | 
|  | font-weight: inherit; | 
|  | font-style: inherit; | 
|  | font-size: 100%; | 
|  | font-family: inherit; | 
|  | vertical-align: baseline; | 
|  | } | 
|  |  | 
|  | body { | 
|  | font-size: 13px; | 
|  | padding: 1em; | 
|  | } | 
|  |  | 
|  | h1 { | 
|  | font-size: 26px; | 
|  | margin-bottom: 1em; | 
|  | } | 
|  |  | 
|  | h2 { | 
|  | font-size: 24px; | 
|  | margin-bottom: 1em; | 
|  | } | 
|  |  | 
|  | h3 { | 
|  | font-size: 20px; | 
|  | margin-bottom: 1em; | 
|  | margin-top: 1em; | 
|  | } | 
|  |  | 
|  | pre, code { | 
|  | line-height: 1.5; | 
|  | font-family: Monaco, 'DejaVu Sans Mono', 'Bitstream Vera Sans Mono', 'Lucida Console', monospace; | 
|  | } | 
|  |  | 
|  | pre { | 
|  | margin-top: 0.5em; | 
|  | } | 
|  |  | 
|  | h1, h2, h3, p { | 
font-family: Arial, sans-serif;
|  | } | 
|  |  | 
|  | h1, h2, h3 { | 
|  | border-bottom: solid #CCC 1px; | 
|  | } | 
|  |  | 
|  | .toc_element { | 
|  | margin-top: 0.5em; | 
|  | } | 
|  |  | 
|  | .firstline { | 
margin-left: 2em;
|  | } | 
|  |  | 
|  | .method  { | 
|  | margin-top: 1em; | 
|  | border: solid 1px #CCC; | 
|  | padding: 1em; | 
|  | background: #EEE; | 
|  | } | 
|  |  | 
|  | .details { | 
|  | font-weight: bold; | 
|  | font-size: 14px; | 
|  | } | 
|  |  | 
|  | </style> | 
|  |  | 
|  | <h1><a href="vision_v1.html">Cloud Vision API</a> . <a href="vision_v1.projects.html">projects</a> . <a href="vision_v1.projects.files.html">files</a></h1> | 
|  | <h2>Instance Methods</h2> | 
|  | <p class="toc_element"> | 
|  | <code><a href="#annotate">annotate(parent, body=None, x__xgafv=None)</a></code></p> | 
|  | <p class="firstline">Service that performs image detection and annotation for a batch of files.</p> | 
|  | <p class="toc_element"> | 
|  | <code><a href="#asyncBatchAnnotate">asyncBatchAnnotate(parent, body=None, x__xgafv=None)</a></code></p> | 
|  | <p class="firstline">Run asynchronous image detection and annotation for a list of generic</p> | 
|  | <h3>Method Details</h3> | 
|  | <div class="method"> | 
|  | <code class="details" id="annotate">annotate(parent, body=None, x__xgafv=None)</code> | 
<pre>Service that performs image detection and annotation for a batch of files.
Currently, only "application/pdf", "image/tiff" and "image/gif" are supported.

This service extracts at most 5 frames (GIF) or pages (PDF or TIFF) from each
file provided (customers can specify which 5 in AnnotateFileRequest.pages) and
performs detection and annotation on each extracted image.
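
Example (Python): a minimal sketch using the google-api-python-client library.
The parent value is taken from the example below, and `body` is assumed to be
built as described under Args (see the sketch after the request body schema):

    from googleapiclient.discovery import build

    # Build the Vision v1 client (typically uses Application Default Credentials).
    service = build('vision', 'v1')

    # Call this method; `body` follows the request schema documented below.
    request = service.projects().files().annotate(
        parent='projects/project-A/locations/eu', body=body)
    response = request.execute()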
|  |  | 
|  | Args: | 
|  | parent: string, Optional. Target project and location to make a call. | 
|  |  | 
|  | Format: `projects/{project-id}/locations/{location-id}`. | 
|  |  | 
|  | If no parent is specified, a region will be chosen automatically. | 
|  |  | 
|  | Supported location-ids: | 
|  | `us`: USA country only, | 
`asia`: East Asian areas, such as Japan and Taiwan,
|  | `eu`: The European Union. | 
|  |  | 
|  | Example: `projects/project-A/locations/eu`. (required) | 
|  | body: object, The request body. | 
|  | The object takes the form of: | 
|  |  | 
|  | { # A list of requests to annotate files using the BatchAnnotateFiles API. | 
|  | "parent": "A String", # Optional. Target project and location to make a call. | 
|  | # | 
|  | # Format: `projects/{project-id}/locations/{location-id}`. | 
|  | # | 
|  | # If no parent is specified, a region will be chosen automatically. | 
|  | # | 
|  | # Supported location-ids: | 
|  | #     `us`: USA country only, | 
|  | #     `asia`: East asia areas, like Japan, Taiwan, | 
|  | #     `eu`: The European Union. | 
|  | # | 
|  | # Example: `projects/project-A/locations/eu`. | 
|  | "requests": [ # Required. The list of file annotation requests. Right now we support only one | 
|  | # AnnotateFileRequest in BatchAnnotateFilesRequest. | 
|  | { # A request to annotate one single file, e.g. a PDF, TIFF or GIF file. | 
|  | "inputConfig": { # The desired input location and metadata. # Required. Information about the input file. | 
|  | "mimeType": "A String", # The type of the file. Currently only "application/pdf", "image/tiff" and | 
|  | # "image/gif" are supported. Wildcards are not supported. | 
|  | "content": "A String", # File content, represented as a stream of bytes. | 
# Note: As with all `bytes` fields, protocol buffers use a pure binary
|  | # representation, whereas JSON representations use base64. | 
|  | # | 
|  | # Currently, this field only works for BatchAnnotateFiles requests. It does | 
|  | # not work for AsyncBatchAnnotateFiles requests. | 
|  | "gcsSource": { # The Google Cloud Storage location where the input will be read from. # The Google Cloud Storage location to read the input from. | 
|  | "uri": "A String", # Google Cloud Storage URI for the input file. This must only be a | 
|  | # Google Cloud Storage object. Wildcards are not currently supported. | 
|  | }, | 
|  | }, | 
|  | "features": [ # Required. Requested features. | 
|  | { # The type of Google Cloud Vision API detection to perform, and the maximum | 
|  | # number of results to return for that type. Multiple `Feature` objects can | 
|  | # be specified in the `features` list. | 
|  | "type": "A String", # The feature type. | 
|  | "maxResults": 42, # Maximum number of results of this type. Does not apply to | 
|  | # `TEXT_DETECTION`, `DOCUMENT_TEXT_DETECTION`, or `CROP_HINTS`. | 
|  | "model": "A String", # Model to use for the feature. | 
|  | # Supported values: "builtin/stable" (the default if unset) and | 
|  | # "builtin/latest". | 
|  | }, | 
|  | ], | 
|  | "imageContext": { # Image context and/or feature-specific parameters. # Additional context that may accompany the image(s) in the file. | 
|  | "languageHints": [ # List of languages to use for TEXT_DETECTION. In most cases, an empty value | 
|  | # yields the best results since it enables automatic language detection. For | 
|  | # languages based on the Latin alphabet, setting `language_hints` is not | 
|  | # needed. In rare cases, when the language of the text in the image is known, | 
|  | # setting a hint will help get better results (although it will be a | 
|  | # significant hindrance if the hint is wrong). Text detection returns an | 
|  | # error if one or more of the specified languages is not one of the | 
|  | # [supported languages](https://cloud.google.com/vision/docs/languages). | 
|  | "A String", | 
|  | ], | 
|  | "webDetectionParams": { # Parameters for web detection request. # Parameters for web detection. | 
|  | "includeGeoResults": True or False, # Whether to include results derived from the geo information in the image. | 
|  | }, | 
|  | "latLongRect": { # Rectangle determined by min and max `LatLng` pairs. # Not used. | 
|  | "maxLatLng": { # An object representing a latitude/longitude pair. This is expressed as a pair # Max lat/long pair. | 
|  | # of doubles representing degrees latitude and degrees longitude. Unless | 
|  | # specified otherwise, this must conform to the | 
|  | # <a href="http://www.unoosa.org/pdf/icg/2012/template/WGS_84.pdf">WGS84 | 
|  | # standard</a>. Values must be within normalized ranges. | 
|  | "latitude": 3.14, # The latitude in degrees. It must be in the range [-90.0, +90.0]. | 
|  | "longitude": 3.14, # The longitude in degrees. It must be in the range [-180.0, +180.0]. | 
|  | }, | 
|  | "minLatLng": { # An object representing a latitude/longitude pair. This is expressed as a pair # Min lat/long pair. | 
|  | # of doubles representing degrees latitude and degrees longitude. Unless | 
|  | # specified otherwise, this must conform to the | 
|  | # <a href="http://www.unoosa.org/pdf/icg/2012/template/WGS_84.pdf">WGS84 | 
|  | # standard</a>. Values must be within normalized ranges. | 
|  | "latitude": 3.14, # The latitude in degrees. It must be in the range [-90.0, +90.0]. | 
|  | "longitude": 3.14, # The longitude in degrees. It must be in the range [-180.0, +180.0]. | 
|  | }, | 
|  | }, | 
|  | "cropHintsParams": { # Parameters for crop hints annotation request. # Parameters for crop hints annotation request. | 
|  | "aspectRatios": [ # Aspect ratios in floats, representing the ratio of the width to the height | 
|  | # of the image. For example, if the desired aspect ratio is 4/3, the | 
|  | # corresponding float value should be 1.33333.  If not specified, the | 
|  | # best possible crop is returned. The number of provided aspect ratios is | 
|  | # limited to a maximum of 16; any aspect ratios provided after the 16th are | 
|  | # ignored. | 
|  | 3.14, | 
|  | ], | 
|  | }, | 
|  | "productSearchParams": { # Parameters for a product search request. # Parameters for product search. | 
|  | "filter": "A String", # The filtering expression. This can be used to restrict search results based | 
# on Product labels. We currently support an AND of ORs of key-value
|  | # expressions, where each expression within an OR must have the same key. An | 
|  | # '=' should be used to connect the key and value. | 
|  | # | 
|  | # For example, "(color = red OR color = blue) AND brand = Google" is | 
|  | # acceptable, but "(color = red OR brand = Google)" is not acceptable. | 
|  | # "color: red" is not acceptable because it uses a ':' instead of an '='. | 
|  | "productSet": "A String", # The resource name of a ProductSet to be searched for similar images. | 
|  | # | 
|  | # Format is: | 
|  | # `projects/PROJECT_ID/locations/LOC_ID/productSets/PRODUCT_SET_ID`. | 
|  | "boundingPoly": { # A bounding polygon for the detected image annotation. # The bounding polygon around the area of interest in the image. | 
|  | # If it is not specified, system discretion will be applied. | 
|  | "normalizedVertices": [ # The bounding polygon normalized vertices. | 
|  | { # A vertex represents a 2D point in the image. | 
|  | # NOTE: the normalized vertex coordinates are relative to the original image | 
|  | # and range from 0 to 1. | 
|  | "y": 3.14, # Y coordinate. | 
|  | "x": 3.14, # X coordinate. | 
|  | }, | 
|  | ], | 
|  | "vertices": [ # The bounding polygon vertices. | 
|  | { # A vertex represents a 2D point in the image. | 
|  | # NOTE: the vertex coordinates are in the same scale as the original image. | 
|  | "x": 42, # X coordinate. | 
|  | "y": 42, # Y coordinate. | 
|  | }, | 
|  | ], | 
|  | }, | 
|  | "productCategories": [ # The list of product categories to search in. Currently, we only consider | 
|  | # the first category, and either "homegoods-v2", "apparel-v2", "toys-v2", | 
|  | # "packagedgoods-v1", or "general-v1" should be specified. The legacy | 
|  | # categories "homegoods", "apparel", and "toys" are still supported but will | 
|  | # be deprecated. For new products, please use "homegoods-v2", "apparel-v2", | 
|  | # or "toys-v2" for better product search accuracy. It is recommended to | 
|  | # migrate existing products to these categories as well. | 
|  | "A String", | 
|  | ], | 
|  | }, | 
|  | }, | 
|  | "pages": [ # Pages of the file to perform image annotation. | 
|  | # | 
# Pages start from 1; the first page of the file is page 1.
|  | # At most 5 pages are supported per request. Pages can be negative. | 
|  | # | 
|  | # Page 1 means the first page. | 
|  | # Page 2 means the second page. | 
|  | # Page -1 means the last page. | 
|  | # Page -2 means the second to the last page. | 
|  | # | 
|  | # If the file is GIF instead of PDF or TIFF, page refers to GIF frames. | 
|  | # | 
|  | # If this field is empty, by default the service performs image annotation | 
|  | # for the first 5 pages of the file. | 
|  | 42, | 
|  | ], | 
|  | }, | 
|  | ], | 
|  | } | 
|  |  | 
|  | x__xgafv: string, V1 error format. | 
|  | Allowed values | 
|  | 1 - v1 error format | 
|  | 2 - v2 error format | 
|  |  | 
|  | Returns: | 
|  | An object of the form: | 
|  |  | 
|  | { # A list of file annotation responses. | 
|  | "responses": [ # The list of file annotation responses, each response corresponding to each | 
|  | # AnnotateFileRequest in BatchAnnotateFilesRequest. | 
|  | { # Response to a single file annotation request. A file may contain one or more | 
|  | # images, which individually have their own responses. | 
|  | "responses": [ # Individual responses to images found within the file. This field will be | 
|  | # empty if the `error` field is set. | 
|  | { # Response to an image annotation request. | 
|  | "landmarkAnnotations": [ # If present, landmark detection has completed successfully. | 
|  | { # Set of detected entity features. | 
|  | "score": 3.14, # Overall score of the result. Range [0, 1]. | 
|  | "locations": [ # The location information for the detected entity. Multiple | 
|  | # `LocationInfo` elements can be present because one location may | 
|  | # indicate the location of the scene in the image, and another location | 
|  | # may indicate the location of the place where the image was taken. | 
|  | # Location information is usually present for landmarks. | 
|  | { # Detected entity location information. | 
|  | "latLng": { # An object representing a latitude/longitude pair. This is expressed as a pair # lat/long location coordinates. | 
|  | # of doubles representing degrees latitude and degrees longitude. Unless | 
|  | # specified otherwise, this must conform to the | 
|  | # <a href="http://www.unoosa.org/pdf/icg/2012/template/WGS_84.pdf">WGS84 | 
|  | # standard</a>. Values must be within normalized ranges. | 
|  | "latitude": 3.14, # The latitude in degrees. It must be in the range [-90.0, +90.0]. | 
|  | "longitude": 3.14, # The longitude in degrees. It must be in the range [-180.0, +180.0]. | 
|  | }, | 
|  | }, | 
|  | ], | 
|  | "mid": "A String", # Opaque entity ID. Some IDs may be available in | 
|  | # [Google Knowledge Graph Search | 
|  | # API](https://developers.google.com/knowledge-graph/). | 
|  | "confidence": 3.14, # **Deprecated. Use `score` instead.** | 
|  | # The accuracy of the entity detection in an image. | 
|  | # For example, for an image in which the "Eiffel Tower" entity is detected, | 
|  | # this field represents the confidence that there is a tower in the query | 
|  | # image. Range [0, 1]. | 
|  | "locale": "A String", # The language code for the locale in which the entity textual | 
|  | # `description` is expressed. | 
|  | "boundingPoly": { # A bounding polygon for the detected image annotation. # Image region to which this entity belongs. Not produced | 
|  | # for `LABEL_DETECTION` features. | 
|  | "normalizedVertices": [ # The bounding polygon normalized vertices. | 
|  | { # A vertex represents a 2D point in the image. | 
|  | # NOTE: the normalized vertex coordinates are relative to the original image | 
|  | # and range from 0 to 1. | 
|  | "y": 3.14, # Y coordinate. | 
|  | "x": 3.14, # X coordinate. | 
|  | }, | 
|  | ], | 
|  | "vertices": [ # The bounding polygon vertices. | 
|  | { # A vertex represents a 2D point in the image. | 
|  | # NOTE: the vertex coordinates are in the same scale as the original image. | 
|  | "x": 42, # X coordinate. | 
|  | "y": 42, # Y coordinate. | 
|  | }, | 
|  | ], | 
|  | }, | 
|  | "description": "A String", # Entity textual description, expressed in its `locale` language. | 
|  | "topicality": 3.14, # The relevancy of the ICA (Image Content Annotation) label to the | 
|  | # image. For example, the relevancy of "tower" is likely higher to an image | 
|  | # containing the detected "Eiffel Tower" than to an image containing a | 
|  | # detected distant towering building, even though the confidence that | 
|  | # there is a tower in each image may be the same. Range [0, 1]. | 
|  | "properties": [ # Some entities may have optional user-supplied `Property` (name/value) | 
# fields, such as a score or string that qualifies the entity.
|  | { # A `Property` consists of a user-supplied name/value pair. | 
|  | "value": "A String", # Value of the property. | 
|  | "uint64Value": "A String", # Value of numeric properties. | 
|  | "name": "A String", # Name of the property. | 
|  | }, | 
|  | ], | 
|  | }, | 
|  | ], | 
|  | "faceAnnotations": [ # If present, face detection has completed successfully. | 
|  | { # A face annotation object contains the results of face detection. | 
|  | "sorrowLikelihood": "A String", # Sorrow likelihood. | 
|  | "tiltAngle": 3.14, # Pitch angle, which indicates the upwards/downwards angle that the face is | 
|  | # pointing relative to the image's horizontal plane. Range [-180,180]. | 
|  | "fdBoundingPoly": { # A bounding polygon for the detected image annotation. # The `fd_bounding_poly` bounding polygon is tighter than the | 
|  | # `boundingPoly`, and encloses only the skin part of the face. Typically, it | 
|  | # is used to eliminate the face from any image analysis that detects the | 
|  | # "amount of skin" visible in an image. It is not based on the | 
|  | # landmarker results, only on the initial face detection, hence | 
|  | # the <code>fd</code> (face detection) prefix. | 
|  | "normalizedVertices": [ # The bounding polygon normalized vertices. | 
|  | { # A vertex represents a 2D point in the image. | 
|  | # NOTE: the normalized vertex coordinates are relative to the original image | 
|  | # and range from 0 to 1. | 
|  | "y": 3.14, # Y coordinate. | 
|  | "x": 3.14, # X coordinate. | 
|  | }, | 
|  | ], | 
|  | "vertices": [ # The bounding polygon vertices. | 
|  | { # A vertex represents a 2D point in the image. | 
|  | # NOTE: the vertex coordinates are in the same scale as the original image. | 
|  | "x": 42, # X coordinate. | 
|  | "y": 42, # Y coordinate. | 
|  | }, | 
|  | ], | 
|  | }, | 
|  | "landmarks": [ # Detected face landmarks. | 
|  | { # A face-specific landmark (for example, a face feature). | 
|  | "type": "A String", # Face landmark type. | 
|  | "position": { # A 3D position in the image, used primarily for Face detection landmarks. # Face landmark position. | 
|  | # A valid Position must have both x and y coordinates. | 
|  | # The position coordinates are in the same scale as the original image. | 
|  | "y": 3.14, # Y coordinate. | 
|  | "x": 3.14, # X coordinate. | 
|  | "z": 3.14, # Z coordinate (or depth). | 
|  | }, | 
|  | }, | 
|  | ], | 
|  | "surpriseLikelihood": "A String", # Surprise likelihood. | 
|  | "angerLikelihood": "A String", # Anger likelihood. | 
|  | "landmarkingConfidence": 3.14, # Face landmarking confidence. Range [0, 1]. | 
|  | "joyLikelihood": "A String", # Joy likelihood. | 
|  | "underExposedLikelihood": "A String", # Under-exposed likelihood. | 
|  | "panAngle": 3.14, # Yaw angle, which indicates the leftward/rightward angle that the face is | 
|  | # pointing relative to the vertical plane perpendicular to the image. Range | 
|  | # [-180,180]. | 
|  | "detectionConfidence": 3.14, # Detection confidence. Range [0, 1]. | 
|  | "blurredLikelihood": "A String", # Blurred likelihood. | 
|  | "headwearLikelihood": "A String", # Headwear likelihood. | 
|  | "boundingPoly": { # A bounding polygon for the detected image annotation. # The bounding polygon around the face. The coordinates of the bounding box | 
|  | # are in the original image's scale. | 
|  | # The bounding box is computed to "frame" the face in accordance with human | 
|  | # expectations. It is based on the landmarker results. | 
|  | # Note that one or more x and/or y coordinates may not be generated in the | 
|  | # `BoundingPoly` (the polygon will be unbounded) if only a partial face | 
|  | # appears in the image to be annotated. | 
|  | "normalizedVertices": [ # The bounding polygon normalized vertices. | 
|  | { # A vertex represents a 2D point in the image. | 
|  | # NOTE: the normalized vertex coordinates are relative to the original image | 
|  | # and range from 0 to 1. | 
|  | "y": 3.14, # Y coordinate. | 
|  | "x": 3.14, # X coordinate. | 
|  | }, | 
|  | ], | 
|  | "vertices": [ # The bounding polygon vertices. | 
|  | { # A vertex represents a 2D point in the image. | 
|  | # NOTE: the vertex coordinates are in the same scale as the original image. | 
|  | "x": 42, # X coordinate. | 
|  | "y": 42, # Y coordinate. | 
|  | }, | 
|  | ], | 
|  | }, | 
|  | "rollAngle": 3.14, # Roll angle, which indicates the amount of clockwise/anti-clockwise rotation | 
|  | # of the face relative to the image vertical about the axis perpendicular to | 
|  | # the face. Range [-180,180]. | 
|  | }, | 
|  | ], | 
|  | "cropHintsAnnotation": { # Set of crop hints that are used to generate new crops when serving images. # If present, crop hints have completed successfully. | 
|  | "cropHints": [ # Crop hint results. | 
|  | { # Single crop hint that is used to generate a new crop when serving an image. | 
|  | "confidence": 3.14, # Confidence of this being a salient region.  Range [0, 1]. | 
|  | "importanceFraction": 3.14, # Fraction of importance of this salient region with respect to the original | 
|  | # image. | 
|  | "boundingPoly": { # A bounding polygon for the detected image annotation. # The bounding polygon for the crop region. The coordinates of the bounding | 
|  | # box are in the original image's scale. | 
|  | "normalizedVertices": [ # The bounding polygon normalized vertices. | 
|  | { # A vertex represents a 2D point in the image. | 
|  | # NOTE: the normalized vertex coordinates are relative to the original image | 
|  | # and range from 0 to 1. | 
|  | "y": 3.14, # Y coordinate. | 
|  | "x": 3.14, # X coordinate. | 
|  | }, | 
|  | ], | 
|  | "vertices": [ # The bounding polygon vertices. | 
|  | { # A vertex represents a 2D point in the image. | 
|  | # NOTE: the vertex coordinates are in the same scale as the original image. | 
|  | "x": 42, # X coordinate. | 
|  | "y": 42, # Y coordinate. | 
|  | }, | 
|  | ], | 
|  | }, | 
|  | }, | 
|  | ], | 
|  | }, | 
|  | "labelAnnotations": [ # If present, label detection has completed successfully. | 
|  | { # Set of detected entity features. | 
|  | "score": 3.14, # Overall score of the result. Range [0, 1]. | 
|  | "locations": [ # The location information for the detected entity. Multiple | 
|  | # `LocationInfo` elements can be present because one location may | 
|  | # indicate the location of the scene in the image, and another location | 
|  | # may indicate the location of the place where the image was taken. | 
|  | # Location information is usually present for landmarks. | 
|  | { # Detected entity location information. | 
|  | "latLng": { # An object representing a latitude/longitude pair. This is expressed as a pair # lat/long location coordinates. | 
|  | # of doubles representing degrees latitude and degrees longitude. Unless | 
|  | # specified otherwise, this must conform to the | 
|  | # <a href="http://www.unoosa.org/pdf/icg/2012/template/WGS_84.pdf">WGS84 | 
|  | # standard</a>. Values must be within normalized ranges. | 
|  | "latitude": 3.14, # The latitude in degrees. It must be in the range [-90.0, +90.0]. | 
|  | "longitude": 3.14, # The longitude in degrees. It must be in the range [-180.0, +180.0]. | 
|  | }, | 
|  | }, | 
|  | ], | 
|  | "mid": "A String", # Opaque entity ID. Some IDs may be available in | 
|  | # [Google Knowledge Graph Search | 
|  | # API](https://developers.google.com/knowledge-graph/). | 
|  | "confidence": 3.14, # **Deprecated. Use `score` instead.** | 
|  | # The accuracy of the entity detection in an image. | 
|  | # For example, for an image in which the "Eiffel Tower" entity is detected, | 
|  | # this field represents the confidence that there is a tower in the query | 
|  | # image. Range [0, 1]. | 
|  | "locale": "A String", # The language code for the locale in which the entity textual | 
|  | # `description` is expressed. | 
|  | "boundingPoly": { # A bounding polygon for the detected image annotation. # Image region to which this entity belongs. Not produced | 
|  | # for `LABEL_DETECTION` features. | 
|  | "normalizedVertices": [ # The bounding polygon normalized vertices. | 
|  | { # A vertex represents a 2D point in the image. | 
|  | # NOTE: the normalized vertex coordinates are relative to the original image | 
|  | # and range from 0 to 1. | 
|  | "y": 3.14, # Y coordinate. | 
|  | "x": 3.14, # X coordinate. | 
|  | }, | 
|  | ], | 
|  | "vertices": [ # The bounding polygon vertices. | 
|  | { # A vertex represents a 2D point in the image. | 
|  | # NOTE: the vertex coordinates are in the same scale as the original image. | 
|  | "x": 42, # X coordinate. | 
|  | "y": 42, # Y coordinate. | 
|  | }, | 
|  | ], | 
|  | }, | 
|  | "description": "A String", # Entity textual description, expressed in its `locale` language. | 
|  | "topicality": 3.14, # The relevancy of the ICA (Image Content Annotation) label to the | 
|  | # image. For example, the relevancy of "tower" is likely higher to an image | 
|  | # containing the detected "Eiffel Tower" than to an image containing a | 
|  | # detected distant towering building, even though the confidence that | 
|  | # there is a tower in each image may be the same. Range [0, 1]. | 
|  | "properties": [ # Some entities may have optional user-supplied `Property` (name/value) | 
# fields, such as a score or string that qualifies the entity.
|  | { # A `Property` consists of a user-supplied name/value pair. | 
|  | "value": "A String", # Value of the property. | 
|  | "uint64Value": "A String", # Value of numeric properties. | 
|  | "name": "A String", # Name of the property. | 
|  | }, | 
|  | ], | 
|  | }, | 
|  | ], | 
|  | "productSearchResults": { # Results for a product search request. # If present, product search has completed successfully. | 
|  | "productGroupedResults": [ # List of results grouped by products detected in the query image. Each entry | 
|  | # corresponds to one bounding polygon in the query image, and contains the | 
|  | # matching products specific to that region. There may be duplicate product | 
|  | # matches in the union of all the per-product results. | 
|  | { # Information about the products similar to a single product in a query | 
|  | # image. | 
|  | "objectAnnotations": [ # List of generic predictions for the object in the bounding box. | 
|  | { # Prediction for what the object in the bounding box is. | 
|  | "score": 3.14, # Score of the result. Range [0, 1]. | 
|  | "languageCode": "A String", # The BCP-47 language code, such as "en-US" or "sr-Latn". For more | 
|  | # information, see | 
|  | # http://www.unicode.org/reports/tr35/#Unicode_locale_identifier. | 
|  | "mid": "A String", # Object ID that should align with EntityAnnotation mid. | 
|  | "name": "A String", # Object name, expressed in its `language_code` language. | 
|  | }, | 
|  | ], | 
|  | "boundingPoly": { # A bounding polygon for the detected image annotation. # The bounding polygon around the product detected in the query image. | 
|  | "normalizedVertices": [ # The bounding polygon normalized vertices. | 
|  | { # A vertex represents a 2D point in the image. | 
|  | # NOTE: the normalized vertex coordinates are relative to the original image | 
|  | # and range from 0 to 1. | 
|  | "y": 3.14, # Y coordinate. | 
|  | "x": 3.14, # X coordinate. | 
|  | }, | 
|  | ], | 
|  | "vertices": [ # The bounding polygon vertices. | 
|  | { # A vertex represents a 2D point in the image. | 
|  | # NOTE: the vertex coordinates are in the same scale as the original image. | 
|  | "x": 42, # X coordinate. | 
|  | "y": 42, # Y coordinate. | 
|  | }, | 
|  | ], | 
|  | }, | 
|  | "results": [ # List of results, one for each product match. | 
|  | { # Information about a product. | 
|  | "image": "A String", # The resource name of the image from the product that is the closest match | 
|  | # to the query. | 
|  | "product": { # A Product contains ReferenceImages. # The Product. | 
|  | "name": "A String", # The resource name of the product. | 
|  | # | 
|  | # Format is: | 
|  | # `projects/PROJECT_ID/locations/LOC_ID/products/PRODUCT_ID`. | 
|  | # | 
|  | # This field is ignored when creating a product. | 
|  | "displayName": "A String", # The user-provided name for this Product. Must not be empty. Must be at most | 
|  | # 4096 characters long. | 
|  | "description": "A String", # User-provided metadata to be stored with this product. Must be at most 4096 | 
|  | # characters long. | 
|  | "productCategory": "A String", # Immutable. The category for the product identified by the reference image. This should | 
|  | # be either "homegoods-v2", "apparel-v2", or "toys-v2". The legacy categories | 
|  | # "homegoods", "apparel", and "toys" are still supported, but these should | 
|  | # not be used for new products. | 
|  | "productLabels": [ # Key-value pairs that can be attached to a product. At query time, | 
|  | # constraints can be specified based on the product_labels. | 
|  | # | 
|  | # Note that integer values can be provided as strings, e.g. "1199". Only | 
# strings with integer values can match a range-based restriction, which
# will be supported soon.
|  | # | 
|  | # Multiple values can be assigned to the same key. One product may have up to | 
|  | # 500 product_labels. | 
|  | # | 
|  | # Notice that the total number of distinct product_labels over all products | 
|  | # in one ProductSet cannot exceed 1M, otherwise the product search pipeline | 
|  | # will refuse to work for that ProductSet. | 
|  | { # A product label represented as a key-value pair. | 
|  | "value": "A String", # The value of the label attached to the product. Cannot be empty and | 
|  | # cannot exceed 128 bytes. | 
|  | "key": "A String", # The key of the label attached to the product. Cannot be empty and cannot | 
|  | # exceed 128 bytes. | 
|  | }, | 
|  | ], | 
|  | }, | 
|  | "score": 3.14, # A confidence level on the match, ranging from 0 (no confidence) to | 
|  | # 1 (full confidence). | 
|  | }, | 
|  | ], | 
|  | }, | 
|  | ], | 
|  | "results": [ # List of results, one for each product match. | 
|  | { # Information about a product. | 
|  | "image": "A String", # The resource name of the image from the product that is the closest match | 
|  | # to the query. | 
|  | "product": { # A Product contains ReferenceImages. # The Product. | 
|  | "name": "A String", # The resource name of the product. | 
|  | # | 
|  | # Format is: | 
|  | # `projects/PROJECT_ID/locations/LOC_ID/products/PRODUCT_ID`. | 
|  | # | 
|  | # This field is ignored when creating a product. | 
|  | "displayName": "A String", # The user-provided name for this Product. Must not be empty. Must be at most | 
|  | # 4096 characters long. | 
|  | "description": "A String", # User-provided metadata to be stored with this product. Must be at most 4096 | 
|  | # characters long. | 
|  | "productCategory": "A String", # Immutable. The category for the product identified by the reference image. This should | 
|  | # be either "homegoods-v2", "apparel-v2", or "toys-v2". The legacy categories | 
|  | # "homegoods", "apparel", and "toys" are still supported, but these should | 
|  | # not be used for new products. | 
|  | "productLabels": [ # Key-value pairs that can be attached to a product. At query time, | 
|  | # constraints can be specified based on the product_labels. | 
|  | # | 
|  | # Note that integer values can be provided as strings, e.g. "1199". Only | 
# strings with integer values can match a range-based restriction, which
# will be supported soon.
|  | # | 
|  | # Multiple values can be assigned to the same key. One product may have up to | 
|  | # 500 product_labels. | 
|  | # | 
|  | # Notice that the total number of distinct product_labels over all products | 
|  | # in one ProductSet cannot exceed 1M, otherwise the product search pipeline | 
|  | # will refuse to work for that ProductSet. | 
|  | { # A product label represented as a key-value pair. | 
|  | "value": "A String", # The value of the label attached to the product. Cannot be empty and | 
|  | # cannot exceed 128 bytes. | 
|  | "key": "A String", # The key of the label attached to the product. Cannot be empty and cannot | 
|  | # exceed 128 bytes. | 
|  | }, | 
|  | ], | 
|  | }, | 
|  | "score": 3.14, # A confidence level on the match, ranging from 0 (no confidence) to | 
|  | # 1 (full confidence). | 
|  | }, | 
|  | ], | 
|  | "indexTime": "A String", # Timestamp of the index which provided these results. Products added to the | 
|  | # product set and products removed from the product set after this time are | 
|  | # not reflected in the current results. | 
|  | }, | 
|  | "localizedObjectAnnotations": [ # If present, localized object detection has completed successfully. | 
|  | # This will be sorted descending by confidence score. | 
|  | { # Set of detected objects with bounding boxes. | 
|  | "score": 3.14, # Score of the result. Range [0, 1]. | 
|  | "languageCode": "A String", # The BCP-47 language code, such as "en-US" or "sr-Latn". For more | 
|  | # information, see | 
|  | # http://www.unicode.org/reports/tr35/#Unicode_locale_identifier. | 
|  | "mid": "A String", # Object ID that should align with EntityAnnotation mid. | 
|  | "name": "A String", # Object name, expressed in its `language_code` language. | 
|  | "boundingPoly": { # A bounding polygon for the detected image annotation. # Image region to which this object belongs. This must be populated. | 
|  | "normalizedVertices": [ # The bounding polygon normalized vertices. | 
|  | { # A vertex represents a 2D point in the image. | 
|  | # NOTE: the normalized vertex coordinates are relative to the original image | 
|  | # and range from 0 to 1. | 
|  | "y": 3.14, # Y coordinate. | 
|  | "x": 3.14, # X coordinate. | 
|  | }, | 
|  | ], | 
|  | "vertices": [ # The bounding polygon vertices. | 
|  | { # A vertex represents a 2D point in the image. | 
|  | # NOTE: the vertex coordinates are in the same scale as the original image. | 
|  | "x": 42, # X coordinate. | 
|  | "y": 42, # Y coordinate. | 
|  | }, | 
|  | ], | 
|  | }, | 
|  | }, | 
|  | ], | 
|  | "error": { # The `Status` type defines a logical error model that is suitable for # If set, represents the error message for the operation. | 
|  | # Note that filled-in image annotations are guaranteed to be | 
|  | # correct, even when `error` is set. | 
|  | # different programming environments, including REST APIs and RPC APIs. It is | 
|  | # used by [gRPC](https://github.com/grpc). Each `Status` message contains | 
|  | # three pieces of data: error code, error message, and error details. | 
|  | # | 
|  | # You can find out more about this error model and how to work with it in the | 
|  | # [API Design Guide](https://cloud.google.com/apis/design/errors). | 
|  | "code": 42, # The status code, which should be an enum value of google.rpc.Code. | 
|  | "message": "A String", # A developer-facing error message, which should be in English. Any | 
|  | # user-facing error message should be localized and sent in the | 
|  | # google.rpc.Status.details field, or localized by the client. | 
|  | "details": [ # A list of messages that carry the error details.  There is a common set of | 
|  | # message types for APIs to use. | 
|  | { | 
|  | "a_key": "", # Properties of the object. Contains field @type with type URL. | 
|  | }, | 
|  | ], | 
|  | }, | 
|  | "fullTextAnnotation": { # TextAnnotation contains a structured representation of OCR extracted text. # If present, text (OCR) detection or document (OCR) text detection has | 
|  | # completed successfully. | 
|  | # This annotation provides the structural hierarchy for the OCR detected | 
|  | # text. | 
|  | # The hierarchy of an OCR extracted text structure is like this: | 
|  | #     TextAnnotation -> Page -> Block -> Paragraph -> Word -> Symbol | 
|  | # Each structural component, starting from Page, may further have their own | 
# properties. Properties describe detected languages, breaks, etc. Please refer
|  | # to the TextAnnotation.TextProperty message definition below for more | 
|  | # detail. | 
|  | "pages": [ # List of pages detected by OCR. | 
|  | { # Detected page from OCR. | 
|  | "blocks": [ # List of blocks of text, images etc on this page. | 
|  | { # Logical element on the page. | 
|  | "property": { # Additional information detected on the structural component. # Additional information detected for the block. | 
|  | "detectedBreak": { # Detected start or end of a structural component. # Detected start or end of a text segment. | 
|  | "type": "A String", # Detected break type. | 
|  | "isPrefix": True or False, # True if break prepends the element. | 
|  | }, | 
|  | "detectedLanguages": [ # A list of detected languages together with confidence. | 
|  | { # Detected language for a structural component. | 
|  | "languageCode": "A String", # The BCP-47 language code, such as "en-US" or "sr-Latn". For more | 
|  | # information, see | 
|  | # http://www.unicode.org/reports/tr35/#Unicode_locale_identifier. | 
|  | "confidence": 3.14, # Confidence of detected language. Range [0, 1]. | 
|  | }, | 
|  | ], | 
|  | }, | 
|  | "blockType": "A String", # Detected block type (text, image etc) for this block. | 
|  | "boundingBox": { # A bounding polygon for the detected image annotation. # The bounding box for the block. | 
|  | # The vertices are in the order of top-left, top-right, bottom-right, | 
|  | # bottom-left. When a rotation of the bounding box is detected the rotation | 
|  | # is represented as around the top-left corner as defined when the text is | 
|  | # read in the 'natural' orientation. | 
|  | # For example: | 
|  | # | 
|  | # * when the text is horizontal it might look like: | 
|  | # | 
|  | #         0----1 | 
|  | #         |    | | 
|  | #         3----2 | 
|  | # | 
|  | # * when it's rotated 180 degrees around the top-left corner it becomes: | 
|  | # | 
|  | #         2----3 | 
|  | #         |    | | 
|  | #         1----0 | 
|  | # | 
|  | #   and the vertex order will still be (0, 1, 2, 3). | 
|  | "normalizedVertices": [ # The bounding polygon normalized vertices. | 
|  | { # A vertex represents a 2D point in the image. | 
|  | # NOTE: the normalized vertex coordinates are relative to the original image | 
|  | # and range from 0 to 1. | 
|  | "y": 3.14, # Y coordinate. | 
|  | "x": 3.14, # X coordinate. | 
|  | }, | 
|  | ], | 
|  | "vertices": [ # The bounding polygon vertices. | 
|  | { # A vertex represents a 2D point in the image. | 
|  | # NOTE: the vertex coordinates are in the same scale as the original image. | 
|  | "x": 42, # X coordinate. | 
|  | "y": 42, # Y coordinate. | 
|  | }, | 
|  | ], | 
|  | }, | 
|  | "confidence": 3.14, # Confidence of the OCR results on the block. Range [0, 1]. | 
|  | "paragraphs": [ # List of paragraphs in this block (if this blocks is of type text). | 
|  | { # Structural unit of text representing a number of words in certain order. | 
|  | "property": { # Additional information detected on the structural component. # Additional information detected for the paragraph. | 
|  | "detectedBreak": { # Detected start or end of a structural component. # Detected start or end of a text segment. | 
|  | "type": "A String", # Detected break type. | 
|  | "isPrefix": True or False, # True if break prepends the element. | 
|  | }, | 
|  | "detectedLanguages": [ # A list of detected languages together with confidence. | 
|  | { # Detected language for a structural component. | 
|  | "languageCode": "A String", # The BCP-47 language code, such as "en-US" or "sr-Latn". For more | 
|  | # information, see | 
|  | # http://www.unicode.org/reports/tr35/#Unicode_locale_identifier. | 
|  | "confidence": 3.14, # Confidence of detected language. Range [0, 1]. | 
|  | }, | 
|  | ], | 
|  | }, | 
|  | "boundingBox": { # A bounding polygon for the detected image annotation. # The bounding box for the paragraph. | 
|  | # The vertices are in the order of top-left, top-right, bottom-right, | 
|  | # bottom-left. When a rotation of the bounding box is detected the rotation | 
|  | # is represented as around the top-left corner as defined when the text is | 
|  | # read in the 'natural' orientation. | 
|  | # For example: | 
|  | #   * when the text is horizontal it might look like: | 
|  | #      0----1 | 
|  | #      |    | | 
|  | #      3----2 | 
|  | #   * when it's rotated 180 degrees around the top-left corner it becomes: | 
|  | #      2----3 | 
|  | #      |    | | 
|  | #      1----0 | 
|  | #   and the vertex order will still be (0, 1, 2, 3). | 
|  | "normalizedVertices": [ # The bounding polygon normalized vertices. | 
|  | { # A vertex represents a 2D point in the image. | 
|  | # NOTE: the normalized vertex coordinates are relative to the original image | 
|  | # and range from 0 to 1. | 
|  | "y": 3.14, # Y coordinate. | 
|  | "x": 3.14, # X coordinate. | 
|  | }, | 
|  | ], | 
|  | "vertices": [ # The bounding polygon vertices. | 
|  | { # A vertex represents a 2D point in the image. | 
|  | # NOTE: the vertex coordinates are in the same scale as the original image. | 
|  | "x": 42, # X coordinate. | 
|  | "y": 42, # Y coordinate. | 
|  | }, | 
|  | ], | 
|  | }, | 
|  | "confidence": 3.14, # Confidence of the OCR results for the paragraph. Range [0, 1]. | 
|  | "words": [ # List of all words in this paragraph. | 
|  | { # A word representation. | 
|  | "boundingBox": { # A bounding polygon for the detected image annotation. # The bounding box for the word. | 
|  | # The vertices are in the order of top-left, top-right, bottom-right, | 
|  | # bottom-left. When a rotation of the bounding box is detected the rotation | 
|  | # is represented as around the top-left corner as defined when the text is | 
|  | # read in the 'natural' orientation. | 
|  | # For example: | 
|  | #   * when the text is horizontal it might look like: | 
|  | #      0----1 | 
|  | #      |    | | 
|  | #      3----2 | 
|  | #   * when it's rotated 180 degrees around the top-left corner it becomes: | 
|  | #      2----3 | 
|  | #      |    | | 
|  | #      1----0 | 
|  | #   and the vertex order will still be (0, 1, 2, 3). | 
|  | "normalizedVertices": [ # The bounding polygon normalized vertices. | 
|  | { # A vertex represents a 2D point in the image. | 
|  | # NOTE: the normalized vertex coordinates are relative to the original image | 
|  | # and range from 0 to 1. | 
|  | "y": 3.14, # Y coordinate. | 
|  | "x": 3.14, # X coordinate. | 
|  | }, | 
|  | ], | 
|  | "vertices": [ # The bounding polygon vertices. | 
|  | { # A vertex represents a 2D point in the image. | 
|  | # NOTE: the vertex coordinates are in the same scale as the original image. | 
|  | "x": 42, # X coordinate. | 
|  | "y": 42, # Y coordinate. | 
|  | }, | 
|  | ], | 
|  | }, | 
|  | "confidence": 3.14, # Confidence of the OCR results for the word. Range [0, 1]. | 
|  | "symbols": [ # List of symbols in the word. | 
|  | # The order of the symbols follows the natural reading order. | 
|  | { # A single symbol representation. | 
|  | "property": { # Additional information detected on the structural component. # Additional information detected for the symbol. | 
|  | "detectedBreak": { # Detected start or end of a structural component. # Detected start or end of a text segment. | 
|  | "type": "A String", # Detected break type. | 
|  | "isPrefix": True or False, # True if break prepends the element. | 
|  | }, | 
|  | "detectedLanguages": [ # A list of detected languages together with confidence. | 
|  | { # Detected language for a structural component. | 
|  | "languageCode": "A String", # The BCP-47 language code, such as "en-US" or "sr-Latn". For more | 
|  | # information, see | 
|  | # http://www.unicode.org/reports/tr35/#Unicode_locale_identifier. | 
|  | "confidence": 3.14, # Confidence of detected language. Range [0, 1]. | 
|  | }, | 
|  | ], | 
|  | }, | 
|  | "boundingBox": { # A bounding polygon for the detected image annotation. # The bounding box for the symbol. | 
|  | # The vertices are in the order of top-left, top-right, bottom-right, | 
|  | # bottom-left. When a rotation of the bounding box is detected the rotation | 
|  | # is represented as around the top-left corner as defined when the text is | 
|  | # read in the 'natural' orientation. | 
|  | # For example: | 
|  | #   * when the text is horizontal it might look like: | 
|  | #      0----1 | 
|  | #      |    | | 
|  | #      3----2 | 
|  | #   * when it's rotated 180 degrees around the top-left corner it becomes: | 
|  | #      2----3 | 
|  | #      |    | | 
|  | #      1----0 | 
|  | #   and the vertex order will still be (0, 1, 2, 3). | 
|  | "normalizedVertices": [ # The bounding polygon normalized vertices. | 
|  | { # A vertex represents a 2D point in the image. | 
|  | # NOTE: the normalized vertex coordinates are relative to the original image | 
|  | # and range from 0 to 1. | 
|  | "y": 3.14, # Y coordinate. | 
|  | "x": 3.14, # X coordinate. | 
|  | }, | 
|  | ], | 
|  | "vertices": [ # The bounding polygon vertices. | 
|  | { # A vertex represents a 2D point in the image. | 
|  | # NOTE: the vertex coordinates are in the same scale as the original image. | 
|  | "x": 42, # X coordinate. | 
|  | "y": 42, # Y coordinate. | 
|  | }, | 
|  | ], | 
|  | }, | 
|  | "confidence": 3.14, # Confidence of the OCR results for the symbol. Range [0, 1]. | 
|  | "text": "A String", # The actual UTF-8 representation of the symbol. | 
|  | }, | 
|  | ], | 
|  | "property": { # Additional information detected on the structural component. # Additional information detected for the word. | 
|  | "detectedBreak": { # Detected start or end of a structural component. # Detected start or end of a text segment. | 
|  | "type": "A String", # Detected break type. | 
|  | "isPrefix": True or False, # True if break prepends the element. | 
|  | }, | 
|  | "detectedLanguages": [ # A list of detected languages together with confidence. | 
|  | { # Detected language for a structural component. | 
|  | "languageCode": "A String", # The BCP-47 language code, such as "en-US" or "sr-Latn". For more | 
|  | # information, see | 
|  | # http://www.unicode.org/reports/tr35/#Unicode_locale_identifier. | 
|  | "confidence": 3.14, # Confidence of detected language. Range [0, 1]. | 
|  | }, | 
|  | ], | 
|  | }, | 
|  | }, | 
|  | ], | 
|  | }, | 
|  | ], | 
|  | }, | 
|  | ], | 
|  | "property": { # Additional information detected on the structural component. # Additional information detected on the page. | 
|  | "detectedBreak": { # Detected start or end of a structural component. # Detected start or end of a text segment. | 
|  | "type": "A String", # Detected break type. | 
|  | "isPrefix": True or False, # True if break prepends the element. | 
|  | }, | 
|  | "detectedLanguages": [ # A list of detected languages together with confidence. | 
|  | { # Detected language for a structural component. | 
|  | "languageCode": "A String", # The BCP-47 language code, such as "en-US" or "sr-Latn". For more | 
|  | # information, see | 
|  | # http://www.unicode.org/reports/tr35/#Unicode_locale_identifier. | 
|  | "confidence": 3.14, # Confidence of detected language. Range [0, 1]. | 
|  | }, | 
|  | ], | 
|  | }, | 
|  | "confidence": 3.14, # Confidence of the OCR results on the page. Range [0, 1]. | 
|  | "height": 42, # Page height. For PDFs the unit is points. For images (including | 
|  | # TIFFs) the unit is pixels. | 
|  | "width": 42, # Page width. For PDFs the unit is points. For images (including | 
|  | # TIFFs) the unit is pixels. | 
|  | }, | 
|  | ], | 
|  | "text": "A String", # UTF-8 text detected on the pages. | 
|  | }, | 
|  | "textAnnotations": [ # If present, text (OCR) detection has completed successfully. | 
|  | { # Set of detected entity features. | 
|  | "score": 3.14, # Overall score of the result. Range [0, 1]. | 
|  | "locations": [ # The location information for the detected entity. Multiple | 
|  | # `LocationInfo` elements can be present because one location may | 
|  | # indicate the location of the scene in the image, and another location | 
|  | # may indicate the location of the place where the image was taken. | 
|  | # Location information is usually present for landmarks. | 
|  | { # Detected entity location information. | 
|  | "latLng": { # An object representing a latitude/longitude pair. This is expressed as a pair # lat/long location coordinates. | 
|  | # of doubles representing degrees latitude and degrees longitude. Unless | 
|  | # specified otherwise, this must conform to the | 
|  | # <a href="http://www.unoosa.org/pdf/icg/2012/template/WGS_84.pdf">WGS84 | 
|  | # standard</a>. Values must be within normalized ranges. | 
|  | "latitude": 3.14, # The latitude in degrees. It must be in the range [-90.0, +90.0]. | 
|  | "longitude": 3.14, # The longitude in degrees. It must be in the range [-180.0, +180.0]. | 
|  | }, | 
|  | }, | 
|  | ], | 
|  | "mid": "A String", # Opaque entity ID. Some IDs may be available in | 
|  | # [Google Knowledge Graph Search | 
|  | # API](https://developers.google.com/knowledge-graph/). | 
|  | "confidence": 3.14, # **Deprecated. Use `score` instead.** | 
|  | # The accuracy of the entity detection in an image. | 
|  | # For example, for an image in which the "Eiffel Tower" entity is detected, | 
|  | # this field represents the confidence that there is a tower in the query | 
|  | # image. Range [0, 1]. | 
|  | "locale": "A String", # The language code for the locale in which the entity textual | 
|  | # `description` is expressed. | 
|  | "boundingPoly": { # A bounding polygon for the detected image annotation. # Image region to which this entity belongs. Not produced | 
|  | # for `LABEL_DETECTION` features. | 
|  | "normalizedVertices": [ # The bounding polygon normalized vertices. | 
|  | { # A vertex represents a 2D point in the image. | 
|  | # NOTE: the normalized vertex coordinates are relative to the original image | 
|  | # and range from 0 to 1. | 
|  | "y": 3.14, # Y coordinate. | 
|  | "x": 3.14, # X coordinate. | 
|  | }, | 
|  | ], | 
|  | "vertices": [ # The bounding polygon vertices. | 
|  | { # A vertex represents a 2D point in the image. | 
|  | # NOTE: the vertex coordinates are in the same scale as the original image. | 
|  | "x": 42, # X coordinate. | 
|  | "y": 42, # Y coordinate. | 
|  | }, | 
|  | ], | 
|  | }, | 
|  | "description": "A String", # Entity textual description, expressed in its `locale` language. | 
|  | "topicality": 3.14, # The relevancy of the ICA (Image Content Annotation) label to the | 
|  | # image. For example, the relevancy of "tower" is likely higher to an image | 
|  | # containing the detected "Eiffel Tower" than to an image containing a | 
|  | # detected distant towering building, even though the confidence that | 
|  | # there is a tower in each image may be the same. Range [0, 1]. | 
|  | "properties": [ # Some entities may have optional user-supplied `Property` (name/value) | 
# fields, such as a score or string that qualifies the entity.
|  | { # A `Property` consists of a user-supplied name/value pair. | 
|  | "value": "A String", # Value of the property. | 
|  | "uint64Value": "A String", # Value of numeric properties. | 
|  | "name": "A String", # Name of the property. | 
|  | }, | 
|  | ], | 
|  | }, | 
|  | ], | 
|  | "imagePropertiesAnnotation": { # Stores image properties, such as dominant colors. # If present, image properties were extracted successfully. | 
|  | "dominantColors": { # Set of dominant colors and their corresponding scores. # If present, dominant colors completed successfully. | 
|  | "colors": [ # RGB color values with their score and pixel fraction. | 
{ # Color information consists of RGB channels, score, and the fraction of
# the image that the color occupies.
|  | "score": 3.14, # Image-specific score for this color. Value in range [0, 1]. | 
|  | "pixelFraction": 3.14, # The fraction of pixels the color occupies in the image. | 
|  | # Value in range [0, 1]. | 
|  | "color": { # Represents a color in the RGBA color space. This representation is designed # RGB components of the color. | 
|  | # for simplicity of conversion to/from color representations in various | 
|  | # languages over compactness; for example, the fields of this representation | 
|  | # can be trivially provided to the constructor of "java.awt.Color" in Java; it | 
|  | # can also be trivially provided to UIColor's "+colorWithRed:green:blue:alpha" | 
|  | # method in iOS; and, with just a little work, it can be easily formatted into | 
|  | # a CSS "rgba()" string in JavaScript, as well. | 
|  | # | 
|  | # Note: this proto does not carry information about the absolute color space | 
|  | # that should be used to interpret the RGB value (e.g. sRGB, Adobe RGB, | 
|  | # DCI-P3, BT.2020, etc.). By default, applications SHOULD assume the sRGB color | 
|  | # space. | 
|  | # | 
|  | # Example (Java): | 
|  | # | 
|  | #      import com.google.type.Color; | 
|  | # | 
|  | #      // ... | 
|  | #      public static java.awt.Color fromProto(Color protocolor) { | 
|  | #        float alpha = protocolor.hasAlpha() | 
|  | #            ? protocolor.getAlpha().getValue() | 
|  | #            : 1.0; | 
|  | # | 
|  | #        return new java.awt.Color( | 
|  | #            protocolor.getRed(), | 
|  | #            protocolor.getGreen(), | 
|  | #            protocolor.getBlue(), | 
|  | #            alpha); | 
|  | #      } | 
|  | # | 
|  | #      public static Color toProto(java.awt.Color color) { | 
|  | #        float red = (float) color.getRed(); | 
|  | #        float green = (float) color.getGreen(); | 
|  | #        float blue = (float) color.getBlue(); | 
|  | #        float denominator = 255.0; | 
|  | #        Color.Builder resultBuilder = | 
|  | #            Color | 
|  | #                .newBuilder() | 
|  | #                .setRed(red / denominator) | 
|  | #                .setGreen(green / denominator) | 
|  | #                .setBlue(blue / denominator); | 
|  | #        int alpha = color.getAlpha(); | 
|  | #        if (alpha != 255) { | 
|  | #          result.setAlpha( | 
|  | #              FloatValue | 
|  | #                  .newBuilder() | 
|  | #                  .setValue(((float) alpha) / denominator) | 
|  | #                  .build()); | 
|  | #        } | 
|  | #        return resultBuilder.build(); | 
|  | #      } | 
|  | #      // ... | 
|  | # | 
|  | # Example (iOS / Obj-C): | 
|  | # | 
|  | #      // ... | 
|  | #      static UIColor* fromProto(Color* protocolor) { | 
|  | #         float red = [protocolor red]; | 
|  | #         float green = [protocolor green]; | 
|  | #         float blue = [protocolor blue]; | 
|  | #         FloatValue* alpha_wrapper = [protocolor alpha]; | 
|  | #         float alpha = 1.0; | 
|  | #         if (alpha_wrapper != nil) { | 
|  | #           alpha = [alpha_wrapper value]; | 
|  | #         } | 
|  | #         return [UIColor colorWithRed:red green:green blue:blue alpha:alpha]; | 
|  | #      } | 
|  | # | 
|  | #      static Color* toProto(UIColor* color) { | 
|  | #          CGFloat red, green, blue, alpha; | 
|  | #          if (![color getRed:&red green:&green blue:&blue alpha:&alpha]) { | 
|  | #            return nil; | 
|  | #          } | 
|  | #          Color* result = [[Color alloc] init]; | 
|  | #          [result setRed:red]; | 
|  | #          [result setGreen:green]; | 
|  | #          [result setBlue:blue]; | 
|  | #          if (alpha <= 0.9999) { | 
|  | #            [result setAlpha:floatWrapperWithValue(alpha)]; | 
|  | #          } | 
|  | #          [result autorelease]; | 
|  | #          return result; | 
|  | #     } | 
|  | #     // ... | 
|  | # | 
|  | #  Example (JavaScript): | 
|  | # | 
|  | #     // ... | 
|  | # | 
|  | #     var protoToCssColor = function(rgb_color) { | 
|  | #        var redFrac = rgb_color.red || 0.0; | 
|  | #        var greenFrac = rgb_color.green || 0.0; | 
|  | #        var blueFrac = rgb_color.blue || 0.0; | 
|  | #        var red = Math.floor(redFrac * 255); | 
|  | #        var green = Math.floor(greenFrac * 255); | 
|  | #        var blue = Math.floor(blueFrac * 255); | 
|  | # | 
|  | #        if (!('alpha' in rgb_color)) { | 
|  | #           return rgbToCssColor_(red, green, blue); | 
|  | #        } | 
|  | # | 
|  | #        var alphaFrac = rgb_color.alpha.value || 0.0; | 
|  | #        var rgbParams = [red, green, blue].join(','); | 
|  | #        return ['rgba(', rgbParams, ',', alphaFrac, ')'].join(''); | 
|  | #     }; | 
|  | # | 
|  | #     var rgbToCssColor_ = function(red, green, blue) { | 
|  | #       var rgbNumber = new Number((red << 16) | (green << 8) | blue); | 
|  | #       var hexString = rgbNumber.toString(16); | 
|  | #       var missingZeros = 6 - hexString.length; | 
|  | #       var resultBuilder = ['#']; | 
|  | #       for (var i = 0; i < missingZeros; i++) { | 
|  | #          resultBuilder.push('0'); | 
|  | #       } | 
|  | #       resultBuilder.push(hexString); | 
|  | #       return resultBuilder.join(''); | 
|  | #     }; | 
|  | # | 
|  | #     // ... | 
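|  | # | 
|  | #  Example (Python): | 
|  | # | 
|  | #     # An illustrative sketch added for this client library, not part of the | 
|  | #     # upstream proto documentation. The helper name is made up, and the JSON | 
|  | #     # form used by this API surface is assumed, where `alpha` is a plain | 
|  | #     # number when present. | 
|  | #     def proto_to_css_color(rgb_color): | 
|  | #         """Converts a Color message, given as a dict, to a CSS color string.""" | 
|  | #         red = int(rgb_color.get('red', 0.0) * 255) | 
|  | #         green = int(rgb_color.get('green', 0.0) * 255) | 
|  | #         blue = int(rgb_color.get('blue', 0.0) * 255) | 
|  | #         if 'alpha' not in rgb_color: | 
|  | #             return '#%02x%02x%02x' % (red, green, blue) | 
|  | #         return 'rgba(%d,%d,%d,%s)' % (red, green, blue, rgb_color['alpha']) | 
|  | # | 
|  | #     # ... | 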
|  | "red": 3.14, # The amount of red in the color as a value in the interval [0, 1]. | 
|  | "green": 3.14, # The amount of green in the color as a value in the interval [0, 1]. | 
|  | "blue": 3.14, # The amount of blue in the color as a value in the interval [0, 1]. | 
|  | "alpha": 3.14, # The fraction of this color that should be applied to the pixel. That is, | 
|  | # the final pixel color is defined by the equation: | 
|  | # | 
|  | #   pixel color = alpha * (this color) + (1.0 - alpha) * (background color) | 
|  | # | 
|  | # This means that a value of 1.0 corresponds to a solid color, whereas | 
|  | # a value of 0.0 corresponds to a completely transparent color. This | 
|  | # uses a wrapper message rather than a simple float scalar so that it is | 
|  | # possible to distinguish between a default value and the value being unset. | 
|  | # If omitted, this color object is to be rendered as a solid color | 
|  | # (as if the alpha value had been explicitly given with a value of 1.0). | 
|  | }, | 
|  | }, | 
|  | ], | 
|  | }, | 
|  | }, | 
|  | "logoAnnotations": [ # If present, logo detection has completed successfully. | 
|  | { # Set of detected entity features. | 
|  | "score": 3.14, # Overall score of the result. Range [0, 1]. | 
|  | "locations": [ # The location information for the detected entity. Multiple | 
|  | # `LocationInfo` elements can be present because one location may | 
|  | # indicate the location of the scene in the image, and another location | 
|  | # may indicate the location of the place where the image was taken. | 
|  | # Location information is usually present for landmarks. | 
|  | { # Detected entity location information. | 
|  | "latLng": { # An object representing a latitude/longitude pair. This is expressed as a pair # lat/long location coordinates. | 
|  | # of doubles representing degrees latitude and degrees longitude. Unless | 
|  | # specified otherwise, this must conform to the | 
|  | # <a href="http://www.unoosa.org/pdf/icg/2012/template/WGS_84.pdf">WGS84 | 
|  | # standard</a>. Values must be within normalized ranges. | 
|  | "latitude": 3.14, # The latitude in degrees. It must be in the range [-90.0, +90.0]. | 
|  | "longitude": 3.14, # The longitude in degrees. It must be in the range [-180.0, +180.0]. | 
|  | }, | 
|  | }, | 
|  | ], | 
|  | "mid": "A String", # Opaque entity ID. Some IDs may be available in | 
|  | # [Google Knowledge Graph Search | 
|  | # API](https://developers.google.com/knowledge-graph/). | 
|  | "confidence": 3.14, # **Deprecated. Use `score` instead.** | 
|  | # The accuracy of the entity detection in an image. | 
|  | # For example, for an image in which the "Eiffel Tower" entity is detected, | 
|  | # this field represents the confidence that there is a tower in the query | 
|  | # image. Range [0, 1]. | 
|  | "locale": "A String", # The language code for the locale in which the entity textual | 
|  | # `description` is expressed. | 
|  | "boundingPoly": { # A bounding polygon for the detected image annotation. # Image region to which this entity belongs. Not produced | 
|  | # for `LABEL_DETECTION` features. | 
|  | "normalizedVertices": [ # The bounding polygon normalized vertices. | 
|  | { # A vertex represents a 2D point in the image. | 
|  | # NOTE: the normalized vertex coordinates are relative to the original image | 
|  | # and range from 0 to 1. | 
|  | "y": 3.14, # Y coordinate. | 
|  | "x": 3.14, # X coordinate. | 
|  | }, | 
|  | ], | 
|  | "vertices": [ # The bounding polygon vertices. | 
|  | { # A vertex represents a 2D point in the image. | 
|  | # NOTE: the vertex coordinates are in the same scale as the original image. | 
|  | "x": 42, # X coordinate. | 
|  | "y": 42, # Y coordinate. | 
|  | }, | 
|  | ], | 
|  | }, | 
|  | "description": "A String", # Entity textual description, expressed in its `locale` language. | 
|  | "topicality": 3.14, # The relevancy of the ICA (Image Content Annotation) label to the | 
|  | # image. For example, the relevancy of "tower" is likely higher to an image | 
|  | # containing the detected "Eiffel Tower" than to an image containing a | 
|  | # detected distant towering building, even though the confidence that | 
|  | # there is a tower in each image may be the same. Range [0, 1]. | 
|  | "properties": [ # Some entities may have optional user-supplied `Property` (name/value) | 
|  | # fields, such as a score or string that qualifies the entity. | 
|  | { # A `Property` consists of a user-supplied name/value pair. | 
|  | "value": "A String", # Value of the property. | 
|  | "uint64Value": "A String", # Value of numeric properties. | 
|  | "name": "A String", # Name of the property. | 
|  | }, | 
|  | ], | 
|  | }, | 
|  | ], | 
|  | "context": { # If an image was produced from a file (e.g. a PDF), this message gives # If present, contextual information is needed to understand where this image | 
|  | # comes from. | 
|  | # information about the source of that image. | 
|  | "uri": "A String", # The URI of the file used to produce the image. | 
|  | "pageNumber": 42, # If the file was a PDF or TIFF, this field gives the page number within | 
|  | # the file used to produce the image. | 
|  | }, | 
|  | "webDetection": { # Relevant information for the image from the Internet. # If present, web detection has completed successfully. | 
|  | "visuallySimilarImages": [ # The visually similar image results. | 
|  | { # Metadata for online images. | 
|  | "score": 3.14, # (Deprecated) Overall relevancy score for the image. | 
|  | "url": "A String", # The result image URL. | 
|  | }, | 
|  | ], | 
|  | "bestGuessLabels": [ # The service's best guess as to the topic of the request image. | 
|  | # Inferred from similar images on the open web. | 
|  | { # Label to provide extra metadata for the web detection. | 
|  | "label": "A String", # Label for extra metadata. | 
|  | "languageCode": "A String", # The BCP-47 language code for `label`, such as "en-US" or "sr-Latn". | 
|  | # For more information, see | 
|  | # http://www.unicode.org/reports/tr35/#Unicode_locale_identifier. | 
|  | }, | 
|  | ], | 
|  | "fullMatchingImages": [ # Fully matching images from the Internet. | 
|  | # Can include resized copies of the query image. | 
|  | { # Metadata for online images. | 
|  | "score": 3.14, # (Deprecated) Overall relevancy score for the image. | 
|  | "url": "A String", # The result image URL. | 
|  | }, | 
|  | ], | 
|  | "webEntities": [ # Deduced entities from similar images on the Internet. | 
|  | { # Entity deduced from similar images on the Internet. | 
|  | "entityId": "A String", # Opaque entity ID. | 
|  | "description": "A String", # Canonical description of the entity, in English. | 
|  | "score": 3.14, # Overall relevancy score for the entity. | 
|  | # Not normalized and not comparable across different image queries. | 
|  | }, | 
|  | ], | 
|  | "pagesWithMatchingImages": [ # Web pages containing the matching images from the Internet. | 
|  | { # Metadata for web pages. | 
|  | "score": 3.14, # (Deprecated) Overall relevancy score for the web page. | 
|  | "partialMatchingImages": [ # Partial matching images on the page. | 
|  | # Those images are similar enough to share some key-point features. For | 
|  | # example an original image will likely have partial matching for its | 
|  | # crops. | 
|  | { # Metadata for online images. | 
|  | "score": 3.14, # (Deprecated) Overall relevancy score for the image. | 
|  | "url": "A String", # The result image URL. | 
|  | }, | 
|  | ], | 
|  | "url": "A String", # The result web page URL. | 
|  | "pageTitle": "A String", # Title for the web page, may contain HTML markups. | 
|  | "fullMatchingImages": [ # Fully matching images on the page. | 
|  | # Can include resized copies of the query image. | 
|  | { # Metadata for online images. | 
|  | "score": 3.14, # (Deprecated) Overall relevancy score for the image. | 
|  | "url": "A String", # The result image URL. | 
|  | }, | 
|  | ], | 
|  | }, | 
|  | ], | 
|  | "partialMatchingImages": [ # Partial matching images from the Internet. | 
|  | # Those images are similar enough to share some key-point features. For | 
|  | # example an original image will likely have partial matching for its crops. | 
|  | { # Metadata for online images. | 
|  | "score": 3.14, # (Deprecated) Overall relevancy score for the image. | 
|  | "url": "A String", # The result image URL. | 
|  | }, | 
|  | ], | 
|  | }, | 
|  | "safeSearchAnnotation": { # Set of features pertaining to the image, computed by computer vision # If present, safe-search annotation has completed successfully. | 
|  | # methods over safe-search verticals (for example, adult, spoof, medical, | 
|  | # violence). | 
|  | "adult": "A String", # Represents the adult content likelihood for the image. Adult content may | 
|  | # contain elements such as nudity, pornographic images or cartoons, or | 
|  | # sexual activities. | 
|  | "spoof": "A String", # Spoof likelihood. The likelihood that an modification | 
|  | # was made to the image's canonical version to make it appear | 
|  | # funny or offensive. | 
|  | "medical": "A String", # Likelihood that this is a medical image. | 
|  | "racy": "A String", # Likelihood that the request image contains racy content. Racy content may | 
|  | # include (but is not limited to) skimpy or sheer clothing, strategically | 
|  | # covered nudity, lewd or provocative poses, or close-ups of sensitive | 
|  | # body areas. | 
|  | "violence": "A String", # Likelihood that this image contains violent content. | 
|  | }, | 
|  | }, | 
|  | ], | 
|  | "inputConfig": { # The desired input location and metadata. # Information about the file for which this response is generated. | 
|  | "mimeType": "A String", # The type of the file. Currently only "application/pdf", "image/tiff" and | 
|  | # "image/gif" are supported. Wildcards are not supported. | 
|  | "content": "A String", # File content, represented as a stream of bytes. | 
|  | # Note: As with all `bytes` fields, protobuffers use a pure binary | 
|  | # representation, whereas JSON representations use base64. | 
|  | # | 
|  | # Currently, this field only works for BatchAnnotateFiles requests. It does | 
|  | # not work for AsyncBatchAnnotateFiles requests. | 
|  | "gcsSource": { # The Google Cloud Storage location where the input will be read from. # The Google Cloud Storage location to read the input from. | 
|  | "uri": "A String", # Google Cloud Storage URI for the input file. This must only be a | 
|  | # Google Cloud Storage object. Wildcards are not currently supported. | 
|  | }, | 
|  | }, | 
|  | "totalPages": 42, # This field gives the total number of pages in the file. | 
|  | "error": { # The `Status` type defines a logical error model that is suitable for # If set, represents the error message for the failed request. The | 
|  | # `responses` field will not be set in this case. | 
|  | # different programming environments, including REST APIs and RPC APIs. It is | 
|  | # used by [gRPC](https://github.com/grpc). Each `Status` message contains | 
|  | # three pieces of data: error code, error message, and error details. | 
|  | # | 
|  | # You can find out more about this error model and how to work with it in the | 
|  | # [API Design Guide](https://cloud.google.com/apis/design/errors). | 
|  | "code": 42, # The status code, which should be an enum value of google.rpc.Code. | 
|  | "message": "A String", # A developer-facing error message, which should be in English. Any | 
|  | # user-facing error message should be localized and sent in the | 
|  | # google.rpc.Status.details field, or localized by the client. | 
|  | "details": [ # A list of messages that carry the error details.  There is a common set of | 
|  | # message types for APIs to use. | 
|  | { | 
|  | "a_key": "", # Properties of the object. Contains field @type with type URL. | 
|  | }, | 
|  | ], | 
|  | }, | 
|  | }, | 
|  | ], | 
|  | }</pre> | 
|  | </div> | 
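|  |  | 
|  | <p>For illustration, a minimal sketch of calling this method with the google-api-python-client; the project ID, location, Cloud Storage URI, feature type, and page numbers below are placeholder assumptions, not values prescribed by this reference:</p> | 
|  | <pre> | 
|  | from googleapiclient.discovery import build | 
|  |  | 
|  | # Build the Cloud Vision v1 client (uses Application Default Credentials). | 
|  | service = build('vision', 'v1') | 
|  |  | 
|  | request_body = { | 
|  |     'requests': [ | 
|  |         { | 
|  |             # A PDF stored in Cloud Storage (placeholder bucket and object). | 
|  |             'inputConfig': { | 
|  |                 'gcsSource': {'uri': 'gs://my-bucket/my-file.pdf'}, | 
|  |                 'mimeType': 'application/pdf', | 
|  |             }, | 
|  |             'features': [{'type': 'DOCUMENT_TEXT_DETECTION'}], | 
|  |             # At most 5 pages are processed; here, the first two. | 
|  |             'pages': [1, 2], | 
|  |         }, | 
|  |     ], | 
|  | } | 
|  |  | 
|  | response = service.projects().files().annotate( | 
|  |     parent='projects/my-project/locations/eu', | 
|  |     body=request_body).execute() | 
|  |  | 
|  | # One AnnotateFileResponse per input file, each holding one | 
|  | # AnnotateImageResponse per extracted page. | 
|  | for file_response in response.get('responses', []): | 
|  |     for image_response in file_response.get('responses', []): | 
|  |         print(image_response.get('fullTextAnnotation', {}).get('text', '')) | 
|  | </pre> | 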
|  |  | 
|  | <div class="method"> | 
|  | <code class="details" id="asyncBatchAnnotate">asyncBatchAnnotate(parent, body=None, x__xgafv=None)</code> | 
|  | <pre>Run asynchronous image detection and annotation for a list of generic | 
|  | files, such as PDF files, which may contain multiple pages and multiple | 
|  | images per page. Progress and results can be retrieved through the | 
|  | `google.longrunning.Operations` interface. | 
|  | `Operation.metadata` contains `OperationMetadata` (metadata). | 
|  | `Operation.response` contains `AsyncBatchAnnotateFilesResponse` (results). | 
|  |  | 
|  | Args: | 
|  | parent: string, Optional. Target project and location to make a call. | 
|  |  | 
|  | Format: `projects/{project-id}/locations/{location-id}`. | 
|  |  | 
|  | If no parent is specified, a region will be chosen automatically. | 
|  |  | 
|  | Supported location-ids: | 
|  | `us`: USA country only, | 
|  | `asia`: East Asia areas, like Japan, Taiwan, | 
|  | `eu`: The European Union. | 
|  |  | 
|  | Example: `projects/project-A/locations/eu`. (required) | 
|  | body: object, The request body. | 
|  | The object takes the form of: | 
|  |  | 
|  | { # Multiple async file annotation requests are batched into a single service | 
|  | # call. | 
|  | "requests": [ # Required. Individual async file annotation requests for this batch. | 
|  | { # An offline file annotation request. | 
|  | "inputConfig": { # The desired input location and metadata. # Required. Information about the input file. | 
|  | "mimeType": "A String", # The type of the file. Currently only "application/pdf", "image/tiff" and | 
|  | # "image/gif" are supported. Wildcards are not supported. | 
|  | "content": "A String", # File content, represented as a stream of bytes. | 
|  | # Note: As with all `bytes` fields, protobuffers use a pure binary | 
|  | # representation, whereas JSON representations use base64. | 
|  | # | 
|  | # Currently, this field only works for BatchAnnotateFiles requests. It does | 
|  | # not work for AsyncBatchAnnotateFiles requests. | 
|  | "gcsSource": { # The Google Cloud Storage location where the input will be read from. # The Google Cloud Storage location to read the input from. | 
|  | "uri": "A String", # Google Cloud Storage URI for the input file. This must only be a | 
|  | # Google Cloud Storage object. Wildcards are not currently supported. | 
|  | }, | 
|  | }, | 
|  | "features": [ # Required. Requested features. | 
|  | { # The type of Google Cloud Vision API detection to perform, and the maximum | 
|  | # number of results to return for that type. Multiple `Feature` objects can | 
|  | # be specified in the `features` list. | 
|  | "type": "A String", # The feature type. | 
|  | "maxResults": 42, # Maximum number of results of this type. Does not apply to | 
|  | # `TEXT_DETECTION`, `DOCUMENT_TEXT_DETECTION`, or `CROP_HINTS`. | 
|  | "model": "A String", # Model to use for the feature. | 
|  | # Supported values: "builtin/stable" (the default if unset) and | 
|  | # "builtin/latest". | 
|  | }, | 
|  | ], | 
|  | "imageContext": { # Image context and/or feature-specific parameters. # Additional context that may accompany the image(s) in the file. | 
|  | "languageHints": [ # List of languages to use for TEXT_DETECTION. In most cases, an empty value | 
|  | # yields the best results since it enables automatic language detection. For | 
|  | # languages based on the Latin alphabet, setting `language_hints` is not | 
|  | # needed. In rare cases, when the language of the text in the image is known, | 
|  | # setting a hint will help get better results (although it will be a | 
|  | # significant hindrance if the hint is wrong). Text detection returns an | 
|  | # error if one or more of the specified languages is not one of the | 
|  | # [supported languages](https://cloud.google.com/vision/docs/languages). | 
|  | "A String", | 
|  | ], | 
|  | "webDetectionParams": { # Parameters for web detection request. # Parameters for web detection. | 
|  | "includeGeoResults": True or False, # Whether to include results derived from the geo information in the image. | 
|  | }, | 
|  | "latLongRect": { # Rectangle determined by min and max `LatLng` pairs. # Not used. | 
|  | "maxLatLng": { # An object representing a latitude/longitude pair. This is expressed as a pair # Max lat/long pair. | 
|  | # of doubles representing degrees latitude and degrees longitude. Unless | 
|  | # specified otherwise, this must conform to the | 
|  | # <a href="http://www.unoosa.org/pdf/icg/2012/template/WGS_84.pdf">WGS84 | 
|  | # standard</a>. Values must be within normalized ranges. | 
|  | "latitude": 3.14, # The latitude in degrees. It must be in the range [-90.0, +90.0]. | 
|  | "longitude": 3.14, # The longitude in degrees. It must be in the range [-180.0, +180.0]. | 
|  | }, | 
|  | "minLatLng": { # An object representing a latitude/longitude pair. This is expressed as a pair # Min lat/long pair. | 
|  | # of doubles representing degrees latitude and degrees longitude. Unless | 
|  | # specified otherwise, this must conform to the | 
|  | # <a href="http://www.unoosa.org/pdf/icg/2012/template/WGS_84.pdf">WGS84 | 
|  | # standard</a>. Values must be within normalized ranges. | 
|  | "latitude": 3.14, # The latitude in degrees. It must be in the range [-90.0, +90.0]. | 
|  | "longitude": 3.14, # The longitude in degrees. It must be in the range [-180.0, +180.0]. | 
|  | }, | 
|  | }, | 
|  | "cropHintsParams": { # Parameters for crop hints annotation request. # Parameters for crop hints annotation request. | 
|  | "aspectRatios": [ # Aspect ratios in floats, representing the ratio of the width to the height | 
|  | # of the image. For example, if the desired aspect ratio is 4/3, the | 
|  | # corresponding float value should be 1.33333.  If not specified, the | 
|  | # best possible crop is returned. The number of provided aspect ratios is | 
|  | # limited to a maximum of 16; any aspect ratios provided after the 16th are | 
|  | # ignored. | 
|  | 3.14, | 
|  | ], | 
|  | }, | 
|  | "productSearchParams": { # Parameters for a product search request. # Parameters for product search. | 
|  | "filter": "A String", # The filtering expression. This can be used to restrict search results based | 
|  | # on Product labels. We currently support an AND of OR of key-value | 
|  | # expressions, where each expression within an OR must have the same key. An | 
|  | # '=' should be used to connect the key and value. | 
|  | # | 
|  | # For example, "(color = red OR color = blue) AND brand = Google" is | 
|  | # acceptable, but "(color = red OR brand = Google)" is not acceptable. | 
|  | # "color: red" is not acceptable because it uses a ':' instead of an '='. | 
|  | "productSet": "A String", # The resource name of a ProductSet to be searched for similar images. | 
|  | # | 
|  | # Format is: | 
|  | # `projects/PROJECT_ID/locations/LOC_ID/productSets/PRODUCT_SET_ID`. | 
|  | "boundingPoly": { # A bounding polygon for the detected image annotation. # The bounding polygon around the area of interest in the image. | 
|  | # If it is not specified, system discretion will be applied. | 
|  | "normalizedVertices": [ # The bounding polygon normalized vertices. | 
|  | { # A vertex represents a 2D point in the image. | 
|  | # NOTE: the normalized vertex coordinates are relative to the original image | 
|  | # and range from 0 to 1. | 
|  | "y": 3.14, # Y coordinate. | 
|  | "x": 3.14, # X coordinate. | 
|  | }, | 
|  | ], | 
|  | "vertices": [ # The bounding polygon vertices. | 
|  | { # A vertex represents a 2D point in the image. | 
|  | # NOTE: the vertex coordinates are in the same scale as the original image. | 
|  | "x": 42, # X coordinate. | 
|  | "y": 42, # Y coordinate. | 
|  | }, | 
|  | ], | 
|  | }, | 
|  | "productCategories": [ # The list of product categories to search in. Currently, we only consider | 
|  | # the first category, and either "homegoods-v2", "apparel-v2", "toys-v2", | 
|  | # "packagedgoods-v1", or "general-v1" should be specified. The legacy | 
|  | # categories "homegoods", "apparel", and "toys" are still supported but will | 
|  | # be deprecated. For new products, please use "homegoods-v2", "apparel-v2", | 
|  | # or "toys-v2" for better product search accuracy. It is recommended to | 
|  | # migrate existing products to these categories as well. | 
|  | "A String", | 
|  | ], | 
|  | }, | 
|  | }, | 
|  | "outputConfig": { # The desired output location and metadata. # Required. The desired output location and metadata (e.g. format). | 
|  | "batchSize": 42, # The max number of response protos to put into each output JSON file on | 
|  | # Google Cloud Storage. | 
|  | # The valid range is [1, 100]. If not specified, the default value is 20. | 
|  | # | 
|  | # For example, for one pdf file with 100 pages, 100 response protos will | 
|  | # be generated. If `batch_size` = 20, then 5 json files each | 
|  | # containing 20 response protos will be written under the prefix | 
|  | # `gcs_destination`.`uri`. | 
|  | # | 
|  | # Currently, batch_size only applies to GcsDestination, with potential future | 
|  | # support for other output configurations. | 
|  | "gcsDestination": { # The Google Cloud Storage location where the output will be written to. # The Google Cloud Storage location to write the output(s) to. | 
|  | "uri": "A String", # Google Cloud Storage URI prefix where the results will be stored. Results | 
|  | # will be in JSON format and preceded by its corresponding input URI prefix. | 
|  | # This field can either represent a gcs file prefix or gcs directory. In | 
|  | # either case, the uri should be unique because in order to get all of the | 
|  | # output files, you will need to do a wildcard gcs search on the uri prefix | 
|  | # you provide. | 
|  | # | 
|  | # Examples: | 
|  | # | 
|  | # *    File Prefix: gs://bucket-name/here/filenameprefix   The output files | 
|  | # will be created in gs://bucket-name/here/ and the names of the | 
|  | # output files will begin with "filenameprefix". | 
|  | # | 
|  | # *    Directory Prefix: gs://bucket-name/some/location/   The output files | 
|  | # will be created in gs://bucket-name/some/location/ and the names of the | 
|  | # output files could be anything because there was no filename prefix | 
|  | # specified. | 
|  | # | 
|  | # If there are multiple outputs, each response is still an AnnotateFileResponse, | 
|  | # and each contains some subset of the full list of AnnotateImageResponse. | 
|  | # Multiple outputs can happen if, for example, the output JSON is too large | 
|  | # and overflows into multiple sharded files. | 
|  | }, | 
|  | }, | 
|  | }, | 
|  | ], | 
|  | "parent": "A String", # Optional. Target project and location to make a call. | 
|  | # | 
|  | # Format: `projects/{project-id}/locations/{location-id}`. | 
|  | # | 
|  | # If no parent is specified, a region will be chosen automatically. | 
|  | # | 
|  | # Supported location-ids: | 
|  | #     `us`: USA country only, | 
|  | #     `asia`: East Asia areas, like Japan, Taiwan, | 
|  | #     `eu`: The European Union. | 
|  | # | 
|  | # Example: `projects/project-A/locations/eu`. | 
|  | } | 
|  |  | 
|  | x__xgafv: string, V1 error format. | 
|  | Allowed values | 
|  | 1 - v1 error format | 
|  | 2 - v2 error format | 
|  |  | 
|  | Returns: | 
|  | An object of the form: | 
|  |  | 
|  | { # This resource represents a long-running operation that is the result of a | 
|  | # network API call. | 
|  | "done": True or False, # If the value is `false`, it means the operation is still in progress. | 
|  | # If `true`, the operation is completed, and either `error` or `response` is | 
|  | # available. | 
|  | "response": { # The normal response of the operation in case of success.  If the original | 
|  | # method returns no data on success, such as `Delete`, the response is | 
|  | # `google.protobuf.Empty`.  If the original method is standard | 
|  | # `Get`/`Create`/`Update`, the response should be the resource.  For other | 
|  | # methods, the response should have the type `XxxResponse`, where `Xxx` | 
|  | # is the original method name.  For example, if the original method name | 
|  | # is `TakeSnapshot()`, the inferred response type is | 
|  | # `TakeSnapshotResponse`. | 
|  | "a_key": "", # Properties of the object. Contains field @type with type URL. | 
|  | }, | 
|  | "name": "A String", # The server-assigned name, which is only unique within the same service that | 
|  | # originally returns it. If you use the default HTTP mapping, the | 
|  | # `name` should be a resource name ending with `operations/{unique_id}`. | 
|  | "error": { # The `Status` type defines a logical error model that is suitable for # The error result of the operation in case of failure or cancellation. | 
|  | # different programming environments, including REST APIs and RPC APIs. It is | 
|  | # used by [gRPC](https://github.com/grpc). Each `Status` message contains | 
|  | # three pieces of data: error code, error message, and error details. | 
|  | # | 
|  | # You can find out more about this error model and how to work with it in the | 
|  | # [API Design Guide](https://cloud.google.com/apis/design/errors). | 
|  | "code": 42, # The status code, which should be an enum value of google.rpc.Code. | 
|  | "message": "A String", # A developer-facing error message, which should be in English. Any | 
|  | # user-facing error message should be localized and sent in the | 
|  | # google.rpc.Status.details field, or localized by the client. | 
|  | "details": [ # A list of messages that carry the error details.  There is a common set of | 
|  | # message types for APIs to use. | 
|  | { | 
|  | "a_key": "", # Properties of the object. Contains field @type with type URL. | 
|  | }, | 
|  | ], | 
|  | }, | 
|  | "metadata": { # Service-specific metadata associated with the operation.  It typically | 
|  | # contains progress information and common metadata such as create time. | 
|  | # Some services might not provide such metadata.  Any method that returns a | 
|  | # long-running operation should document the metadata type, if any. | 
|  | "a_key": "", # Properties of the object. Contains field @type with type URL. | 
|  | }, | 
|  | }</pre> | 
|  | </div> | 
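|  |  | 
|  | <p>For illustration, a minimal sketch of starting an asynchronous request and polling the returned long-running operation with the google-api-python-client; the project ID, location, and Cloud Storage URIs are placeholder assumptions, and the polling resource is chosen on the assumption that a location-qualified parent yields an operation name of the form `projects/*/locations/*/operations/*`:</p> | 
|  | <pre> | 
|  | import time | 
|  |  | 
|  | from googleapiclient.discovery import build | 
|  |  | 
|  | service = build('vision', 'v1') | 
|  |  | 
|  | request_body = { | 
|  |     'requests': [ | 
|  |         { | 
|  |             'inputConfig': { | 
|  |                 'gcsSource': {'uri': 'gs://my-bucket/my-file.pdf'}, | 
|  |                 'mimeType': 'application/pdf', | 
|  |             }, | 
|  |             'features': [{'type': 'DOCUMENT_TEXT_DETECTION'}], | 
|  |             'outputConfig': { | 
|  |                 # Up to 20 response protos per output JSON file, written | 
|  |                 # under this Cloud Storage prefix. | 
|  |                 'gcsDestination': {'uri': 'gs://my-bucket/output/'}, | 
|  |                 'batchSize': 20, | 
|  |             }, | 
|  |         }, | 
|  |     ], | 
|  | } | 
|  |  | 
|  | operation = service.projects().files().asyncBatchAnnotate( | 
|  |     parent='projects/my-project/locations/eu', | 
|  |     body=request_body).execute() | 
|  |  | 
|  | # Poll the google.longrunning operation until `done` is true; the results | 
|  | # are then available as JSON files under the gcs_destination prefix. | 
|  | name = operation['name'] | 
|  | while not operation.get('done'): | 
|  |     time.sleep(5) | 
|  |     operation = service.projects().locations().operations().get( | 
|  |         name=name).execute() | 
|  |  | 
|  | if 'error' in operation: | 
|  |     raise RuntimeError(operation['error'].get('message')) | 
|  | </pre> | 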
|  |  | 
|  | </body></html> |