
Suji Matts

Analyze Images with the GCP Cloud Vision API

Overview

This article explores usage of multiple Google Cloud ML APIs, including the Cloud Vision API, Translation API, and Natural Language API. You’ll start by using Optical Character Recognition (OCR) to extract text from an image, then translate that text, and finally analyze it for deeper insights.

Key Steps

Creating an API Key: Begin by generating an API key to authenticate your requests to the Vision API.

Uploading an Image: You will create a Cloud Storage bucket to store your images and upload a sample image for text extraction.

Making a Vision API Request: Using curl, you'll construct a request to the Vision API to perform text detection on the uploaded image.

Translating the Text: With the extracted text, you’ll use the Translation API to convert it from French to English.

Analyzing the Text: Finally, leverage the Natural Language API to extract entities and analyze the translated text for additional insights.

1. Creating an API Key

To get started, you need an API key to authenticate your requests to the Vision API. Here’s how you do it:

Navigate to the API Credentials Section: In the Google Cloud Console, go to APIs & Services and then Credentials.

Generate an API Key: Click Create Credentials and select API Key from the dropdown menu. This key will allow you to make secure requests to the Vision API.

Store the API Key: After generating the key, copy it and store it as an environment variable in your Cloud Shell. This simplifies the process of including the key in your requests, making it easier to manage.
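If you are working in Cloud Shell, a minimal sketch of this setup looks like the following. The service names on the gcloud line are assumptions (skip it if the APIs are already enabled), and the key value is a placeholder for the key you copied:

# Enable the APIs used in this article (assumed service names; skip if already enabled).
gcloud services enable vision.googleapis.com translate.googleapis.com language.googleapis.com

# Store the copied key in an environment variable so later curl requests can reference it.
# Replace the placeholder value with your actual key.
export API_KEY=YOUR_API_KEY_HERE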

2. Uploading an Image

To process an image with the Vision API, you need to upload it to a Cloud Storage bucket:

Create a Cloud Storage Bucket
Navigate to the Cloud Storage browser in the Google Cloud Console and click Create Bucket. Choose a unique name for your bucket and configure access settings.

Set Access Permissions
Uncheck the box for Enforce public access prevention and select Fine-grained access control.

Upload the Image
After creating the bucket, download the sample image (e.g. https://cdn.qwiklabs.com/cBoI5P4dZ6k%2FAr5Mv7eME%2F0fCb4G6nIGB0odCXzpEa4%3D), save it as sign.jpg, and upload it to your bucket. Ensure that the image has public access so the Vision API can retrieve it for processing.
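If you prefer to do this from Cloud Shell, here is a rough equivalent using gsutil. The bucket name is a placeholder and must be globally unique, and the ACL command assumes you selected fine-grained access control:

# Create the bucket (replace my-bucket-name with your own unique name).
gsutil mb gs://my-bucket-name

# Download the sample image and save it as sign.jpg.
curl -o sign.jpg https://cdn.qwiklabs.com/cBoI5P4dZ6k%2FAr5Mv7eME%2F0fCb4G6nIGB0odCXzpEa4%3D

# Upload the image and make the object publicly readable.
gsutil cp sign.jpg gs://my-bucket-name/sign.jpg
gsutil acl ch -u AllUsers:R gs://my-bucket-name/sign.jpg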

3. Create your Cloud Vision API request

Create a file named ocr-request.json with the following content, replacing my-bucket-name with the name of your Cloud Storage bucket:

{
  "requests": [
      {
        "image": {
          "source": {
              "gcsImageUri": "gs://my-bucket-name/sign.jpg"
          }
        },
        "features": [
          {
            "type": "TEXT_DETECTION",
            "maxResults": 10
          }
        ]
      }
  ]
}

You're going to use the TEXT_DETECTION feature of the Cloud Vision API. This will run optical character recognition (OCR) on the image to extract text.
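If you just want to sanity-check the image first, the same OCR can also be run through the gcloud CLI, assuming the ml vision command group is available in your Cloud Shell (the rest of the article sticks with curl):

# Quick OCR check on the uploaded image via the CLI (bucket name is a placeholder).
gcloud ml vision detect-text gs://my-bucket-name/sign.jpg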

4. Call the text detection method

curl -s -X POST -H "Content-Type: application/json" --data-binary @ocr-request.json  https://vision.googleapis.com/v1/images:annotate?key=${API_KEY}

The first part of your response should look like the following:

{
  "responses": [
    {
      "textAnnotations": [
        {
          "locale": "fr",
          "description": "LE BIEN PUBLIC\nles dépêches\nPour Obama,\nla moutarde\nest\nde Dijon\n",
          "boundingPoly": {
            "vertices": [
              {
                "x": 138,
                "y": 40
              },
              {
                "x": 622,
                "y": 40
              },
              {
                "x": 622,
                "y": 795
              },
              {
                "x": 138,
                "y": 795
              }
            ]
          }
        },
        {
          "description": "LE",
          "boundingPoly": {
            "vertices": [
              {
                "x": 138,
                "y": 99
              },
              {
                "x": 274,
                "y": 82
              },
              {
                "x": 283,
                "y": 157
              },
              {
                "x": 147,
                "y": 173
              }
            ]
          }
        },
        {
          "description": "BIEN",
          "boundingPoly": {
            "vertices": [
              {
                "x": 291,
                "y": 79
              },
              {
                "x": 413,
                "y": 64
              },
              {
                "x": 422,
                "y": 139
              },
              {
                "x": 300,
                "y": 154
              }
            ]
          }
        },
        ...
      ]
    }
  ]
}


The OCR method is able to extract lots of text from the image.

The first piece of data you get back from textAnnotations is the entire block of text the API found in the image. This includes:

the language code (in this case fr for French)
a string of the text
a bounding box indicating where the text was found in the image

Then you get an object for each word found in the text with a bounding box for that specific word.

Run the following curl command to save the response to an ocr-response.json file so it can be referenced later:

curl -s -X POST -H "Content-Type: application/json" --data-binary @ocr-request.json  https://vision.googleapis.com/v1/images:annotate?key=${API_KEY} -o ocr-response.json
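To spot-check the fields described above without scrolling through the whole file, you can pull out just the locale and the full text block with jq (the same tool the later commands in this article rely on in Cloud Shell):

# Show only the detected language and the full extracted text.
jq '.responses[0].textAnnotations[0] | {locale, description}' ocr-response.json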

5. Send text from the image to the Translation API

The Translation API can translate text into 100+ languages. It can also detect the language of the input text. To translate the French text into English, pass the text and the language code for the target language (en) to the Translation API.
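As an aside, the language-detection capability mentioned above has its own endpoint. A minimal sketch of calling it with the same API key looks like this (the sample string is just an illustration):

# Detect the language of an arbitrary string without translating it.
curl -s -X POST "https://translation.googleapis.com/language/translate/v2/detect?key=${API_KEY}" \
  -H "Content-Type: application/json" \
  --data-binary '{"q": "Pour Obama, la moutarde est de Dijon"}'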

First, create a translation-request.json file and add the following to it:

{
  "q": "your_text_here",
  "target": "en"
}

Run this Bash command in Cloud Shell to extract the image text from the previous step and substitute it for your_text_here in translation-request.json (all in one command):

STR=$(jq .responses[0].textAnnotations[0].description ocr-response.json) && STR="${STR//\"}" && sed -i "s|your_text_here|$STR|g" translation-request.json
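You can confirm the substitution worked by printing the request file before sending it:

cat translation-request.json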

Now you're ready to call the Translation API. This command will also copy the response into a translation-response.json file:

curl -s -X POST -H "Content-Type: application/json" --data-binary @translation-request.json https://translation.googleapis.com/language/translate/v2?key=${API_KEY} -o translation-response.json

Run this command to inspect the file with the Translation API response:

cat translation-response.json


Now you can understand more of what the sign said!

{
  "data": {
    "translations": [
      {
        "translatedText": "TO THE PUBLIC GOOD the dispatches For Obama, the mustard is from Dijon",
        "detectedSourceLanguage": "fr"
      }
    ]
  }
}

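If you only want the translated sentence itself (for example, to pipe it somewhere else), jq can pull it straight out of the saved response:

# Print just the translated text, without quotes.
jq -r '.data.translations[0].translatedText' translation-response.json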

6. Analyzing the image's text with the Natural Language API

The Natural Language API helps you understand text by extracting entities, analyzing sentiment and syntax, and classifying text into categories. Use the analyzeEntities method to see what entities the Natural Language API can find in the text from your image.

To set up the API request, create a nl-request.json file with the following content:

{
  "document":{
    "type":"PLAIN_TEXT",
    "content":"your_text_here"
  },
  "encodingType":"UTF8"
}
Run this command in Cloud Shell to extract the translated text from the previous step and substitute it for your_text_here in nl-request.json:

STR=$(jq .data.translations[0].translatedText  translation-response.json) && STR="${STR//\"}" && sed -i "s|your_text_here|$STR|g" nl-request.json

Call the `analyzeEntities` endpoint of the Natural Language API with this curl request:

curl "https://language.googleapis.com/v1/documents:analyzeEntities?key=${API_KEY}" \
  -s -X POST -H "Content-Type: application/json" --data-binary @nl-request.json

If you scroll through the response you can see the entities the Natural Language API found:

{
  "entities": [
    {
      "name": "dispatches",
      "type": "OTHER",
      "metadata": {},
      "salience": 0.3560996,
      "mentions": [
        {
          "text": {
            "content": "dispatches",
            "beginOffset": 23
          },
          "type": "COMMON"
        }
      ]
    },
    {
      "name": "mustard",
      "type": "OTHER",
      "metadata": {},
      "salience": 0.2878307,
      "mentions": [
        {
          "text": {
            "content": "mustard",
            "beginOffset": 38
          },
          "type": "COMMON"
        }
      ]
    },
    {
      "name": "Obama",
      "type": "PERSON",
      "metadata": {
        "mid": "/m/02mjmr",
        "wikipedia_url": "https://en.wikipedia.org/wiki/Barack_Obama"
      },
      "salience": 0.16260329,
      "mentions": [
        {
          "text": {
            "content": "Obama",
            "beginOffset": 31
          },
          "type": "PROPER"
        }
      ]
    },
    {
      "name": "Dijon",
      "type": "LOCATION",
      "metadata": {
        "mid": "/m/0pbhz",
        "wikipedia_url": "https://en.wikipedia.org/wiki/Dijon"
      },
      "salience": 0.08129317,
      "mentions": [
        {
          "text": {
            "content": "Dijon",
            "beginOffset": 54
          },
          "type": "PROPER"
        }
      ]
    }
  ],
  "language": "en"
}

For entities that have a Wikipedia page, the API provides metadata including the URL of that page along with the entity's mid. The mid is an ID that maps to this entity in Google's Knowledge Graph. To get more information on it, you could call the Knowledge Graph API, passing it this ID.

For all entities, the Natural Language API tells you where the entity appeared in the text (mentions), the type of entity, and its salience (a [0,1] score indicating how important the entity is to the text as a whole). In addition to English, the Natural Language API also supports the languages listed in the Language Support reference.
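As a rough sketch of that follow-up lookup, the Knowledge Graph Search API accepts a mid via its ids parameter. This assumes that API is enabled for your project and reachable with the same key:

# Look up the Obama entity from the response above by its mid.
curl -s "https://kgsearch.googleapis.com/v1/entities:search?ids=/m/02mjmr&key=${API_KEY}"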
