Wesley Chun (@wescpy) for Google Workspace Developers

Getting started using Google APIs: API Keys (2/2)

Introduction

As we near the American holiday of Thanksgiving, I hope there's something to be thankful for in today's content: topics that Google documents poorly or not at all, techniques that seldom see the light of day (like using API keys, or using both platform and product client libraries), and a seemingly random collection of Google APIs used in a similar, somewhat consistent way. Welcome to the blog where I show you how to code Google APIs with Python and sometimes Node.js. If you're already familiar with using API keys, you can jump straight to the API samples below.

Review the previous post kicking off this short 2-part series on using API keys to access Google APIs. Now that you know what they are, this post explores where you can use them. Before going there, let's quickly review credentials types: Google APIs support 3 different types, summarized in the table below.

| Credential type | Data accessed by API | Example API families |
|---|---|---|
| API keys | Public data | Google Maps |
| OAuth client IDs | Data owned by (human) users | Google Workspace ("GWS") |
| Service accounts | Data owned by apps/projects | Google Cloud ("GCP") |

*Required credential type determined by API data "owners"*

 

The first series is all about OAuth client IDs, the predominant credential type used with Google Workspace (GWS) APIs. GWS APIs aren't the only APIs supporting OAuth client IDs, but they make up the vast majority because there are so many. (Yep, it's not just Docs, Sheets, and Slides... the list is long.) In a similar vein, OAuth client IDs aren't the only credentials accepted by GWS APIs either. You'll see one example in this post using an API key.

API keys are the primary (and only) credentials type accepted by Google Maps APIs, or at least that's the case at the time of this writing. But just like with OAuth client IDs, there are other Google APIs that accept API keys.

API key management

As covered in the previous post, API keys are unique strings that allow applications to access APIs for which API keys are an (not necessarily "the") accepted credential type. API keys ensure access only by legitimate/registered users and help track API usage on a per-user, per-app, or per-project basis, keeping that usage within established quotas and providing accurate billing for API usage/services.

Of the three supported credential types, API keys are the most straightforward and require the least amount of time to create, implement, and use. They're managed on the same credentials page as the others, as shown in the image below.

API keys section of credentials page

DevConsole: managing API keys on credentials page

 

The previous post went into some detail about how to create and protect API keys, but one of the key takeaways differentiating all the credentials types is that API keys are mainly used for APIs that access public data (locations in Google Maps, videos on YouTube, etc.). Soon you'll see how it works in practice with real code samples.

Using API keys

API keys are easy to "leak" or compromise, so it's best not only to apply the restrictions presented to you when you create them but also to physically protect them: don't code them in plain text, don't check them into GitHub, etc. Store them in a secure database or use a service like GCP Secret Manager.

Of course, this is a bit more work when you're only prototyping, so yes, you can paste them directly into source code while you're experimenting and switch things up when you're ready to commit your code. The base level of protection I'm using for the examples in this post is to store the API key as API_KEY in a file named settings.py, which means you'll only see an API_KEY import in the sample scripts. You need to have your own settings.py file in the same directory if you want to try these yourself.
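If you'd rather not keep a plain-text settings file around at all, another lightweight option is to read the key from an environment variable. The sketch below is my own suggestion, not from Google's documentation, and assumes you've exported a variable named API_KEY:

```python
import os

def get_api_key(var='API_KEY'):
    '''Fetch the API key from the environment, failing loudly if missing.'''
    key = os.environ.get(var)
    if not key:
        raise RuntimeError('Set the %s environment variable first' % var)
    return key
```

Swapping `from settings import API_KEY` for `API_KEY = get_api_key()` in any of the scripts below leaves the rest of the code unchanged.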

Billing may be required: Some Google APIs require billing to be enabled, even for free usage; for this post, those are the GCP and Google Maps APIs. You must enable billing in the Cloud Console in order to run the corresponding code samples. Google Maps allows $200USD of monthly usage for free, while GCP has a free tier for certain products, including the Natural Language API used in the sample below. In other words, running the sample scripts below should not incur billing so long as you stay under the free-tier limits. Billing is not required to run the code samples using GWS and YouTube APIs.

Google Maps (Geocoding API)

All Google Maps APIs require API keys; it's the only credential type they accept. The example below demonstrates usage of the Geocoding API, looking up a (public) location on Google Maps, specifically finding the geolocation (latitude/longitude) of a given street address:

'''
maps_geocode.py -- geocode address in Google Maps
'''
from __future__ import print_function
import googlemaps
from settings import API_KEY

ADDRESS = '1600 Amphitheatre Pkwy 94043'
GMAPS = googlemaps.Client(key=API_KEY)
res = GMAPS.geocode(ADDRESS)[0]
geo = res['geometry']['location']
print('** Geocode for %r: (%s, %s)' % (ADDRESS, geo['lat'], geo['lng']))

The script begins by importing the print() function for Python 2 users. While most new projects use Python 3, there are still many that depend on Python 2-only libraries, so this code sample is 2.x- and 3.x-compatible for that reason and to reach the widest audience possible. The next import is the Google Maps API client library, and finally, your API key is imported from settings.py.

The actual application code creates a constant for the address to geocode, followed by the initialization of the Maps API client. The real work happens on the next line, which calls the Maps Geocoding API and grabs the first result. The last line of the script displays the address and its geolocation. To run this code sample:

  1. Save maps_geocode.py script from above or the repo
  2. Enable billing
  3. Enable Google Maps Geocoding API
  4. Create an API key (and restrict it if possible)
  5. Save API key to settings.py as API_KEY='YOUR_API_KEY' (same folder)
  6. Install Google Maps Python API client library: pip install -U pip googlemaps # (or pip3)

When everything is set up, run it and see the Geocoding API do its thing:

$ python3 maps_geocode.py

** Geocode for '1600 Amphitheatre Pkwy 94043':
(37.4226277, -122.0841644)

$ python maps_geocode.py

** Geocode for u'1600 Amphitheatre Pkwy 94043':
(37.4226277, -122.0841644)

Notice the Python 2 output renders the Unicode string prefix u ahead of the address, whereas it's absent in Python 3, where Unicode strings are the default.
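To see exactly what the script is pulling apart without making a live call, here's the same lat/lng extraction run against a hand-built dict shaped like one element of the Geocoding API's response list (the sample below only carries the fields this post uses; real results have many more):

```python
# Simplified stand-in for one element of the list that
# googlemaps.Client.geocode() returns.
SAMPLE_RESULT = {
    'formatted_address': '1600 Amphitheatre Pkwy, Mountain View, CA 94043, USA',
    'geometry': {'location': {'lat': 37.4226277, 'lng': -122.0841644}},
}

def extract_geolocation(result):
    '''Pull the (lat, lng) pair out of a single geocode result dict.'''
    geo = result['geometry']['location']
    return geo['lat'], geo['lng']

print(extract_geolocation(SAMPLE_RESULT))  # (37.4226277, -122.0841644)
```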

Feel free to experiment with different addresses or other Maps APIs (there are many of them). This page in the documentation is intended to help you pick the right API for your use case(s). Since Maps API usage is billed, be sure to review the pricing page to ensure you don't incur charges unless intended. A more detailed pricing page can be found in the References section at the end of this post.

You now know how to geolocate a public place using a Maps API. If you'd like to continue exploring Maps, check out this post introducing the Maps APIs. Now let's explore how to display the contents of a public spreadsheet with the help of the Sheets API.

GWS (Google Sheets API)

Since you're already familiar with GWS APIs from previous posts, let's look at how to use the Sheets API to display the contents of a public spreadsheet. In most cases of GWS API usage, OAuth client ID credentials are required because the data is owned by a (human) user; however, if the Sheet is public, an API key suffices. The Sheets API is one of the few GWS APIs allowing API key usage. The following script reads the data from the public Sheet and displays it to the user:

'''
sheets_display.py — display rows of public spreadsheet
'''
from __future__ import print_function
from pprint import pprint
from googleapiclient import discovery
from settings import API_KEY

SHEET_ID = '1BxiMVs0XRA5nFMdKvBdBZjgmUUqptlbs74OgvE2upms'
SHEETS = discovery.build('sheets', 'v4', developerKey=API_KEY)
print('** Printing student database...')
pprint(SHEETS.spreadsheets().values().get(spreadsheetId=SHEET_ID,
        range='Class Data', fields='values').execute().get('values', []))

This script performs similar imports at the top, the differences being the addition of Python's pretty-printer (pprint) module and swapping the Maps client library for the general Google APIs client library for Python, googleapiclient. Rather than a constant for an address, there's one for the Drive file ID of the public spreadsheet. Note that this is the same spreadsheet featured in the Sheets API Python QuickStart.

Following this is the instantiation of the Sheets API client library followed by pretty-printing the student database spreadsheet. To run this script:

  1. Save sheets_display.py script from above or the repo
  2. Enable Google Sheets API
  3. Create an API key (if you haven't already)
  4. Save API key to settings.py as API_KEY='YOUR_API_KEY' (if you haven't already)
  5. Install Google APIs client library for Python: pip install -U pip google-api-python-client # (if you haven't already)

If you reuse the same API key from the Google Maps example, skip steps 3 & 4, and if you already installed the Google APIs client library for Python from the previous posts covering GWS APIs, skip step 5 as well. The most important thing is to enable the Sheets API once you have the code. With these completed, you can run the script:

$ python3 sheets_display.py
** Printing student database...
[['Student Name',
  'Gender',
  'Class Level',
  'Home State',
  'Major',
  'Extracurricular Activity'],
 ['Alexandra', 'Female', '4. Senior', 'CA', 'English', 'Drama Club'],
 ['Andrew', 'Male', '1. Freshman', 'SD', 'Math', 'Lacrosse'],
 ['Anna', 'Female', '1. Freshman', 'NC', 'English', 'Basketball'],
 ['Becky', 'Female', '2. Sophomore', 'SD', 'Art', 'Baseball'],
 ['Benjamin', 'Male', '4. Senior', 'WI', 'English', 'Basketball'],
    . . .
 ['Patrick', 'Male', '1. Freshman', 'NY', 'Art', 'Lacrosse'],
 ['Robert', 'Male', '1. Freshman', 'CA', 'English', 'Track & Field'],
 ['Sean', 'Male', '1. Freshman', 'NH', 'Physics', 'Track & Field'],
 ['Stacy', 'Female', '1. Freshman', 'NY', 'Math', 'Baseball'],
 ['Thomas', 'Male', '2. Sophomore', 'RI', 'Art', 'Lacrosse'],
 ['Will', 'Male', '4. Senior', 'FL', 'Math', 'Debate']]

From the output, besides the rows I cut out for brevity, you can see the data is a student information database with the columns highlighted in the first row returned above. Running this script on Python 2 may show something unexpected:

$ python sheets_display.py
** Printing student database...
[[u'Student Name',
  u'Gender',
  u'Class Level',
  u'Home State',
  u'Major',
  u'Extracurricular Activity'],
 [u'Alexandra', u'Female', u'4. Senior', u'CA', u'English', u'Drama Club'],
 [u'Andrew', u'Male', u'1. Freshman', u'SD', u'Math', u'Lacrosse'],
 [u'Anna', u'Female', u'1. Freshman', u'NC', u'English', u'Basketball'],
 [u'Becky', u'Female', u'2. Sophomore', u'SD', u'Art', u'Baseball'],
 [u'Benjamin', u'Male', u'4. Senior', u'WI', u'English', u'Basketball'],
    . . .

Strings coming from the API are in Unicode, so they look "normal" in Python 3, where Unicode is the default string type, but not in Python 2, whose default strings are byte strings. As a result, each string is prefixed with u in the output. Sheets API usage is "free" up to certain limits; familiarize yourself with them on the usage limits page.
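Since the first row of the returned values holds the column names, it's easy to reshape the remaining rows into friendlier records. The helper below is a generic sketch of mine, not part of the Sheets API:

```python
def rows_to_records(rows):
    '''Zip the header row with each data row to produce a list of dicts.'''
    header, data = rows[0], rows[1:]
    return [dict(zip(header, row)) for row in data]

# A small slice of the values the Sheets API call returns.
rows = [
    ['Student Name', 'Gender', 'Class Level', 'Home State', 'Major',
     'Extracurricular Activity'],
    ['Alexandra', 'Female', '4. Senior', 'CA', 'English', 'Drama Club'],
    ['Andrew', 'Male', '1. Freshman', 'SD', 'Math', 'Lacrosse'],
]
for rec in rows_to_records(rows):
    print('%s: %s (%s)' % (rec['Student Name'], rec['Major'], rec['Home State']))
```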

Armed with the ability to access public data using the Maps and GWS APIs, check out this video combining both technologies I made years ago showing developers how to access Google Maps from a spreadsheet!

Next up is using the YouTube Data API to search for (public) videos.

YouTube (YouTube Data API)

Like the Sheets API, YouTube APIs generally require OAuth client ID credentials because videos and their metadata as well as video playlists are ultimately owned by (human) users. However, if you're only querying for videos which have been published publicly, then using API keys is acceptable as the credentials type. The script below represents a barebones example of searching for YouTube videos.

Python

'''
yt_video_query.py — query for videos on YouTube
'''
from __future__ import print_function
from googleapiclient import discovery
from settings import API_KEY

QUERY = 'python -snake'
YOUTUBE = discovery.build('youtube', 'v3', developerKey=API_KEY)

print('\n** Searching for %r videos...' % QUERY)
res = YOUTUBE.search().list(q=QUERY, type='video',
        part='id,snippet').execute().get('items', [])
for item in res:
    print('http://youtu.be/%s\t%s' % (
            item['id']['videoId'], item['snippet']['title'][:48]))

The YouTube APIs use the same API client library as the GWS APIs, so there are no real changes to the imports at the top. Rather than using Python's pretty-printer, I format the strings myself in this script, which I originally wrote back in 2014. It's good to know Google APIs can maintain some level of consistency!

The script searches for Python videos while excluding any matching "snake" (the leading minus negates a search term). There's no guarantee the results will be coding videos, but the odds are certainly better. After the QUERY constant comes the instantiation of the YouTube Data API client. The query is executed, and the for-loop displays a short link for each video along with the first forty-eight (48) characters of its title. To run this script:

  1. Save yt_video_query.py script from above or the repo
  2. Enable YouTube Data API
  3. Create an API key (if you haven't already)
  4. Save API key to settings.py as API_KEY='YOUR_API_KEY' (if you haven't already)
  5. Install Google APIs client library for Python: pip install -U pip google-api-python-client # (if you haven't already)

The last three (3) steps are most likely optional if you ran either of the earlier scripts, so the key here is to enable the YouTube Data API. Once you're set up, run the script to see what's available:

$ python3 yt_video_query.py

** Searching for 'python -snake' videos...
http://youtu.be/rfscVS0vtbw     Learn Python - Full Course for Beginners [Tutori
http://youtu.be/_uQrJ0TkZlc     Python Tutorial - Python for Beginners [Full Cou
http://youtu.be/G2q6PUUDdRw     Crear un VIDEOJUEGO con PYTHON en 10 minutos | ¿
http://youtu.be/SjKEV8sDIAA     CS50 2021 - Lecture 6 - Python
http://youtu.be/t8pPdKYpowI     Python Tutorial for Beginners - Learn Python in
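The link-and-title formatting in the loop above can be exercised offline against a dict shaped like one search-result item (the ID and title below are made up for illustration):

```python
def format_video(item, width=48):
    '''Build a short link plus truncated title from one search result item.'''
    return 'http://youtu.be/%s\t%s' % (
            item['id']['videoId'], item['snippet']['title'][:width])

# Hypothetical item mimicking the shape of a search().list() result.
sample = {
    'id': {'videoId': 'abc123xyz00'},
    'snippet': {'title':
        'A very long hypothetical Python tutorial title that keeps going'},
}
print(format_video(sample))
```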

For this script, the Python 2 output is the same as Python 3's. And while Python is great, let's not leave our Node.js friends behind; let's take a look at a JavaScript version.

Node.js

/*
yt_video_query.js — query for videos on YouTube
*/
require('dotenv').config();
const {google} = require('googleapis');
const QUERY = 'python -snake';
const YOUTUBE = google.youtube({version: 'v3', auth: process.env.API_KEY});

async function listVideos() {
  console.log(`\n** Searching for '${QUERY}' videos...`);
  const vids = await YOUTUBE.search.list({part: 'id,snippet', q: QUERY, type: 'video'});
  vids.data.items.forEach(vid => {
      console.log(`http://youtu.be/${vid.id.videoId}\t${vid.snippet.title.substring(0, 48)}`);
  });
}

listVideos().catch(console.error);

Like its Python sibling, the first part of the script sets things up: it imports the Google APIs client library, creates a constant for the search QUERY, and instantiates the API client, this time with the help of the dotenv package to read API_KEY from .env. After initialization, the async function executes the query and displays the results to the end user. The setup is nearly identical to Python's:

  1. Save yt_video_query.js script from above or the repo
  2. Enable YouTube Data API
  3. Create an API key (if you haven't already)
  4. Save API key to .env as API_KEY=YOUR_API_KEY (like settings.py but without quotes)
  5. Install Google APIs client library for Node.js and dotenv: npm install dotenv googleapis

Python 2, 3, and Node.js versions all produce the same output, though running it on a different day may yield different results because videos are uploaded and taken down daily.

YouTube Data API summary

Showing both Python and Node.js versions is a good exercise for those who have to straddle both worlds and have to use (non-GCP) Google APIs. To learn more about using the Data API, see this guide in the YouTube documentation. Like the Sheets API, usage of the YouTube Data API is "free" up to certain limits. Review the documentation on its quota system to understand how it works. The final example in this post "sends" public data (a sentence) to the GCP Natural Language API to understand the sentiment in the textual body.

GCP (Natural Language API)

Like the Maps APIs, GCP APIs require billing to be enabled on your project. They are "pay-per-use," meaning you pay for what you use. Fortunately, as described earlier, the Natural Language API has a free tier we can play around with here. One of the features of the API is the ability to perform sentiment analysis on a piece of text: the API leverages its pre-trained machine learning (ML) model to infer whether a piece of text is positive or negative.

Unlike the other APIs we explored in this post, most use of GCP APIs requires service account credentials because most of the data accessed belongs to a GCP project or a "robot" user (rather than a human one). However, because we're sending a string literal to the API rather than reading it out of a (GCP) database, an API key suffices for this use case. Now let's take a look at this script.

Using the lower-level, platform client library

'''
nlp_sent_query.py — get sentiment from Natural Language API
'''
from __future__ import print_function
from googleapiclient import discovery
from settings import API_KEY

TEXT = '''Google, headquartered in Mountain View, unveiled the new
Android phone at the Consumer Electronics Show. Sundar Pichai said
in his keynote that users love their new Android phones.'''
NLP = discovery.build('language', 'v1', developerKey=API_KEY)

print('TEXT:', TEXT)
data = {'type': 'PLAIN_TEXT', 'content': TEXT}
sent = NLP.documents().analyzeSentiment(
        body={'document': data}).execute().get('documentSentiment')
print('\nSENTIMENT: score (%.2f), magnitude (%.2f)' % (
        sent['score'], sent['magnitude']))

The imports are nearly identical to the others, so let's jump to TEXT, the constant holding the text we're sending to the API. That's followed by instantiating the Natural Language API client. TEXT is displayed to the end user, the document metadata (data) is constructed to tell the API this is plain-text content, then the API is called and the sentiment (sent) is returned and displayed.

For sentiment analysis, the score is a real number ranging from -1.0 to 1.0: positive values indicate positive sentiment, and vice versa for negative sentiment. The magnitude is a non-negative real number indicating how "strong" the emotion of the sentiment is. Learn more about score and magnitude in the API Basics documentation. Now that you know all that, here are the steps to run this example:

  1. Save nlp_sent_query.py script from above or the repo
  2. Enable Natural Language API
  3. Create an API key (if you haven't already)
  4. Save API key to settings.py as API_KEY='YOUR_API_KEY' (if you haven't already)
  5. Install Google APIs client library for Python: pip install -U pip google-api-python-client # (if you haven't already)

The last three (3) steps are most likely optional if you ran either of the earlier scripts, so the key here is to enable the Natural Language API. Once you're set up, run the script:

$ python nlp_sent_query.py
TEXT: Google, headquartered in Mountain View, unveiled the new Android
phone at the Consumer Electronics Show. Sundar Pichai said in
his keynote that users love their new Android phones.

SENTIMENT: score (0.20), magnitude (0.50)

For this body of text, the score (0.20) is positive but can be considered a little "low," closer to neutral (0.0). The reason is that only one word in the entire passage indicates positive sentiment. (I'll leave it as an exercise for the reader to determine which word it is.) However, the magnitude (0.50) indicates a "stronger" emotion than other words might convey. For example, if you change that one word to "like," you'll likely see the magnitude decrease because it is less "strong" than the original word. Now that you know how this script works, feel free to experiment with it.
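One way to act on these two numbers is to bucket them into rough labels. The thresholds below are my own arbitrary choices for illustration, not anything defined by the API:

```python
def describe_sentiment(score, magnitude):
    '''Map a (score, magnitude) pair to a rough human-readable label.'''
    if score >= 0.25:
        label = 'positive'
    elif score <= -0.25:
        label = 'negative'
    else:
        label = 'neutral-ish'
    strength = 'strong' if magnitude >= 1.0 else 'mild'
    return '%s (%s)' % (label, strength)

# The sample passage's values from the run above.
print(describe_sentiment(0.20, 0.50))  # prints: neutral-ish (mild)
```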

I would not be doing my job if I ended it here. With most GCP APIs, the recommendation is to use the product's own client library, whereas the example above uses the lower-level platform client library. Working at this lower level allows developers to create more consistent applications across multiple Google APIs or API families. However, if you're only building on GCP without any other Google APIs, go with the Cloud client libraries, in this case the Natural Language API client library.

Using the higher-level, product client library

The script is identical to the one above except for swapping client libraries:

'''
nlp_sent_query-gcp.py — get sentiment from Natural Language API
'''
from __future__ import print_function
from google.cloud import language_v1 as language
from settings import API_KEY

TEXT = '''Google, headquartered in Mountain View, unveiled the new
Android phone at the Consumer Electronics Show. Sundar Pichai said
in his keynote that users love their new Android phones.'''
NLP = language.LanguageServiceClient(client_options={'api_key': API_KEY})

data = {'content': TEXT, 'type_': language.Document.Type.PLAIN_TEXT}
print('TEXT:', TEXT)
sent = NLP.analyze_sentiment(request={'document': data}).document_sentiment
print('\nSENTIMENT: score (%.2f), magnitude (%.2f)' % (
        sent.score, sent.magnitude))

Cloud client libraries default to service account credentials, so users have to explicitly provide an API key as an alternative "client option."

No Python 2 support with Cloud client libraries: The newest Python 2-compatible version of the NLP client library (1.3.2) does not have all the features necessary to run this script successfully. Python 2 developers either have to use the lower-level client library version shown earlier or backport the script to an older release of the NLP API that is compatible with the 2.x client library.

The setup steps are the same as above except the first and the last:

  1. Save nlp_sent_query-gcp.py script from above or the repo
  2. Enable Natural Language API
  3. Create an API key (if you haven't already)
  4. Save API key to settings.py as API_KEY='YOUR_API_KEY' (if you haven't already)
  5. Install Natural Language API client library for Python: pip install -U pip google-cloud-language

When everything is set up, running either script in Python 2 or 3 (except the Cloud client library version on 2.x, per the note above) produces identical output, so it's not necessary to repeat it here. Three out of four isn't bad.

Natural Language API summary

Juxtaposing both versions is a valuable exercise for Python developers who have to navigate between GCP and non-GCP Google APIs. Because each product group prefers to recommend its own client libraries, you won't find discussion or code samples like this in Google documentation: GCP documentation focuses on product client libraries, while GWS APIs don't have those, meaning developers must use platform client libraries instead.

Consider this post somewhat of a "PSA" (public service announcement) in this regard, since using the platform client libraries helps ensure code consistency. During my time at Google, I filed a "friction log" identifying this gap in user experience. (@Googlers: check out http://go/nongcpapiux to see it.) Unfortunately, not much progress had been made by the time I left due to (lack of) "resources," so I'm taking matters into my own hands by trying to bridge the chasm for users in a public forum like this blog.

In addition to GCP vs. non-GCP API usage, you definitely won't see any examples of using GCP APIs with the Google APIs client library. It is also quite rare to see any examples of GCP APIs using API keys. In fact, there are only a handful of GCP APIs that accept API keys:

  • Cloud Natural Language API
  • Cloud Speech-to-Text API
  • Cloud Text-to-Speech API
  • Cloud Translation API
  • Cloud Vision API
  • Cloud Endpoints API
  • Cloud Billing Catalog API
  • Cloud Data Loss Prevention API

This list was removed from the GCP documentation in mid-2020 to steer people away from API key usage, especially since service account credentials are the norm. Hopefully, moving forward, API keys will be supported better and by more Google APIs, since they lower the barrier to entry. Ultimately, it is the developer's responsibility to restrict API key usage and protect the keys they've already created.

Summary

API keys are the easiest credential type to learn how to use and implement in applications. However, in the Google universe they are limited to APIs that access public data. This includes all the Google Maps APIs as well as a scattered handful of other Google APIs, as demonstrated in this post. The first post defined API keys and covered how to create & secure them; this second and final post of the series demonstrates sample apps using different Google APIs that access public data and accept API keys as a credential type:

  • Geolocating a (public) location with the Google Maps Geocoding API
  • Displaying the contents of a public Google Sheet spreadsheet
  • Searching public videos using the YouTube Data API
  • Analyzing a text string with the GCP Natural Language API

These are merely four examples showing what's possible with just API keys. Pairing this with your knowledge of APIs that take OAuth client IDs gives you an even greater set of options when building solutions leveraging Google APIs for your organization or your customers, keeping in mind that some APIs accept both credential types, depending on who owns the data accessed.

What's Next?

If you're excited about Generative AI, especially Google's Gemini API, you'll be happy to know that it is also accessible with API keys. Learn how to use it from the Google AI platform in this standalone post dedicated to the Gemini API. If you're not quite ready for Gemini and LLMs but want to learn more about the Natural Language API, check out the post dedicated to that plus the Cloud Translation API.

Two (credentials types) down, one to go. Stay tuned for another series covering the third and final type of credentials you can use with Google APIs: service accounts. Before that, you'll see posts covering other Google APIs. If you found any errors in this post or have any topics you want to see in future posts, please drop a comment below... I try to respond whenever I can.

References

This post covered quite a bit, so there is a good amount of documentation to link you to:

Blog post code samples

Posts covering other Google APIs using API keys

Google APIs general

YouTube Data API

Google Sheets API

GCP Natural Language API

Google Maps Geocoding API

Other similar content by the author



WESLEY CHUN, MSCS, is a Google Developer Expert (GDE) in Google Cloud (GCP) & Google Workspace (GWS), author of Prentice Hall's bestselling "Core Python" series, co-author of "Python Web Development with Django", and has written for Linux Journal & CNET. He runs CyberWeb specializing in GCP & GWS APIs and serverless platforms, Python & App Engine migrations, and Python training & engineering. Wesley was one of the original Yahoo!Mail engineers and spent 13+ years on various Google product teams, speaking on behalf of their APIs, producing sample apps, codelabs, and videos for serverless migration and GWS developers. He holds degrees in Computer Science, Mathematics, and Music from the University of California, is a Fellow of the Python Software Foundation, and loves to travel to meet developers worldwide at conferences, user group events, and universities. Follow he/him @wescpy & his technical blog. Find this content useful? Contact CyberWeb if you may need help or buy him a coffee (or tea)!
