Serverless has become an amazing tool for all kinds of use cases. Data processors, chatbots, APIs, you name it, are now being built on serverless architectures.
Today, I'll walk you through creating a production-ready Express API running on AWS Lambda with a persistent MongoDB data store. Yes, that's a thing, you can build Express apps on AWS Lambda. Amazing, I know! And yes, you can use MongoDB without batting an eye!
It's pretty straightforward when you think about it. Using AWS Lambda is pretty much the same as using a tiny Node.js runtime. It just abstracts away everything except for the code.
Let's jump in.
TL;DR
You can severely hurt my feelings and jump to the section you're interested in, or just keep reading.
- Project setup
- Creating the database on MongoDB Atlas
- Installing dependencies
- Writing code
- Testing
- Deployment
- Load testing
- Observability
- Wrapping up
Project setup
The setup itself will be the bare minimum, but it will still have everything you need to keep adding features for your future production apps. Here's a diagram of the final layout so you can get an overview.
As you can see it's a rather simple API for notes with CRUD logic, but it gets the job done. Enough talk, let's get the project up and running.
1. Install the Serverless Framework
First of all you need to install and configure the Serverless Framework. It's a simple CLI tool to make development and deployment incredibly easy.
```shell
$ npm i -g serverless
```
You've now installed the Serverless Framework globally on your machine, and its commands are available from anywhere in your terminal.
Note: If you’re using Linux, you may need to run the command as sudo.
2. Create an IAM User in your AWS Console
Open up your AWS Console and press the services dropdown in the top left corner. You’ll see a ton of services show up. Go ahead and write IAM in the search box and press on it.
You’ll be redirected to the main IAM page for your account. Proceed to add a new user.
Give your IAM user a name and check the programmatic access checkbox. Proceed to the next step.
Now you can add a set of permissions to the user. Because we're going to let Serverless create and delete various assets on our AWS account, go ahead and check AdministratorAccess.
Proceeding to the next step, you will see the user was created. Now, and only now, will you have access to the user's Access Key ID and Secret Access Key. Make sure to write them down or download the .csv file. Keep them safe and don't ever show them to anybody. I've pixelated them, even though this is a demo, to make sure you understand the severity of keeping them safe.
With that done we can finally move on to entering the keys into the Serverless configuration.
3. Enter IAM keys in the Serverless configuration
Awesome! With the keys saved you can set up Serverless to access your AWS account. Switch back to your terminal and type all of this in one line:
```shell
$ serverless config credentials --provider aws --key xxxxxxxxxxxxxx --secret xxxxxxxxxxxxxx
```
Hit enter! Now your Serverless installation knows what account to connect to when you run any terminal command. Let’s jump in and see it in action.
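If you're curious what that command does, it stores the keys on your machine in the standard AWS credentials file (typically `~/.aws/credentials` on Linux/macOS). Here's a tiny sketch, with obviously fake keys, of the INI format that ends up in that file:

```javascript
// Illustrative only: render an AWS credentials profile in the INI format
// used by ~/.aws/credentials. The key values below are fake placeholders.
function credentialsFile(profile, keyId, secret) {
  return [
    `[${profile}]`,
    `aws_access_key_id=${keyId}`,
    `aws_secret_access_key=${secret}`
  ].join('\n')
}

console.log(credentialsFile('default', 'AKIAFAKEFAKEFAKE', 'abc123/fakeSecret'))
```

Any AWS tooling on your machine (the AWS CLI included) reads the same file, which is why you only have to configure the keys once.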
4. Create a service
Create a new directory to house your Serverless application services. Fire up a terminal in there. Now you’re ready to create a new service.
What’s a service you ask? View it like a project. But not really. It's where you define AWS Lambda Functions, the events that trigger them and any AWS infrastructure resources they require, all in a file called serverless.yml.
Back in your terminal type:
```shell
$ serverless create --template aws-nodejs --path sls-express-mongodb
```
The create command will create a new service. Shocker! But here's the fun part. We need to pick a runtime for the function, which is what the template is for. Passing in `aws-nodejs` sets the runtime to Node.js. Just what we want. The path will create a folder for the service, in this example named `sls-express-mongodb`.
5. Explore the service directory with a code editor
Open up the sls-express-mongodb folder with your favorite code editor. There should be three files in there, but for now we'll only focus on the serverless.yml. It contains all the configuration settings for this service. Here you specify both general configuration settings and per function settings. Your serverless.yml will be full of boilerplate code and comments. Feel free to delete it all and paste this in.
```yml
# serverless.yml

service: sls-express-mongodb

custom:
  secrets: ${file(secrets.json)}

provider:
  name: aws
  runtime: nodejs8.10
  stage: ${self:custom.secrets.NODE_ENV}
  region: eu-central-1
  environment:
    NODE_ENV: ${self:custom.secrets.NODE_ENV}
    DB: ${self:custom.secrets.DB}

functions:
  app:
    handler: server.run
    events:
      - http:
          path: /
          method: ANY
          cors: true
      - http:
          path: /{proxy+}
          method: ANY
          cors: true

plugins:
  - serverless-offline
```
The `functions` property lists all the functions in the service. We only need one function, because our whole Express app will be packaged into this single function. The `handler` references the function to run. Our final app will have a `server.js` file with a `run` function. Simple enough.

Take a look at the `events` now. They're acting as a proxy, meaning every request that hits any HTTP endpoint will be proxied into the Express router on the inside. Pretty cool.

We also have a `custom` section at the top. This acts as a way to safely load environment variables into our app. They're later referenced using `${self:custom.secrets.<environment_var>}`, where the actual values are kept in a simple file called `secrets.json`.

Lastly, we also have the `serverless-offline` plugin for offline testing.
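To make the `${...}` syntax less magical, here's a rough sketch of how that kind of variable resolution can work. This is a toy written in plain Node.js, not the framework's actual implementation, and it only handles the `self:` lookups we use above:

```javascript
// Illustrative only: a toy resolver for ${self:custom.secrets.X}-style
// references. The real Serverless Framework supports many more sources
// (environment variables, CloudFormation outputs, SSM parameters, etc.).
function resolveVars(template, config) {
  return template.replace(/\$\{self:([^}]+)\}/g, (_, path) => {
    // Walk the config object along the dotted path, e.g. custom.secrets.DB
    return path.split('.').reduce((obj, key) => obj[key], config)
  })
}

const config = {
  custom: { secrets: { NODE_ENV: 'dev', DB: 'mongodb://example' } }
}

console.log(resolveVars('${self:custom.secrets.NODE_ENV}', config)) // dev
```

The framework does this resolution at deploy time, so the generated CloudFormation template already contains the concrete values.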
Creating the database on MongoDB Atlas
Ready for some more configuration? Yeah, nobody likes this part. But bear with me. Jump over to MongoDB Atlas and sign up.
It’s free and no credit card is required. It’ll be the sandbox we need for playing around. Once you have your account set up, open up your account page and add a new organization.
Pick a name you like, any will do. Press next and go ahead and create the organization.
Nice. That’ll take you to the organization page. Press on the new project button.
This will open up a page to name your project. Just type in whichever name you like and hit next.
MongoDB cares about permissions and security so Atlas will show you another manage permissions page. We can just skip that for now, and create the project.
Phew, there we have it. Finally, we can create the actual cluster! Press on the huge green “Build a new cluster” button. This will open up a huge cluster creation window. You can leave everything default, just make sure to pick the M0 instance size, and disable backups. As you can see the price for this cluster will be FREE. Quite nice. That’s it, hit “Create Cluster”.
After all that, add an admin user for the cluster and give it a really strong password.
Now, you just need to enable access from anywhere. Go to the IP whitelist.
Your cluster will take a few minutes to deploy. While that’s underway, let’s start installing some dependencies.
Installing dependencies
This has to be my favorite part of any project... said nobody ever. But hey, we need to make sure this step is done properly so we can have smooth sailing down the road.
```shell
$ npm init -y
$ npm i --save express mongoose body-parser helmet serverless-http
$ npm i --save-dev serverless-offline
```
First we're installing the production dependencies, some of which you surely know already: Express, Mongoose, and BodyParser. Helmet is a tiny middleware for securing your endpoints with appropriate HTTP headers. The real power, however, lies in the Serverless HTTP module. It creates the proxy into the Express application and packages it into a single lambda function.
Lastly, we need Serverless Offline for testing our app locally. How about we finally write some code now?
Writing code
About time! Let's jump in without any further ado.
1. Creating the server.js
First of all, we need to rename our `handler.js` file to `server.js`. Here we'll put only the logic for running our lambda function with the `serverless-http` module.
```js
// server.js
const sls = require('serverless-http')
const app = require('./lib/app')

module.exports.run = sls(app)
```
As you can see, we're requiring `serverless-http` and exporting a function named `run`. It holds the `serverless-http` instance with our app passed in as a parameter. That's everything we need to package our Express app into a lambda function! Amazingly simple.
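Conceptually, the wrapper translates an API Gateway event into a request the app understands, and the app's response back into the shape Lambda expects. Here's a heavily simplified sketch of that idea. To be clear, this is not serverless-http's real implementation, and the fake `handle()` method is something I invented for the illustration:

```javascript
// Toy illustration of the proxy idea: an API Gateway event comes in,
// gets translated to a request-like object, and the app's response is
// translated back into the { statusCode, body } shape Lambda expects.
function toySls(app) {
  return async (event) => {
    const req = { method: event.httpMethod, url: event.path }
    const res = await app.handle(req) // hypothetical handle() method
    return { statusCode: res.status, body: res.body }
  }
}

// A fake "app" standing in for Express, just for the demo
const fakeApp = {
  handle: async (req) => ({ status: 200, body: `${req.method} ${req.url}` })
}

const run = toySls(fakeApp)
run({ httpMethod: 'GET', path: '/api/notes' })
  .then((r) => console.log(r)) // { statusCode: 200, body: 'GET /api/notes' }
```

The real module handles headers, bodies, binary payloads, and much more, but the overall shape (event in, response object out) is the same.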
2. Adding secrets
Next, create the `secrets.json` file for holding the environment variables.
```json
// secrets.json
{
  "NODE_ENV": "dev",
  "DB": "mongodb://<user>:<password>@<clustername>.mongodb.net:27017,<clustername>.mongodb.net:27017,<clustername>.mongodb.net:27017/<database>?ssl=true&replicaSet=Cluster0-shard-0&authSource=admin&retryWrites=true"
}
```
To get the connection string for your Atlas cluster, navigate to the cluster dashboard and press the gray connect button. Follow the instructions and make sure the URL looks somewhat like the string above.
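If you prefer assembling the string programmatically instead of hand-editing the placeholders, a small helper like the one below works. The `atlasUri` name and all the input values are placeholders I made up for the sketch; copy the real hosts, replica set name, and credentials from the Atlas connect dialog:

```javascript
// Hypothetical helper for assembling an Atlas-style connection string.
// Every input here is a placeholder; use the values from the Atlas UI.
function atlasUri({ user, password, hosts, database, replicaSet }) {
  const hostList = hosts.map((h) => `${h}:27017`).join(',')
  const options = `ssl=true&replicaSet=${replicaSet}&authSource=admin&retryWrites=true`
  // encodeURIComponent keeps special characters in the password from
  // breaking the URI
  return `mongodb://${user}:${encodeURIComponent(password)}@${hostList}/${database}?${options}`
}

console.log(atlasUri({
  user: 'admin',
  password: 'p@ss',
  hosts: ['cluster0-shard-00-00.mongodb.net'],
  database: 'notes',
  replicaSet: 'Cluster0-shard-0'
}))
```

Escaping the password matters more than it looks: characters like `@` or `/` in a raw password will silently corrupt the URI.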
3. Creating the Express app
Now we can start writing our actual Express app.
Create a new folder in the root directory called `lib`. Here you'll want to create an `app.js` file and a `db.js` file to start with.
```js
// ./lib/db.js
const mongoose = require('mongoose')

mongoose.connect(process.env.DB)
```
Having `mongoose` installed simplifies connecting to the database significantly. This is all we need.

Note: `process.env.DB` was set in `secrets.json` and referenced in the `serverless.yml`.
Once you've added `db.js`, switch over to the `app.js` file and paste in the snippet below.
```js
// ./lib/app.js
const express = require('express')
const app = express()
const bodyParser = require('body-parser')
app.use(bodyParser.json())
app.use(bodyParser.urlencoded({ extended: true }))
const helmet = require('helmet')
app.use(helmet())

require('./db')

const routes = require('./routes')
app.use('/api', routes)

module.exports = app
```
If you've ever written any code at all with Express, this will feel familiar. We're requiring all the modules, using middlewares, requiring the database connection we just created above, and binding routes to the `/api` path. But we don't have any routes yet. Well, let's get going then!
4. Adding routes
While in the `lib` folder, create a new folder named `routes`. It will be the base for all routes in the app. Create an `index.js` file in the `routes` folder and paste this snippet in.
```js
// ./lib/routes/index.js
const express = require('express')
const router = express.Router()
const notes = require('./notes/notes.controller')

router.use('/notes', notes)
// Add more routes here if you want!

module.exports = router
```
Now we can just add any additional routes to this file and won't have to touch anything else. That's just so much easier.
5. Writing the CRUD logic
We've reached the fun part. As you can see in the `index.js` file above, we want to require a `notes.controller.js` file where the CRUD operations are defined. Well, let's create it!

However, not to get ahead of ourselves, we first need a model for our Notes API. Create a `notes` folder inside the `routes` folder, and in it create two more files named `note.js` and `notes.controller.js`. The `note.js` will hold our model definition for a note. Like this.
```js
// ./lib/routes/notes/note.js
const mongoose = require('mongoose')

const NoteSchema = new mongoose.Schema({
  title: String,
  description: String
})

module.exports = mongoose.model('Note', NoteSchema)
```
It's more than enough to only have a title and a description for this example. Moving on, we're ready to add the CRUD. Open up `notes.controller.js` and paste this in.
```js
// ./lib/routes/notes/notes.controller.js
const express = require('express')
const notesController = express.Router()
const Note = require('./note')

notesController
  .post('/', async (req, res, next) => {
    const note = await Note.create(req.body)
    res.status(200).send(note)
  })

notesController
  .put('/:id', async (req, res, next) => {
    const note = await Note.findByIdAndUpdate(
      req.params.id,
      { $set: req.body },
      { upsert: true, new: true }
    )
    res.status(200).send(note)
  })

notesController
  .get('/', async (req, res, next) => {
    const notes = await Note.find()
    res.status(200).send(notes)
  })

notesController
  .get('/:id', async (req, res, next) => {
    const note = await Note.findById(req.params.id)
    res.status(200).send(note)
  })

notesController
  .delete('/:id', async (req, res, next) => {
    const note = await Note.deleteOne({ _id: req.params.id })
    res.status(200).send(note)
  })

module.exports = notesController
```
Make sure not to forget requiring the Note model at the top of the file. Apart from that, everything is rather straightforward. We're using the usual Mongoose model methods for the CRUD operations, and of course, the syntax is lovely with `async/await`. You should also think about adding try-catch blocks around the `await` operators, but this simple example will suffice as is.
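If you do add error handling later, a nicer alternative to repeating try-catch in every route is a small wrapper that forwards rejected promises to Express's `next`, so a single error-handling middleware catches everything. A sketch, where the `wrapAsync` name is mine, not part of the tutorial's code:

```javascript
// Wrap an async route handler so a rejected promise is passed to next()
// and lands in Express's error-handling middleware instead of becoming
// an unhandled rejection.
const wrapAsync = (fn) => (req, res, next) => {
  Promise.resolve(fn(req, res, next)).catch(next)
}

// Demo with fake req/res/next objects, no Express required
const failing = wrapAsync(async () => {
  throw new Error('boom')
})

failing({}, {}, (err) => console.log(err.message)) // boom
```

In the controller, you'd then write something like `notesController.get('/', wrapAsync(async (req, res) => { ... }))` and define one `app.use((err, req, res, next) => ...)` error middleware in `app.js`.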
That's it regarding the code. Ready for some testing!
Testing
I'm rather used to testing locally before deploying my apps. That's why I'll quickly run you through how it's done with `serverless-offline`. Because you've already installed it and added it to the `plugins` section in the `serverless.yml`, all you need to do is run one command to start a local emulation of API Gateway and AWS Lambda on your machine.

```shell
$ sls offline start --skipCacheInvalidation
```

Note: In the root directory of your project, run `sls` and you should see a list of commands. If you configured it correctly, `sls offline` and `sls offline start` should be available.

To make this command easier to use, feel free to add it as an npm script in the `package.json`.
```json
// package.json
{
  "name": "a-crash-course-on-serverless-apis-with-express-and-mongodb",
  "version": "1.0.0",
  "description": "",
  "main": "app.js",
  "scripts": {
    "offline": "sls offline start --skipCacheInvalidation" // right here!
  },
  "keywords": [],
  "author": "Adnan Rahić",
  "license": "ISC",
  "dependencies": {
    "body-parser": "^1.18.3",
    "express": "^4.16.3",
    "helmet": "^3.12.1",
    "mongoose": "^5.1.7",
    "serverless-http": "^1.5.5"
  },
  "devDependencies": {
    "serverless-offline": "^3.20.2"
  }
}
```
Once added, you can run `npm run offline` instead. A bit shorter and a lot easier to remember. Jump back to your terminal and run it.
```shell
$ npm run offline
```
You'll see the terminal tell you a local server has started on port 3000. Let's test it out!
To test my endpoints I usually use either Insomnia or Postman, but feel free to use whichever tool you like. First, start out by hitting the POST endpoint to add a note.
Awesome! It works just as expected. Go ahead and try the GET request next.
It works like a dream. Now, go ahead and try out all the other endpoints as well. Make sure they all work, and then, let's get ready to deploy this to AWS.
Deployment
Would you believe me if I told you all it takes to deploy this API is to run a single command? Well, it does.
```shell
$ sls deploy
```
Back in the terminal, run the command above and be patient. You'll see a few endpoints show up in the terminal. These are the endpoints of your API.
In the same way as I showed you above, test these deployed endpoints once again, making sure they work.
Moving on from this, you may notice you've only deployed your API to the `dev` stage. That won't cut it. We need to change the `NODE_ENV` and deploy to production as well. Open up the `secrets.json` file and change the second line to:

```json
"NODE_ENV": "production",
```

This will propagate and set both the environment of your Express API and the deployment `stage` to production. Before deploying the production API, let's delete the `node_modules` folder and re-install all modules with the `--production` flag.

```shell
$ rm -rf ./node_modules && npm i --production
```

This makes sure to install only the dependencies listed under `dependencies` in the `package.json`, excluding those under `devDependencies`.
Before you deploy, you'll just have to comment out the plugins section in the `serverless.yml`.
```yml
# serverless.yml

service: sls-express-mongodb

custom:
  secrets: ${file(secrets.json)}

provider:
  name: aws
  runtime: nodejs8.10
  stage: ${self:custom.secrets.NODE_ENV}
  region: eu-central-1
  environment:
    NODE_ENV: ${self:custom.secrets.NODE_ENV}
    DB: ${self:custom.secrets.DB}

functions:
  app:
    handler: server.run
    events:
      - http:
          path: /
          method: ANY
          cors: true
      - http:
          path: /{proxy+}
          method: ANY
          cors: true

# comment this out
# plugins:
#   - serverless-offline
```
Go ahead and deploy this with the same command as above.
```shell
$ sls deploy
```
Load testing
This wouldn't be a proper tutorial on setting up a production API if we didn't run some load tests. I tend to use a tiny npm module called loadtest for this, and it can be installed with a simple command.

```shell
$ npm i -g loadtest
```

Note: Linux users will need to prefix the command with `sudo`.
Let's start slow. We'll hit the `/api/notes` path with a GET request, 100 times with 10 concurrent users.

```shell
$ loadtest -n 100 -c 10 https://<id>.execute-api.eu-central-1.amazonaws.com/production/api/notes
```
It took roughly 5 seconds to serve all those requests, and it went flawlessly. You can rest assured that whatever scale your API ends up at, Lambda will auto-scale to the size you need and serve your users without any issues.
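A quick sanity check on those numbers (assuming the run really took about 5 seconds, as mine did): throughput is just requests over time, and Little's law lets you estimate the mean per-request latency from the concurrency level:

```javascript
// Back-of-the-envelope math for the load test above.
// 100 requests, 10 concurrent users, ~5 seconds total (assumed).
const requests = 100
const concurrency = 10
const totalSeconds = 5

const throughput = requests / totalSeconds // requests per second
console.log(throughput) // 20

// Little's law: mean latency ≈ concurrency / throughput
const meanLatency = concurrency / throughput // seconds per request
console.log(meanLatency) // 0.5
```

So each request took roughly half a second end to end, cold starts included. Your numbers will differ depending on region, cluster tier, and how warm the function is.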
How to gain insight into your system?
The #1 problem with all serverless applications is their distributed nature. Plain and simple, it's incredibly hard to have an overview of all the things going on. Not to mention how hard it is to debug when something goes wrong.
To calm my fears I use Tracetest. It gives you end-to-end testing & debugging powered by OpenTelemetry.
Thankfully, there's detailed documentation, which makes the onboarding process a breeze. Go ahead and follow the Quick Start guide. Don't forget to come back here though. 😄
Wrapping up
This has been an adventurous journey! You've created a production-ready serverless API. Using serverless architectures can be scary, mainly because of the services you're not used to, such as Lambda and API Gateway.
The approach I showed above is the way I usually do it. Using Node.js and the frameworks, modules, and middlewares you're already used to makes the transition into serverless much easier.
Luckily we have development tools like the Serverless Framework and observability tools such as Tracetest, which make it incredibly easy to be a developer.
If you missed any of the steps above, here's the repository with all the code.
A crash course on serverless APIs with Express and Mongodb
Quick and easy tutorial on how to create a serverless API and deploy it to AWS Lambda. Persistent data is stored in MongoDB on Atlas clusters. Check out the whole tutorial here.
If you want to read some of my previous serverless musings head over to my profile or join my newsletter!
Or, take a look at a few of my articles right away:
- Solving invisible scaling issues with Serverless and MongoDB
- How to deploy a Node.js application to AWS Lambda using Serverless
- Getting started with AWS Lambda and Node.js
- A crash course on securing Serverless APIs with JSON web tokens
- Migrating your Node.js REST API to Serverless
- Building a Serverless REST API with Node.js and MongoDB
- A crash course on Serverless with Node.js
Hope you guys and girls enjoyed reading this as much as I enjoyed writing it. If you liked it, slap that tiny heart so more people here on dev.to will see this tutorial. Until next time, be curious and have fun.
Disclaimer: Tracetest is sponsoring this blogpost. Use observability to reduce time and effort in test creation + troubleshooting by 80%.
Top comments (18)
Thanks, that was a good documentation of the steps needed, with explanation.
So when we deploy this in AWS, does a new MongoDB connection get created for every request, or should we implement the caching pattern that you have documented in
dev.to/adnanrahic/solving-invisibl...
It's already cached. Check this comment. 😄
I'm glad you found this article helpful!
What I mean is, in your other couple of courses you have used the below code in db.js to re-use existing connection, but not in this course, so I am wondering how is it cached & re-used for subsequent requests.
Code I am referring to, from other courses:
```js
const mongoose = require('mongoose');

let isConnected;

module.exports = connectToDatabase = () => {
  if (isConnected) {
    console.log('=> using existing database connection');
    return Promise.resolve();
  }
  console.log('=> using new database connection');
  return mongoose.connect(process.env.DB) // keep the connection string in an env var
    .then(db => {
      isConnected = db.connections[0].readyState;
    });
};
```
You see the `isConnected` variable? That's what's checking if the mongoose connection is established. If it's true, then mongoose won't connect again. 😄
The logic with the isConnected variable that I mentioned in my previous reply is from your other posts, but in this post you seem to have not used this logic in the db.js, so that's why I was confused.
So are you saying this isConnected logic is needed so that every request doesn't create a new connection?
I understand that this is a 2 years old post & you may not remember what is in the code of this post versus the other posts of your.
So, request you to please check & confirm if the db.js code mentioned in this post also uses cached connection & if yes how?
For your quick reference below is the db.js code in this post:
```js
// ./lib/db.js
const mongoose = require('mongoose')

mongoose.connect(process.env.DB)
```
I saw the comment, but was a bit confused when I saw your other post on scaling issues with Serverless & MongoDB, where you had taken care of re-using the connection in the code, while in this article there is no caching-related code. So how is it cached without any explicit code? Does mongoose.connect take care of not creating a new connection when there is an open connection?
Thanks for the info. I was able to get this working, but I noticed that each call to the API creates a new connection to Mongo which is not closed unless I explicitly close the mongoose connection at the end of each function call. Is this the correct approach to handling Mongo connections in serverless?
It's not creating a new connection for every call. What's happening is that every time a lambda function is created the connection is pooled and kept alive for that instance. Every subsequent invocation that runs in that instance of the lambda function will re-use the already opened connection.
This is the preferred behavior of accessing databases with serverless functions. Cache the connection so it can be re-used as long as the function is alive. Once the function is destroyed, the connection will be closed.
Thanks Adnan. I had my offline serverless started without `--skipCacheInvalidation`, which was causing a new connection to be created for every call. Adding `--skipCacheInvalidation` resolved the issue.
But with `--skipCacheInvalidation` I need to manually restart the offline app every time to load my changes, which is difficult for development. The docs say that nodemon should be combined with it, but I'm not sure how to do that. I'd appreciate it if you could share how that is set up for development debugging.
Getting `note is not defined` at `notesController.post`, and having a hard time figuring out why that would be undefined.
That's strange. Is it undefined because of bad request parameters or is Mongoose not posting to the database at all?
This always happens, I figured it out right after I sent that to you. It was a total brain malfunction on my part. Since I adapted your tutorial for an api using boards instead of notes I did a find a replace changing all of note to board but I didn't take into account the uppercase. I haven't had caffeine yet today, brain not working yet!
Anyways, I really enjoyed your tutorial! I don't find many I like but yours killed it. Thanks!
Sweet! Glad you liked it.
Thanks so much for the write-up. It is very informative, however I have been unable to get the code to function. What would be a good forum to ask troubleshooting questions?
Feel free to write here about the issues you're facing. Another thing you can do is to check the code in the repo and see what you're doing wrong.
I have used serverless and express. It works perfectly most of the time, but sometimes I get a 502 bad gateway (internal server error). Kindly help me fix this.
How much does this implementation cost monthly? I'm curious about starting on AWS because the pricing is sometimes a little confusing (I'm using Google Cloud currently).
It's rather cheap for hosting apps of this type. Here are a couple of calculators you can use to estimate the price of both Lambda and API Gateway. Have in mind you'll be paying for MongoDB Atlas separately. Here's their pricing calculator.
If you want to see how big of a difference having a serverless architecture is, read this Twitter thread by Troy Hunt.