There are a lot of articles showing how to use serverless functions for a variety of purposes. A lot of them cover how to get started, and they are very useful. But what do you do when you want to organize them a bit more as you do for your Node.js Express APIs?
There's a lot to talk about on this topic, but in this post, I want to focus specifically on one way you can organize your code. Add a comment to let me know what other areas you are interested in, and I'll consider covering those in the future.
Here are some getting started resources that I recommend:
- Your first Azure Function with the VS Code Functions Extension
- Azure Functions Overview
- Developer Guide
- Sample Code in a GitHub Repo
- JavaScript SDK for Cosmos DB
Why Should You Structure Your Code?
You can put all of your function logic in a single file. But do you want to do that? What about shared logic? Testing? Debugging? Readability? This is where having a pattern or structure can help. There are many ways to do this, and beyond the aspects I just mentioned, consistency is the primary additional goal I aim for.
Here is a pretty standard representation of a function app:

```
FunctionApp
 | - host.json
 | - myfirstfunction
 | | - function.json
 | | - index.js
 | | - ...
 | - mysecondfunction
 | | - function.json
 | | - index.js
 | | - ...
 | - sharedCode
```
Here is what my structure looks like, just for the heroes API.
Your Entry Point
The entry point to your function is in a file called `index.js`, in a folder with the same name as your function.
The function itself is pretty self-explanatory. When this function is called, the context is passed to it. The context has access to the request and response objects, which is super convenient. Then I call the asynchronous operation to get my data and set the response.
```javascript
// heroes-get/index.js
const { getHeroes } = require('../shared/hero.service');

module.exports = async function (context) {
  context.log(
    'getHeroes: JavaScript HTTP trigger function processed a request.'
  );
  await getHeroes(context);
};
```
An alternative is to pass `context.res` and/or `context.req` into the service. How you choose is up to you. I prefer to pass `req` and `res` in, as it's more familiar to me. But passing `context` also allows access to other features, such as `context.log`. There is no right or wrong here; choose your adventure and be consistent.
Data Access Service
When you create your first Azure Function, the "hello world" function usually returns a static string message. In most APIs, you're going to want to talk to a database or another web service to get or manipulate data before returning a response.
In my case, I am getting a list of heroes. So I defer most of my data access service to a Node.js module I named `hero.service.js`. Why do this? Simply put, organizing my code (in this case, the data access service) keeps it DRY (don't repeat yourself), isolates responsibility, and makes it easier to scale, extend, debug, and test.
The `hero.service.js` module begins by getting a reference to the container (the storage unit that holds my data for my database). Why abstract that? Good question ... I abstract it into a shared module so I can reuse that logic. I'll need to get containers of all types, and getting a container requires accessing the database with some database-specific connectivity APIs. We'll look closer at that in a moment.
The `getHeroes` service accepts the context and uses destructuring to pull the response object out into a variable `res`. Then it tries to get the heroes; when successful, it adds them to the response, and when it fails, it responds with an error.
```javascript
// shared/hero.service.js
const { heroes: container } = require('./index').containers;

async function getHeroes(context) {
  let { req, res } = context;
  try {
    const { result: heroes } = await container.items.readAll().toArray();
    res.status(200).json(heroes);
  } catch (error) {
    res.status(500).send(error);
  }
}

module.exports = { getHeroes };
```
Shared Database Module
The data access service module `hero.service.js` imports from a shared database module. This module is where the magic happens for connecting to our database. In this case, I am using Azure's Cosmos DB via its Node.js SDK on npm.
Notice that the code reads in the secrets via the Node.js environment variables. Then it merely exports the containers from the appropriate database. I can use different environment variables without requiring the code to change.
```javascript
// shared/index.js
const cosmos = require('@azure/cosmos');

const endpoint = process.env.CORE_API_URL;
const masterKey = process.env.CORE_API_KEY;
const databaseDefName = 'vikings-db';
const heroContainerName = 'heroes';
const villainContainerName = 'villains';

const { CosmosClient } = cosmos;
const client = new CosmosClient({ endpoint, auth: { masterKey } });

const containers = {
  heroes: client.database(databaseDefName).container(heroContainerName),
  villains: client.database(databaseDefName).container(villainContainerName)
};

module.exports = { containers };
```
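During local development, those environment variables typically come from a `local.settings.json` file at the root of the function app. The values below are placeholders for illustration; this file holds secrets and should stay out of source control:

```json
{
  "IsEncrypted": false,
  "Values": {
    "FUNCTIONS_WORKER_RUNTIME": "node",
    "CORE_API_URL": "https://your-cosmos-account.documents.azure.com:443/",
    "CORE_API_KEY": "<your-cosmos-master-key>"
  }
}
```

In Azure, the same names become application settings on the function app, which is what lets the code stay unchanged across environments.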
What's Your Route?
I didn't want my API to be `/api/heroes-get`; I prefer `/api/heroes` when executing the `GET` action, so I changed that. My function is in the path `/heroes-get/index.js`, and inside that same folder there is a `function.json` file. This file is where you configure the function's behavior. The key one I wanted to change was the route alias. Notice I changed this by setting `"route": "heroes"` in the code block below.

Now my endpoint is `api/heroes`.
```json
// function.json
{
  "disabled": false,
  "bindings": [
    {
      "authLevel": "anonymous",
      "type": "httpTrigger",
      "direction": "in",
      "name": "req",
      "methods": ["get"],
      "route": "heroes"
    },
    {
      "type": "http",
      "direction": "out",
      "name": "res"
    }
  ]
}
```
What's the Point?
Organizing your code and isolating logic only makes your life easier if it has some tangible positive effect, so let's explore that. When writing your next function for updating heroes, the function could look like the following code.
```javascript
const { putHero } = require('../shared/hero.service');

module.exports = async function (context) {
  context.log('putHero: JavaScript HTTP trigger function processed a request.');
  await putHero(context);
};
```
Do you notice that it looks very similar to the function for getting heroes? There is a pattern forming, and that's a good thing. The big difference here is that the code calls `putHero` in the `hero.service.js` module. Let's take a closer look at that.

The logic for updating the heroes is isolated in `hero.service.js`, alongside the logic for getting the heroes; that isolation is one of the module's main jobs.
Thinking forward, the logic for delete, insert, and any other operations also could go in this module and be exported for use in other functions. This makes it relatively simple to extend this structure to other actions and models.
```javascript
// shared/hero.service.js
const { heroes: container } = require('./index').containers;

async function getHeroes(context) {
  // ...
}

async function putHero(context) {
  const { req, res } = context;
  const hero = {
    id: req.params.id,
    name: req.body.name,
    description: req.body.description
  };
  try {
    const { body } = await container.items.upsert(hero);
    res.status(200).json(body);
    context.log('Hero updated successfully!');
  } catch (error) {
    res.status(500).send(error);
  }
}

module.exports = { getHeroes, putHero };
```
More Serverless
Share your interests, and as I write more posts on Serverless I'll keep them in mind! In the meantime, here are those resources again, in case you want some getting-started materials:
- Your first Azure Function with the VS Code Functions Extension
- Azure Functions Overview
- Developer Guide
- Sample Code in a GitHub Repo
- JavaScript SDK for Cosmos DB
Credit and Thanks
Thanks to Marie Hoeger for reviewing the content of this post and taking my feedback. You should definitely follow Marie on Twitter!
Top comments (15)
Yes, please post more on this topic. I'm just learning to create my first Node APIs using Express. Do you think I should try using Express with Azure Functions, or just ditch Express and go with a bare-bones approach? It could end up being a fairly big API.
My 2 cents on this: before coding, make the definition document with Swagger Editor (now OpenAPI). If you don't know it, learn it first, as it will make your API more portable. Then code it on Express directly; once it's up and running, you can stage a migration to serverless, endpoint by endpoint.

PS: Swagger Editor can create servers and clients from a definition file automatically.
Node and Express are a fantastic and proven combination for APIs. And they can do so much more. Serverless is a choice that helps you scale up and down without the need for much server config.

I would still lean on both, as they are both excellent options.

If you need help designing an API with serverless, I know the Azure Functions team is always very helpful with this, as they help a lot of companies be successful.
+1 to Jaun's suggestion about starting with the definition document!!
To your question about starting with express and migrating vs. starting bare bones - I think it depends on your needs. Express has extensive middleware that can be very useful. Azure Functions has built-in functionality like proxies, easy integration to other services with triggers and bindings, and immediate deploy to prod. To echo John, both are great options.
As an aside, although this isn't an official package, you may be interested in checking out azure-functions-express on npm too!
Thanks everyone. I saw azure-functions-express but wasn't sure whether to go down that route, or whether that was trying to crowbar something inappropriate into Functions.
I need to try Azure, so thanks for the article.
I use Firebase quite a bit, but I'm not happy with some of the choices they made, like making Node v6 the default... Like c'mon, v8 is the newest version you support, and I have explicitly put that in the config file, so my clients have to come back for an upgrade later (maybe it has auto-upgrade ^ support, but I'm not going to try it with a piece of software that so consistently silently fails)?

That and the way they handle promises. That's a complicated one that would require a long, long tirade about what not to do if you want to lint with your deploy tool and "optimize" your platform without creating unclear and seemingly random artifacts in production.

If Azure solves either of these problems, it might have a new fanboy.
Hope you liked it!
Azure Functions offers Node 6 and Node 8/10, depending on which version of Functions you use. To make it simple: if you create one today, you'll likely get 8 or 10. Here is a doc that explains all of this: docs.microsoft.com/en-us/azure/azu...
I currently have several clients on large MSSQL databases. I've been considering going this route, but I've also got a bunch of WebAPI servers written in C#. The idea of moving each API over, one by one, is appealing, but I'm not certain it's a good idea. Without changing from MSSQL, what would you recommend?
First, I have to apologize because I'm not too familiar with working with MS SQL databases!
Could you connect to your databases with a SqlConnection within your serverless function though? This documentation refers to some things to do to avoid connection limitations from SqlClient connections.
Another thing to keep in mind is that sometimes serverless functions scale too much for a database and hammer a non-scaling database with requests. To address those concerns, you can configure HTTP behavior and impose limits on function app scale out. Even if you constrain scale, you'll still receive the benefits of serverless :)
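For example, the HTTP concurrency knobs live in `host.json`; the numbers below are illustrative, not recommendations:

```json
{
  "version": "2.0",
  "extensions": {
    "http": {
      "maxOutstandingRequests": 200,
      "maxConcurrentRequests": 100
    }
  }
}
```

Requests beyond those thresholds get queued or rejected rather than being fanned out to the database all at once.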
Thanks for sharing, I find this structure easy to understand and reminds me a lot about Angular.
Thanks, I'm glad it's helpful. I'm thinking of posting more content on this topic.
Thanks for sharing!
Hi, no reason. `const` is my preference when no changes exist.