Over the past few years, more and more companies have adopted microservices. Some are now happy with their choice, while others have unfortunately run into various problems. In this article, we look at the infrastructure cost of a microservices architecture and try to provide a solution.
A big issue with a microservices architecture is the cost of infrastructure. A microservice is a small application that typically runs inside a container, and a microservices architecture means managing tens or hundreds of containers. Obviously, this comes at a high cost: you pay for the machines running the containers, or for the cloud services hosting them.
Now let's look at AWS as the cloud platform for our microservices. On AWS there are two main ways to run containers: on EC2 machines or on Fargate.
With EC2, we pay AWS a fixed price for each instance, whether it is busy or idle. Since a microservices architecture can consist of hundreds of microservices, the total cost can be very high.
With Fargate, the containers are serverless: when we are not using them, they can be shut down, and we only pay for the containers while they are running.
In general, from a cost perspective, serverless resources make sense for low usage, while always-on resources make sense for high usage.
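To make this concrete, here is a rough back-of-the-envelope comparison in TypeScript. The prices are illustrative us-east-1 list prices at the time of writing (they change, so check the AWS pricing pages before relying on them), and the workload parameters are assumptions:

// Illustrative AWS list prices (us-east-1); verify current rates
const LAMBDA_PER_REQUEST = 0.20 / 1_000_000 // $ per request
const LAMBDA_PER_GB_SECOND = 0.0000166667   // $ per GB-second
const FARGATE_PER_VCPU_HOUR = 0.04048       // $ per vCPU-hour
const FARGATE_PER_GB_HOUR = 0.004445        // $ per GB-hour

// Assumed workload: 100 ms per request at 512 MB on Lambda,
// versus an always-on 0.25 vCPU / 0.5 GB Fargate task
const lambdaCost = (requestsPerMonth: number) =>
  requestsPerMonth * LAMBDA_PER_REQUEST +
  requestsPerMonth * 0.1 * 0.5 * LAMBDA_PER_GB_SECOND

const fargateCost =
  730 * (0.25 * FARGATE_PER_VCPU_HOUR + 0.5 * FARGATE_PER_GB_HOUR)

console.log(lambdaCost(1_000_000).toFixed(2)) // ~1.03
console.log(fargateCost.toFixed(2))           // ~9.01

With these assumptions, the always-on Fargate task costs about as much as roughly nine million Lambda requests per month, which is why low-traffic services tend to be cheaper as functions.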
Within a microservices architecture, some services are called with high frequency and others with low frequency. This means that some containers, unlike others, will be used very little, which is a waste. A recent idea is to build a microservices architecture using serverless functions, called Lambda on AWS: microservices become a set of functions that only run when called. This model is often called FaaS, or Functions as a Service, and for low-traffic services it costs less than Fargate.
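To give an idea of what such a function looks like, here is a minimal standalone Lambda handler in TypeScript. This is a sketch, assuming the @types/aws-lambda type definitions are installed:

import type { APIGatewayProxyHandler } from 'aws-lambda'

// A whole "microservice" reduced to a single function:
// it only runs (and is only billed) when invoked
export const handler: APIGatewayProxyHandler = async () => ({
  statusCode: 200,
  body: JSON.stringify({ hello: 'world' }),
})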
The image above shows an example of a serverless architecture implemented on AWS. It is a solution that saves money by using Lambda serverless functions instead of containerized microservices. This way we don't need containers, and we can write code without worrying about servers. It is a modern approach that many companies are using today. However, it comes with some problems.
The first problem is that we need to create a Lambda for each function, so for a large application domain we'll end up with a large number of Lambda functions. Managing that many Lambda functions may not be easy.
The second problem is resource consumption: for example, when many Lambda instances use the same database, the DBMS may have to handle too many connections. This applies to all the resources used, not just the database.
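A common mitigation is to open the connection once, at module scope, so that warm invocations of the same execution environment reuse it. Here is a minimal sketch assuming PostgreSQL with the pg package (AWS also offers RDS Proxy as a managed way to pool connections):

import { Pool } from 'pg'

// Created once per execution environment and reused across warm invocations;
// max: 1 limits each Lambda instance to a single database connection
const pool = new Pool({ max: 1, connectionString: process.env.DATABASE_URL })

export const handler = async () => {
  const { rows } = await pool.query('SELECT now()')
  return { statusCode: 200, body: JSON.stringify(rows[0]) }
}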
The third problem is the startup time when a Lambda is called in the off state: the so-called cold start. If we call a Lambda function that is off, it first has to initialize, which slows down the first execution. The execution environment then stays warm for a while, typically a few minutes, after which it is shut down and the next call pays the startup cost again.
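We can observe this lifecycle with a tiny probe, sketched below: code at module scope runs once per cold start, while the handler body runs on every invocation.

// Module scope: executed once per cold start
const coldStartAt = new Date().toISOString()
let invocations = 0

export const handler = async () => {
  invocations++
  // On a warm invocation, coldStartAt is unchanged and invocations > 1
  return {
    statusCode: 200,
    body: JSON.stringify({ coldStartAt, invocations }),
  }
}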
The image above shows an example of microservices encapsulated within Lambda functions. It may look strange, but it is a powerful and versatile solution: we can write microservices normally, without thinking about serverless concepts; we can deploy them as Lambda functions; and we can migrate them to containers later.
My advice is to start with microservices inside Lambda functions while they have low usage. When a microservice reaches high usage, we migrate it to a container: Fargate for high usage, EC2 for very high usage.
Now I'll show you how to implement this in Node.js using Fastify with TypeScript.
Let's start by creating a route as a Fastify plugin.
import { FastifyPluginAsync } from 'fastify'

const routes: FastifyPluginAsync = async server => {
  server.get('/api/hello-world', async () => {
    return {
      hello: 'world'
    }
  })
}

export default routes
And then we create a simple Fastify application that uses the plugin above.
import fastify from 'fastify'

const port = 5100
const server = fastify()

server.register(import('./routes'))

server.listen({ port }, err => {
  if (err) {
    console.error(err)
    process.exit(1)
  }
  console.log(`http://localhost:${port}/`)
})
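We can try it locally with a tool like ts-node and then request http://localhost:5100/api/hello-world, which should respond with {"hello":"world"}.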
This simple application can be seen as a first step towards creating a microservice. In this case, we are talking about a microservice running in a container, because we are implementing a web server listening on port 5100. Now let's adapt the same application to run as a Lambda function instead.
import fastify from 'fastify'
import awsLambdaFastify from '@fastify/aws-lambda'

const server = fastify()

server.register(import('./routes'))

export const lambda = awsLambdaFastify(server)
We've modified the code to create a Lambda function using '@fastify/aws-lambda'. This Lambda can then be deployed to AWS, for example using CloudFormation or Terraform. As you can see, it's very easy to go from container to Lambda and from Lambda to container.
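Before deploying, we can exercise the exported handler locally by invoking it with a hand-made API Gateway-style event. This is a sketch: the './lambda' file name is an assumption, the event below is trimmed down (real events carry many more fields), and we cast the empty context because we don't build a full Lambda context here.

import { lambda } from './lambda'

// A trimmed-down API Gateway (REST) proxy event, enough for a GET route
const event = {
  httpMethod: 'GET',
  path: '/api/hello-world',
  headers: {},
}

lambda(event, {} as any).then(response => {
  console.log(response.statusCode, response.body) // 200 {"hello":"world"}
})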
In this repository we show an example of using Terraform to deploy a Lambda function.
For container orchestration, we can use EKS or ECS. ECS is the natural first choice if you're on AWS, while EKS is important if you're a Kubernetes expert and want to keep using it.
We've learned a way to write microservices that can run in a container or as Lambda functions, making it very easy to move from Lambda to container and vice versa. This gives us a lot of control over the cost of our architecture, which runs safely in the cloud.
The world of microservices is in continuous evolution. I think that encapsulating microservices in serverless functions is a good practice for services with a low frequency of use. I also believe that a Semi-Monolithic Architecture could become important, but we will talk about that in the next episodes.