Samuel Udeh

Managing high-traffic applications with AWS Elastic Load Balancer and Terraform

Introduction
Managing high-traffic applications can be a daunting challenge, especially when trying to ensure that the system is both scalable and highly available. As businesses grow and their user bases expand, the ability to scale infrastructure dynamically becomes crucial. AWS Elastic Load Balancer (ELB) is one of the most powerful tools for distributing incoming application traffic across multiple targets, such as Amazon EC2 instances, containers, or IP addresses. When paired with Terraform, an infrastructure-as-code tool, it becomes even easier to provision, manage, and scale high-traffic applications.


What is AWS Elastic Load Balancer?
AWS Elastic Load Balancer (ELB) is a fully managed service that automatically distributes incoming application traffic across multiple targets to ensure that your application performs reliably.
AWS offers three types of load balancers:
Application Load Balancer (ALB): Best suited for load balancing HTTP and HTTPS traffic. Operates at the application layer (Layer 7) of the Open Systems Interconnection (OSI) model.
Network Load Balancer (NLB): Best suited for load balancing TCP, UDP, and TLS traffic. Can scale up and down in response to load faster than the ALB (the NLB is designed to scale to tens of millions of requests per second). Operates at the transport layer (Layer 4) of the OSI model.
Classic Load Balancer (CLB): The “legacy” load balancer that predates both the ALB and NLB. It can handle HTTP, HTTPS, TCP, and TLS traffic, but with far fewer features than either the ALB or NLB. Operates at both the application layer (L7) and transport layer (L4) of the OSI model.
Most applications these days should use either the ALB or the NLB. Because the simple web server example in this article is an HTTP app without any extreme performance requirements, the ALB is the best fit.

Key Components
Elastic Load Balancer (ELB): Distributes incoming application traffic across multiple targets, such as EC2 instances.
Auto Scaling Groups (ASG): Automatically adjusts the number of EC2 instances based on traffic demands.
Health Checks: Ensure that only healthy instances receive traffic.
Security Groups: Control access to the load balancer and backend instances.

Setting Up The Infrastructure
Step 1: Define the provider

Provider: Specifies the AWS provider and the region where the resources will be created.
Data Resource: Fetches the Availability Zones available in the region, so resources can be distributed across them.
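A minimal sketch of what this step might look like (the region and the data-source name `available` are illustrative assumptions):

```hcl
# Configure the AWS provider; pick the region closest to your users.
provider "aws" {
  region = "us-east-1"
}

# Fetch the Availability Zones available in the chosen region.
data "aws_availability_zones" "available" {
  state = "available"
}
```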

Step 2: Create the VPC and Subnets

VPC: Creates a VPC with a CIDR block of 10.0.0.0/16.
Internet Gateway: Allows external access to the VPC.
Route Table: Defines routes that send traffic to the internet.
Subnets: Creates two public subnets in different availability zones for high availability.
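The networking layer described above might be sketched as follows. Resource names and subnet CIDRs are illustrative assumptions; the VPC CIDR (10.0.0.0/16) and the two public subnets come from the description above, and the subnets are spread across the Availability Zones fetched in Step 1:

```hcl
resource "aws_vpc" "main" {
  cidr_block = "10.0.0.0/16"
}

# Internet gateway for external access to the VPC.
resource "aws_internet_gateway" "main" {
  vpc_id = aws_vpc.main.id
}

# Route table sending all non-local traffic to the internet gateway.
resource "aws_route_table" "public" {
  vpc_id = aws_vpc.main.id

  route {
    cidr_block = "0.0.0.0/0"
    gateway_id = aws_internet_gateway.main.id
  }
}

# Two public subnets in different AZs for high availability.
resource "aws_subnet" "public" {
  count                   = 2
  vpc_id                  = aws_vpc.main.id
  cidr_block              = cidrsubnet(aws_vpc.main.cidr_block, 8, count.index)
  availability_zone       = data.aws_availability_zones.available.names[count.index]
  map_public_ip_on_launch = true
}

resource "aws_route_table_association" "public" {
  count          = 2
  subnet_id      = aws_subnet.public[count.index].id
  route_table_id = aws_route_table.public.id
}
```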

Step 3: Create Security Groups

Security Group: Allows incoming traffic on port 80 (HTTP) from any IP address. Egress rules allow all outbound traffic.
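A sketch of such a security group (the name is an illustrative assumption):

```hcl
resource "aws_security_group" "alb" {
  name   = "alb-sg"
  vpc_id = aws_vpc.main.id

  # Allow HTTP (port 80) from any IP address.
  ingress {
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  # Allow all outbound traffic.
  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}
```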

Step 4: Create Application Load Balancer

ALB: Creates an application load balancer that distributes incoming traffic across the healthy targets. It is publicly accessible and associated with the security group and public subnets.
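A minimal sketch of the ALB resource, wired to the security group and public subnets from the previous steps (the load balancer name is an illustrative assumption):

```hcl
resource "aws_lb" "main" {
  name               = "high-traffic-alb"
  internal           = false # publicly accessible
  load_balancer_type = "application"
  security_groups    = [aws_security_group.alb.id]
  subnets            = aws_subnet.public[*].id
}
```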

Step 5: Create Target Group

Target Group: Defines a group of instances that the ALB will forward traffic to. It includes health checks to monitor the status of the instances.
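The target group might look like this sketch; the name, health-check path, and thresholds are illustrative assumptions:

```hcl
resource "aws_lb_target_group" "app" {
  name     = "app-tg"
  port     = 80
  protocol = "HTTP"
  vpc_id   = aws_vpc.main.id

  # Only instances passing this check receive traffic.
  health_check {
    path                = "/"
    interval            = 30
    healthy_threshold   = 2
    unhealthy_threshold = 2
    matcher             = "200"
  }
}
```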

Step 6: Create Load Balancer Listener

Listener: Configures the ALB to listen on port 80 and forward traffic to the target group.
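A sketch of the listener, forwarding port 80 traffic to the target group from the previous step:

```hcl
resource "aws_lb_listener" "http" {
  load_balancer_arn = aws_lb.main.arn
  port              = 80
  protocol          = "HTTP"

  # Forward all matching requests to the target group.
  default_action {
    type             = "forward"
    target_group_arn = aws_lb_target_group.app.arn
  }
}
```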

Step 7: Create Launch Template

Launch Template: Defines the configuration for EC2 instances, including the AMI ID, instance type, and network settings.
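A sketch of the launch template. The AMI ID is a placeholder you must replace for your region, the instance type and user data are illustrative, and for simplicity the instances reuse the ALB's port-80 security group (in production you would typically give instances their own group that only accepts traffic from the ALB):

```hcl
resource "aws_launch_template" "app" {
  name_prefix   = "app-"
  image_id      = "ami-0123456789abcdef0" # replace with a valid AMI for your region
  instance_type = "t3.micro"

  vpc_security_group_ids = [aws_security_group.alb.id]

  # Simple web server so the target group health checks pass (illustrative).
  user_data = base64encode(<<-EOF
    #!/bin/bash
    yum install -y httpd
    echo "Hello from $(hostname)" > /var/www/html/index.html
    systemctl enable --now httpd
  EOF
  )
}
```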

Step 8: Create Auto Scaling Group

ASG: Manages a group of EC2 instances. It defines the desired, minimum, and maximum number of instances. It uses the launch template for configuration.
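The ASG might be sketched like this; the capacity numbers are illustrative assumptions. Registering the target group and using ELB health checks ties the ASG to the load balancer, so unhealthy instances are replaced automatically:

```hcl
resource "aws_autoscaling_group" "app" {
  desired_capacity    = 2
  min_size            = 2
  max_size            = 6
  vpc_zone_identifier = aws_subnet.public[*].id
  target_group_arns   = [aws_lb_target_group.app.arn]
  health_check_type   = "ELB"

  # Launch new instances from the template defined in the previous step.
  launch_template {
    id      = aws_launch_template.app.id
    version = "$Latest"
  }
}
```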

Step 9: Create Scaling Policies

Scaling Policies: Define how the ASG should respond to changes in demand. The scale-out policy increases the number of instances, while the scale-in policy decreases them.
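A sketch of a pair of simple scaling policies (names, adjustments, and cooldowns are illustrative assumptions). On their own these policies do nothing; in practice they are invoked by CloudWatch alarms, for example one on average CPU utilization:

```hcl
# Add one instance when triggered (e.g. by a high-CPU CloudWatch alarm).
resource "aws_autoscaling_policy" "scale_out" {
  name                   = "scale-out"
  autoscaling_group_name = aws_autoscaling_group.app.name
  adjustment_type        = "ChangeInCapacity"
  scaling_adjustment     = 1
  cooldown               = 300
}

# Remove one instance when triggered (e.g. by a low-CPU CloudWatch alarm).
resource "aws_autoscaling_policy" "scale_in" {
  name                   = "scale-in"
  autoscaling_group_name = aws_autoscaling_group.app.name
  adjustment_type        = "ChangeInCapacity"
  scaling_adjustment     = -1
  cooldown               = 300
}
```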

Step 10: Create Output

Output: Displays the DNS name of the load balancer once the infrastructure is created.
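The output might look like this sketch (the output name is an illustrative assumption). After `terraform apply` completes, you can open the printed DNS name in a browser to reach the application through the load balancer:

```hcl
output "alb_dns_name" {
  description = "Public DNS name of the Application Load Balancer"
  value       = aws_lb.main.dns_name
}
```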


Congratulations!
This Terraform configuration sets up a robust architecture for handling high-traffic applications by combining an Application Load Balancer with an Auto Scaling Group. The ALB ensures efficient traffic distribution, while the ASG automatically adjusts the number of EC2 instances based on demand. This setup enhances reliability, scalability, and performance.
