Otu Udo

Deploying a Highly Available Web App on AWS Using Terraform

In today’s cloud-centric world, scalability, reliability, and automation are key to maintaining robust web applications. Terraform, an infrastructure-as-code (IaC) tool, simplifies deploying complex infrastructure setups. This post will guide you through deploying a clustered web server on AWS with Terraform, incorporating an Auto Scaling Group and an Application Load Balancer (ALB) for enhanced performance and high availability.


What You'll Learn

  1. Set up an Auto Scaling Group (ASG) for managing server instances.
  2. Configure an Application Load Balancer to distribute traffic.
  3. Use a Launch Template for consistent instance provisioning.
  4. Test the deployment by simulating scaling behaviour.

Step 1: Key Components of the Architecture

  • AWS Launch Template: Defines the AMI, instance type, and configuration for new instances.
  • Auto Scaling Group (ASG): Manages the desired number of instances and adjusts capacity based on demand.
  • Application Load Balancer (ALB): Distributes incoming traffic evenly across the instances.
  • Security Group: Manages inbound and outbound traffic for your instances.

Step 2: Terraform Script Overview

Below is the Terraform configuration file (main.tf) that sets up the infrastructure:

provider "aws" {
  region = "us-east-1"
}

# Variable for SSH Key
variable "key_name" {
  type        = string
  description = "Name of an existing EC2 key pair for SSH access" # supplied at apply time, not hard-coded
}

# Security Group for Web Server
resource "aws_security_group" "web_sg" {
  name        = "web_sg"
  description = "Allow HTTP and SSH"

  ingress {
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}

# Launch Template for Auto Scaling Group
resource "aws_launch_template" "web_launch_template" {
  name          = "web-server-template"
  image_id      = "ami-0453ec754f44f9a4a" # Amazon Linux 2 (us-east-1); AMI IDs are region-specific
  instance_type = "t2.micro"
  key_name      = var.key_name

  user_data = base64encode(<<-EOF
              #!/bin/bash
              sudo yum update -y
              sudo yum install -y httpd
              sudo systemctl start httpd
              sudo systemctl enable httpd
              echo "Hello from Server $(hostname)" > /var/www/html/index.html
              EOF
  )

  network_interfaces {
    security_groups = [aws_security_group.web_sg.id] # reference the security group defined above
  }

  tag_specifications {
    resource_type = "instance"
    tags = {
      Name = "web-server"
    }
  }
}

# Auto Scaling Group
resource "aws_autoscaling_group" "web_asg" {
  desired_capacity     = 2
  max_size             = 5
  min_size             = 1
  launch_template {
    id      = aws_launch_template.web_launch_template.id
    version = "$Latest"
  }

  vpc_zone_identifier = ["subnet-036596d61d8685f5a", "subnet-08a1646c47df46597"] # Replace with your subnets

  target_group_arns = [aws_lb_target_group.web_lb_tg.arn]

  tag {
    key                 = "Name"
    value               = "web-server-asg"
    propagate_at_launch = true
  }
}

# Application Load Balancer
resource "aws_lb" "web_lb" {
  name               = "web-cluster-lb"
  internal           = false
  load_balancer_type = "application"
  security_groups    = [aws_security_group.web_sg.id]
  subnets            = ["subnet-036596d61d8685f5a", "subnet-08a1646c47df46597"] # Replace with your subnets

  enable_deletion_protection = false
}

# Target Group
resource "aws_lb_target_group" "web_lb_tg" {
  name        = "web-target-group"
  port        = 80
  protocol    = "HTTP"
  vpc_id      = "vpc-06c4398e7067f32b4" # Replace with your VPC ID
  target_type = "instance"

  health_check {
    path                = "/"
    interval            = 30
    timeout             = 5
    healthy_threshold   = 2
    unhealthy_threshold = 2
    matcher             = "200"
  }
}

# Listener for Load Balancer
resource "aws_lb_listener" "web_lb_listener" {
  load_balancer_arn = aws_lb.web_lb.arn
  port              = 80
  protocol          = "HTTP"

  default_action {
    type             = "forward"
    target_group_arn = aws_lb_target_group.web_lb_tg.arn
  }
}

Step 3: Deploying the Infrastructure

  1. Initialize Terraform:
   terraform init
  2. Review the Plan:
   terraform plan
  • Check the planned changes and resources to be created.
  3. Apply the Configuration:
   terraform apply
  • When prompted, enter a value for key_name (the name of your EC2 key pair), then confirm with yes.
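Rather than typing the key pair name at every apply, you can place it in a terraform.tfvars file, which Terraform loads automatically (the key pair name below is a placeholder; use one that exists in your account):

```hcl
# terraform.tfvars — loaded automatically by terraform plan/apply
key_name = "my-keypair" # placeholder; must match an existing EC2 key pair
```

Keep this file out of version control if the variable values are sensitive.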

Step 4: Verifying the Deployment

  1. Load Balancer:


  • Find the DNS Name of the ALB in the AWS Console and open it in your browser.
  • You should see: "Hello from Server <hostname>".
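Instead of hunting for the DNS name in the console, you can have Terraform print it after each apply. A small addition to main.tf (the output name alb_dns_name is just a label chosen here):

```hcl
# Output the ALB's DNS name after apply
output "alb_dns_name" {
  value = aws_lb.web_lb.dns_name
}
```

After `terraform apply`, run `terraform output alb_dns_name` and open the printed hostname in your browser.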


  2. Auto Scaling:


  • Test scaling by adjusting the desired_capacity in the Auto Scaling Group and reapplying Terraform:

     desired_capacity = 3
    

Compare the EC2 console before and after reapplying: the instance count rises from 2 to 3.


Conclusion

By leveraging Terraform, you’ve created a scalable and robust web server deployment. The Auto Scaling Group ensures high availability, while the Load Balancer distributes traffic efficiently. This setup serves as a strong foundation for more advanced infrastructure automation.

Have you tried deploying an Auto Scaling web server? Share your thoughts and experiences in the comments!
