Introduction
In a previous project, "Docker to the Rescue: Deploying React and FastAPI App With Monitoring", I explored Docker's transformative power in simplifying application deployment.
However, as deployment requirements grow, automation becomes essential to ensure consistency, efficiency, and scalability. Manual deployments, particularly during updates, are prone to inefficiencies and inconsistencies, which makes them a significant challenge in software delivery. Automation addresses this by enabling seamless rollouts of application and infrastructure changes, reducing downtime and human error.
In this article, I will go through a project where I leveraged automation to deploy a React and FastAPI application by:
- Provisioning Azure resources with Terraform.
- Integrating Ansible with Terraform to configure the infrastructure.
- Orchestrating application deployment using Docker Compose.
Terraform provisions the server infrastructure required to host the application, while Ansible manages the configuration and setup of the provisioned server. The application (complete application and monitoring stack) is containerized and orchestrated using Docker Compose, ensuring seamless deployment and efficient resource management.
By integrating these tools, the setup becomes repeatable, robust, and adaptable to evolving deployment needs.
Architecture Overview
*Overview of the complete architecture*
Implementation Steps
I will walk you through the steps I took to set up this automation process. You can find the application configuration code on my GitHub.
Docker and Docker Hub
First, I tagged my locally built application images with their Docker Hub repository names and pushed them:
```bash
# tag the local images with the Docker Hub repository names, then push
docker tag frontend utibeokon/frontend:v2.0
docker tag backend utibeokon/backend:latest
docker push utibeokon/frontend:v2.0
docker push utibeokon/backend:latest
```
The respective images can be found here.
- Frontend: https://hub.docker.com/r/utibeokon/frontend
- Backend: https://hub.docker.com/r/utibeokon/backend
Docker Compose
The docker-compose configuration files live in the project folder: two compose files, plus service-specific config files named after their respective services. The compose files declare the following services:
`docker-compose.yml`
- Frontend
- Backend
- Postgres
- Adminer
- Traefik
`monitoring/docker-compose.yml`
- Prometheus
- Grafana
- Loki
- Promtail
- cAdvisor
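To make the layout concrete, here is an abridged sketch of the core compose file. The image names match those pushed above, but the network name `app-net` and the Postgres tag are illustrative assumptions, and Adminer, Traefik, and the monitoring services are omitted for brevity:
```yaml
# docker-compose.yml (abridged sketch; the full file in the repo also
# declares Adminer, Traefik, volumes, and routing labels)
services:
  frontend:
    image: utibeokon/frontend:v2.0
    networks: [app-net]

  backend:
    image: utibeokon/backend:latest
    depends_on: [postgres]
    networks: [app-net]

  postgres:
    image: postgres:16               # tag assumed for illustration
    environment:
      POSTGRES_PASSWORD: change-me   # use a secret in practice
    networks: [app-net]

networks:
  app-net:
```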
Here is an overview of how the services interact within a shared Docker network:
*Service interaction diagram*
- The frontend connects to the backend.
- The backend connects to Postgres for database interaction.
- Adminer serves as a database dashboard for administration.
- Traefik (running on ports 80 and 443) acts as a reverse proxy within the network, handling routing, HTTPS redirection, and load balancing of requests received by the server.
- cAdvisor gathers container metrics, which Prometheus scrapes (see the sample scrape config after this list).
- Promtail gathers logs for Loki.
- Prometheus and Loki store the metrics and logs.
- Grafana uses Prometheus and Loki as data sources and displays the data graphically in dashboards.
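On the Prometheus side, that cAdvisor flow is just a scrape job. A minimal sketch, assuming cAdvisor's default port 8080 and the container's DNS name `cadvisor` on the shared network:
```yaml
# monitoring/prometheus.yml (illustrative; job name and target are assumptions)
scrape_configs:
  - job_name: cadvisor
    static_configs:
      - targets: ["cadvisor:8080"]
```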
Be sure to review the routing configuration for Traefik; you can modify it to fit your routing needs.
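In a compose file, that routing configuration typically takes the form of Traefik v2-style labels. In this sketch, the domain, router name, and `websecure` entrypoint are placeholders for your own values:
```yaml
# Illustrative Traefik routing labels on the frontend service
# (domain, router name, and entrypoint are placeholders)
frontend:
  image: utibeokon/frontend:v2.0
  labels:
    - "traefik.enable=true"
    - "traefik.http.routers.frontend.rule=Host(`example.com`)"
    - "traefik.http.routers.frontend.entrypoints=websecure"
    - "traefik.http.routers.frontend.tls=true"
```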
Terraform
The terraform directory contains configurations to provision a virtual network, a network security group, and a virtual machine on Azure. These configs are split across two modules, `network` and `vm`. The setup also uses Azure Blob Storage as the remote backend.
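For reference, an `azurerm` remote backend is declared in a block like the sketch below; the resource group, storage account, and container names here are placeholders to be matched to your own state storage:
```hcl
# backend.tf (illustrative; names are placeholders for your own resources)
terraform {
  backend "azurerm" {
    resource_group_name  = "tfstate-rg"
    storage_account_name = "tfstatestorage"
    container_name       = "tfstate"
    key                  = "terraform.tfstate"
  }
}
```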
```hcl
# main.tf file

# provision a virtual network from the network module
module "network" {
  source              = "./modules/network"
  resource_group_name = var.resource_group_name
  location            = var.location
  vnet_name           = "server-vnet"
  subnet_name         = "server-subnet"
  nsg_name            = "server-nsg"
  allowed_ports       = ["22", "80", "443"]
}

# provision a virtual machine instance from the vm module
module "vm" {
  source              = "./modules/vm"
  resource_group_name = var.resource_group_name
  location            = var.location
  vm_name             = "server"
  vm_size             = "Standard_B2s"
  admin_username      = var.admin_username
  ssh_public_key      = file(var.ssh_key_path)
  subnet_id           = module.network.subnet_id
}

# dynamically create an inventory file for ansible
resource "local_file" "inventory" {
  content  = <<EOT
[servers]
${module.vm.public_ip} ansible_user=${var.admin_username} ansible_ssh_private_key_file=~/.ssh/id_rsa
EOT
  filename = "../ansible/inventory.ini"
}

# trigger the vm configuration with ansible
resource "null_resource" "ansible_provisioner" {
  depends_on = [
    local_file.inventory,
    module.vm,
    module.network
  ]

  triggers = {
    always_run = timestamp()
  }

  provisioner "local-exec" {
    command = <<-EOT
      sleep 60 # allow time for the public ip to update
      ANSIBLE_HOST_KEY_CHECKING=False ansible-playbook \
        -i ../ansible/inventory.ini \
        ../ansible/playbook.yml
    EOT
  }
}

# dns record for the server
# dns zone (already created manually)
data "azurerm_dns_zone" "domain" {
  name                = var.domain_name
  resource_group_name = var.resource_group_name
}

resource "azurerm_dns_a_record" "domain" {
  name                = "@"
  zone_name           = data.azurerm_dns_zone.domain.name
  resource_group_name = var.resource_group_name
  ttl                 = 3600
  records             = [module.vm.public_ip]
}
```
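The glue between `main.tf` and the modules is their outputs: `module.network.subnet_id` feeds the VM, and `module.vm.public_ip` feeds the inventory file and the DNS record. A minimal sketch of those outputs (the real modules live in the repo; resource names like `azurerm_public_ip.this` are placeholders):
```hcl
# modules/network/outputs.tf (illustrative; resource names are placeholders)
output "subnet_id" {
  value = azurerm_subnet.this.id
}

# modules/vm/outputs.tf
output "public_ip" {
  value = azurerm_public_ip.this.ip_address
}
```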
Upon running `terraform apply`, Terraform will:
- Provision the resources on Azure
- Retrieve the public IP and create an `inventory.ini` file in the ansible directory with the public IP details
- Trigger Ansible to start configuring the server once it is ready
After inspecting the configurations, you can cd into the terraform directory and run:
```bash
terraform init
terraform apply -auto-approve
```
Ansible
The Ansible setup consists of a single playbook and three roles. The roles are responsible for:
- Preparing the server
- Copying necessary files
- Deploying the application
The playbook simply calls all three roles (a sketch of one role follows it below):
```yaml
# playbook.yml
- name: Deploy Docker-based application
  hosts: servers
  become: yes
  become_user: root
  gather_facts: yes
  roles:
    - server-setup
    - copy-files
    - deploy-app
```
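The actual role contents are in the repo, but as a rough sketch, the `server-setup` role might install and start Docker along these lines (package names assume an Ubuntu/apt-based VM image):
```yaml
# roles/server-setup/tasks/main.yml (illustrative sketch; the real role
# is in the repo, and package names assume an Ubuntu image)
- name: Install prerequisites
  apt:
    name: [ca-certificates, curl]
    state: present
    update_cache: yes

- name: Install Docker and the compose plugin
  apt:
    name: [docker.io, docker-compose-v2]
    state: present

- name: Ensure Docker is running and enabled on boot
  service:
    name: docker
    state: started
    enabled: yes
```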
Deployed Application
On a successful run of `terraform apply -auto-approve`, Terraform prints the details of the provisioned server, including its public IP.
If you used the DNS configuration for Azure DNS, you can access the application via the domain name. Otherwise, copy the IP address and create an A record on your DNS provider's website that maps your domain to it.
Challenges Faced
- Ensuring seamless integration between Terraform and Ansible: the Ansible run had to wait until the VM and its public IP were actually ready (hence the sleep in the provisioner).
- Managing sensitive data like SSH keys, credentials, and API tokens securely across both Terraform and Ansible.
- Debugging Ansible Playbooks: Playbook errors due to mismatched dependencies or configurations on the target VMs slowed down the deployment.
Best Practices
- Automation workflow: Automate every step of the process, from provisioning to configuration, to reduce human error and improve reliability.
- State management: Use remote backends for Terraform state files to enable team collaboration and state consistency.
- Testing and validation: Validate Terraform configurations with `terraform validate` and `terraform plan`, and test Ansible playbooks in isolated environments using Molecule before running them in production (see the commands below).
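For instance, a quick pre-apply check from the terraform directory might look like this:
```bash
# catch syntax and configuration errors, then preview changes before applying
terraform validate
terraform plan -out=tfplan
```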
Conclusion
This project demonstrates the power of combining Terraform and Ansible to achieve seamless automation for infrastructure provisioning and application deployment. By leveraging Infrastructure as Code (IaC) and Configuration Management, it’s possible to create a repeatable, reliable pipeline that significantly reduces manual effort.
The challenges encountered, such as ensuring integration between tools and managing sensitive data, provided invaluable learning opportunities. Following best practices like modular design, robust state management, and automated testing ensures that the solution is both scalable and secure.
As a next step, this workflow could be extended by incorporating CI/CD pipelines, adding alerting mechanisms for monitoring, or scaling to multi-cloud environments. Automation isn’t just about efficiency—it’s about building systems that evolve with your needs.