Automating Multi-Cloud Infrastructure with Terraform: Handling Provider Differences

mark mwendia

Why Multi-Cloud and Terraform?

As businesses continue to scale and adopt cloud-native architectures, many find themselves relying on more than one cloud provider for their infrastructure. This "multi-cloud" approach offers flexibility, resilience, and optimized costs. However, managing infrastructure across multiple cloud platforms—each with its own APIs, tools, and configurations—can become overwhelming.

Enter Terraform, an open-source infrastructure as code (IaC) tool that allows you to define cloud and on-prem infrastructure using declarative configurations. Terraform's support for multiple providers—AWS, Google Cloud, Azure, and more—makes it the ideal tool for managing multi-cloud environments. In this article, we'll explore how you can automate infrastructure across multiple clouds using Terraform while addressing provider-specific differences.

Prerequisites

Before diving into the code and concepts, ensure you have the following:

  • Basic Knowledge of Terraform: Familiarity with Terraform basics, such as providers, resources, and modules.
  • Access to Cloud Accounts: You’ll need accounts with AWS, Google Cloud, Azure, or any combination of these to follow along (the examples here use AWS and Google Cloud).
  • Terraform Installed: Ensure you have Terraform installed locally (v1.x).
  • Credentials Setup: API keys or credentials for each cloud provider. You can use Terraform's provider-specific authentication methods (a brief sketch follows this list).
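
For the credentials themselves, a common approach is to avoid hardcoding keys and instead let each provider pick them up from its standard environment variables or local credential files. A minimal sketch of what the provider blocks can look like in that case (the project ID is a placeholder):

```hcl
# The AWS provider reads AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY,
# or falls back to ~/.aws/credentials
provider "aws" {
  region = "us-west-2"
}

# The Google provider reads GOOGLE_APPLICATION_CREDENTIALS
# (a path to a service account key file) or gcloud's application-default credentials
provider "google" {
  project = "your-google-cloud-project-id"
  region  = "us-central1"
}
```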

Why Go Multi-Cloud?

  1. Flexibility & Optimization: Each cloud provider has its own strengths; AWS might offer better networking options, while Google Cloud is often preferred for data analytics. By adopting a multi-cloud strategy, businesses can leverage the best offerings from each provider.
  2. Resilience & Availability: Relying on a single cloud provider creates a single point of failure. With a multi-cloud approach, your infrastructure becomes more resilient: if one cloud experiences downtime, another can handle the load.

Step 1: Setting Up Multiple Providers

The key to multi-cloud infrastructure automation with Terraform is using multiple providers. Providers are what Terraform uses to interact with APIs of different platforms. Each cloud provider (AWS, Google Cloud, Azure, etc.) has its own Terraform provider plugin.

Example: Defining Multiple Providers in Terraform
Here's a basic configuration for AWS and Google Cloud providers:

```hcl
# Configure the AWS provider
provider "aws" {
  region = "us-west-2"
  access_key = "your-access-key"
  secret_key = "your-secret-key"
}

# Configure the Google Cloud provider
provider "google" {
  credentials = file("<path-to-your-credentials-file>")
  project     = "your-google-cloud-project-id"
  region      = "us-central1"
}
```

Explanation:

  • The provider block tells Terraform which cloud platform to interact with.
  • You can define multiple providers within a single Terraform configuration file.
  • Each provider block can be parameterized, for example by passing the region or project in through input variables, so the same configuration can serve different environments (see the sketch below).
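
For example, a region can come from an input variable, and provider aliases let a single configuration talk to several regions of the same cloud. A brief sketch (the variable, alias, and bucket names are illustrative):

```hcl
variable "aws_region" {
  type    = string
  default = "us-west-2"
}

# Default AWS provider, driven by a variable
provider "aws" {
  region = var.aws_region
}

# A second, aliased AWS provider for another region
provider "aws" {
  alias  = "east"
  region = "us-east-1"
}

# Resources opt into the aliased provider explicitly
resource "aws_s3_bucket" "replica" {
  provider = aws.east
  bucket   = "my-multi-cloud-replica-bucket"
}
```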

Step 2: Managing Provider-Specific Differences

Every cloud provider has unique resource definitions, API behavior, and configurations. Handling these differences is crucial for successfully deploying a multi-cloud infrastructure. In Terraform, this is typically managed by writing provider-specific modules and using conditionals for provider differences.

Example: Creating Provider-Specific Resources
Let’s deploy a virtual machine (VM) on both AWS and Google Cloud. Here’s how you handle the differences between the two providers.

  1. AWS EC2 Instance Configuration:

```hcl
resource "aws_instance" "aws_vm" {
  ami           = "ami-12345678"
  instance_type = "t2.micro"

  tags = {
    Name = "Multi-Cloud-AWS-VM"
  }
}
```

  2. Google Cloud Compute Instance Configuration:

```hcl
resource "google_compute_instance" "gcp_vm" {
  name         = "multi-cloud-gcp-vm"
  machine_type = "f1-micro"
  zone         = "us-central1-a"

  boot_disk {
    initialize_params {
      image = "debian-cloud/debian-12"
    }
  }

  network_interface {
    network = "default"
  }
}
```

Explanation:

  • In the AWS block, we specify an Amazon Machine Image (AMI) ID and an instance type to create the virtual machine.
  • In Google Cloud, we use a google_compute_instance resource, which requires an explicit machine type, zone, boot disk image, and network interface.
  • The key takeaway is that while the end goal (a VM) is the same, the way it's defined and deployed varies by provider.

Step 3: Leveraging Terraform Modules for Reusability

Terraform modules allow you to encapsulate and reuse code. In a multi-cloud environment, you can create provider-specific modules and call them conditionally based on which provider you're deploying to.

Example: Creating a Module for VM Deployment
Let’s create a simple module that deploys a VM, abstracting the provider differences.

File: vm_module/main.tf

```hcl
variable "cloud_provider" {
  type    = string
  default = "aws"
}

# AWS VM Configuration
resource "aws_instance" "vm" {
  count         = var.cloud_provider == "aws" ? 1 : 0
  ami           = "ami-12345678"
  instance_type = "t2.micro"
}

# GCP VM Configuration
resource "google_compute_instance" "vm" {
  count         = var.cloud_provider == "gcp" ? 1 : 0
  name          = "multi-cloud-gcp-vm"
  machine_type  = "f1-micro"
  zone          = "us-central1-a"

  boot_disk {
    initialize_params {
      image = "debian-cloud/debian-12"
    }
  }

  network_interface {
    network = "default"
  }
}
```

Explanation:

  • The count meta-argument conditionally creates resources based on the value of var.cloud_provider: if it is "aws", the AWS instance is created; if "gcp", the GCP instance is created.
  • The variable is named cloud_provider rather than provider to avoid any confusion with Terraform's own provider meta-argument.

Now, in your main Terraform configuration, you can call this module and pass the cloud_provider value:

```hcl
module "vm_deployment" {
  source         = "./vm_module"
  cloud_provider = "aws"  # Change this to "gcp" for Google Cloud
}
```
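
Because the provider choice is just an input variable, you can also call the module twice in the same configuration to deploy to both clouds in a single run, for example:

```hcl
# One instance of the module per cloud
module "aws_vm" {
  source         = "./vm_module"
  cloud_provider = "aws"
}

module "gcp_vm" {
  source         = "./vm_module"
  cloud_provider = "gcp"
}
```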

Step 4: Handling Differences in Networking

Each cloud provider handles networking differently. AWS uses regional VPCs (Virtual Private Clouds) with subnets tied to Availability Zones, while Google Cloud uses global VPC networks with regional subnetworks.

Example: Setting Up Networking for AWS and GCP
For AWS:

```hcl
resource "aws_vpc" "vpc" {
  cidr_block = "10.0.0.0/16"

  tags = {
    Name = "multi-cloud-aws-vpc"
  }
}

resource "aws_subnet" "subnet" {
  vpc_id            = aws_vpc.vpc.id
  cidr_block        = "10.0.1.0/24"
  availability_zone = "us-west-2a"
}
```

For Google Cloud:

```hcl
resource "google_compute_network" "vpc" {
  name = "multi-cloud-gcp-vpc"
}

resource "google_compute_subnetwork" "subnet" {
  name          = "multi-cloud-gcp-subnet"
  ip_cidr_range = "10.0.1.0/24"
  region        = "us-central1"
  network       = google_compute_network.vpc.name
}
```
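
With these networks in place, the VMs from Step 2 can be attached to them instead of the provider defaults. A sketch of the adjusted resources, referencing the subnet and subnetwork defined above (other values are the same placeholders used earlier):

```hcl
# AWS instance placed in the subnet created above
resource "aws_instance" "aws_vm" {
  ami           = "ami-12345678"
  instance_type = "t2.micro"
  subnet_id     = aws_subnet.subnet.id
}

# GCP instance attached to the subnetwork created above
resource "google_compute_instance" "gcp_vm" {
  name         = "multi-cloud-gcp-vm"
  machine_type = "f1-micro"
  zone         = "us-central1-a"

  boot_disk {
    initialize_params {
      image = "debian-cloud/debian-12"
    }
  }

  network_interface {
    subnetwork = google_compute_subnetwork.subnet.id
  }
}
```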

Step 5: Automating with CI/CD

For real-world deployments, you’ll want to automate the entire infrastructure provisioning process. Using CI/CD pipelines with tools like GitLab, Jenkins, or GitHub Actions, you can trigger Terraform deployments whenever changes are made to your infrastructure code.

Example: Automating Multi-Cloud with GitHub Actions
Here’s a simple GitHub Actions workflow that runs Terraform to provision infrastructure across both AWS and Google Cloud.

File: .github/workflows/terraform.yml

```yaml
name: Multi-Cloud Terraform Deployment

on:
  push:
    branches:
      - main

jobs:
  terraform:
    runs-on: ubuntu-latest
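
    # The AWS and Google providers can read credentials from these standard
    # environment variables; store the values as GitHub Actions secrets
    # (the secret names below are placeholders for whatever you configure).
    env:
      AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
      AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
      GOOGLE_CREDENTIALS: ${{ secrets.GOOGLE_CREDENTIALS }}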

    steps:
      - name: Checkout code
        uses: actions/checkout@v4

      - name: Setup Terraform
        uses: hashicorp/setup-terraform@v3

      - name: Initialize Terraform
        run: terraform init

      - name: Plan Terraform
        run: terraform plan

      - name: Apply Terraform
        run: terraform apply -auto-approve
```

Explanation:

  • The workflow runs every time you push changes to the main branch.
  • It checks out your Terraform code, initializes Terraform, produces a plan, and applies the changes across your defined cloud providers, using the credentials supplied through GitHub Actions secrets.

Step 6: Best Practices for Managing Multi-Cloud with Terraform

  1. Provider-Specific Modules: Abstract provider differences by creating reusable modules.
  2. Environment Segregation: Use workspaces or separate state files for managing environments (e.g., dev, staging, production).
  3. Remote State Management: Store Terraform state files in a remote backend like AWS S3 or Google Cloud Storage to avoid conflicts and ensure consistency.
  4. Version Pinning: Pin the versions of your Terraform providers to avoid breaking changes when providers are updated (see the sketch after this list).
  5. Security and Compliance: Leverage IAM roles and policies for fine-grained access control to avoid exposing sensitive credentials.
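
As an illustration of points 3 and 4, here is a minimal sketch of a terraform block that pins provider versions and stores state in a remote S3 backend (the bucket name and key are placeholders):

```hcl
terraform {
  required_version = ">= 1.0"

  # Pin provider versions to avoid surprise breaking changes
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
    google = {
      source  = "hashicorp/google"
      version = "~> 5.0"
    }
  }

  # Keep state in a remote backend shared by the whole team
  backend "s3" {
    bucket = "your-terraform-state-bucket"
    key    = "multi-cloud/terraform.tfstate"
    region = "us-west-2"
  }
}
```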

Conclusion

Managing multi-cloud infrastructure can be complex, but Terraform simplifies the process by providing a unified, declarative approach to provisioning resources. By understanding and addressing provider-specific differences, leveraging reusable modules, and automating deployments via CI/CD pipelines, you can build a resilient, flexible, and scalable infrastructure across multiple cloud providers.
