Otu Udo

Managing Terraform State: Best Practices for DevOps

When working with Terraform, managing the state file is crucial to ensuring the reliability and security of your infrastructure. The Terraform state file is the single source of truth for your infrastructure, storing information about the resources you've deployed. Let's explore best practices for managing Terraform state using S3 and DynamoDB and walk through a step-by-step implementation using the code provided below.


Step-by-Step Guide to Managing Terraform State

Step 1: Set Up the AWS Provider

The first step is to configure the AWS provider, specifying the region where resources will be created. In this example, the us-east-1 region is used:

provider "aws" {
  region = "us-east-1"
}
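
Although not part of the original snippet, it is common to also pin the Terraform and provider versions so every teammate resolves the same releases. A minimal sketch; the constraints below are assumed examples to adjust for your environment:

# Assumed version constraints; pick values that match your setup.
terraform {
  required_version = ">= 1.5.0"

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
  }
}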

Step 2: Create an S3 Bucket for Remote State Storage

An S3 bucket is configured to store the Terraform state file. The prevent_destroy lifecycle rule ensures the bucket cannot be accidentally deleted:

resource "aws_s3_bucket" "terraform_state" {
  bucket = "otu-bucket-state"
  lifecycle {
    prevent_destroy = true
  }
}

Step 3: Enable Versioning on the S3 Bucket

Versioning is enabled on the bucket to maintain a history of state file revisions. This allows recovery in case of accidental overwrites:

resource "aws_s3_bucket_versioning" "enabled" {
  bucket = aws_s3_bucket.terraform_state.id
  versioning_configuration {
    status = "Enabled"
  }
}
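
If you ever need to roll back, the stored versions can be inspected and retrieved with the AWS CLI. A quick sketch, using the bucket from this walkthrough and the state key configured later in Step 7 (VERSION_ID is a placeholder):

# List every stored version of the state object.
aws s3api list-object-versions \
  --bucket otu-bucket-state \
  --prefix global/s3/terraform.tfstate

# Download a specific older version for inspection or recovery.
aws s3api get-object \
  --bucket otu-bucket-state \
  --key global/s3/terraform.tfstate \
  --version-id VERSION_ID \
  terraform.tfstate.old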

Step 4: Set Up Server-Side Encryption

To secure the state file, server-side encryption using the AES256 algorithm is enabled:

resource "aws_s3_bucket_server_side_encryption_configuration" "default" {
  bucket = aws_s3_bucket.terraform_state.id
  rule {
    apply_server_side_encryption_by_default {
      sse_algorithm = "AES256"
    }
  }
}
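
AES256 here refers to SSE-S3, where AWS manages the encryption key. If your team prefers a customer-managed KMS key, the same resource accepts aws:kms. A hedged sketch, where the aws_kms_key resource is an assumed addition rather than part of the original code:

# Assumed customer-managed key; tune rotation and key policy to your needs.
resource "aws_kms_key" "terraform_state" {
  description         = "Encrypts the Terraform state bucket"
  enable_key_rotation = true
}

resource "aws_s3_bucket_server_side_encryption_configuration" "kms" {
  bucket = aws_s3_bucket.terraform_state.id
  rule {
    apply_server_side_encryption_by_default {
      sse_algorithm     = "aws:kms"
      kms_master_key_id = aws_kms_key.terraform_state.arn
    }
  }
}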

Step 5: Block Public Access to the S3 Bucket

Public access to the S3 bucket is explicitly blocked to protect sensitive information:

resource "aws_s3_bucket_public_access_block" "public_access" {
  bucket = aws_s3_bucket.terraform_state.id
  block_public_acls = true
  block_public_policy = true
  ignore_public_acls = true
  restrict_public_buckets = true
}

Step 6: Create a DynamoDB Table for State Locking

A DynamoDB table is configured to enable state locking, which prevents concurrent updates to the state file, such as two team members running terraform apply against the same state at the same time:

resource "aws_dynamodb_table" "terraform_locks" {
  name         = "state-locks"
  billing_mode = "PAY_PER_REQUEST"
  hash_key     = "LockID"
  attribute {
    name = "LockID"
    type = "S"
  }
}
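
The lock is acquired and released automatically on each run. If a run is interrupted and leaves a stale lock behind, Terraform's built-in force-unlock command can clear it; the lock ID below is a placeholder for the value Terraform prints in its error message:

# Release a stale lock left behind by an interrupted run.
# LOCK_ID is a placeholder; copy the ID from Terraform's lock error output.
terraform force-unlock LOCK_ID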

Step 7: Configure Terraform Backend

The backend configuration connects Terraform to the S3 bucket and DynamoDB table, enabling remote state storage and locking. Once a remote backend is configured, Terraform automatically loads the state file from that backend every time you run plan or apply, and automatically writes the updated state back after each apply, so there's no chance of manual error:

terraform {
  backend "s3" {
    bucket         = "otu-bucket-state"
    key            = "global/s3/terraform.tfstate"
    region         = "us-east-1"
    dynamodb_table = "state-locks"
    encrypt        = true
  }
}
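
Note that the S3 bucket and DynamoDB table must exist before this backend can be used, so a common workflow is to create them with local state first and then migrate. A sketch of that bootstrap, using standard Terraform commands:

# 1. With the backend block not yet added, create the bucket and table
#    while the state is still stored locally.
terraform init
terraform apply

# 2. Add the backend "s3" block shown above, then re-initialize and let
#    Terraform copy the existing local state into the remote backend.
terraform init -migrate-state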

Step 8: Output Key Resource Information

To easily reference key resources, outputs for the S3 bucket ARN and DynamoDB table name are defined:

output "s3_bucket_arn" {
  value       = aws_s3_bucket.terraform_state.arn
  description = "The ARN of the S3 bucket"
}

output "dynamodb_table_name" {
  value       = aws_dynamodb_table.terraform_locks.name
  description = "The name of the DynamoDB table"
}

These outputs will print the Amazon Resource Name (ARN) of your S3 bucket and the name of your DynamoDB table. Run terraform apply to see them.

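The same values can also be read back at any time with terraform output, which queries the state directly instead of re-running a plan:

# After terraform apply, print individual outputs straight from the state.
terraform output s3_bucket_arn
terraform output dynamodb_table_name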

How Each Step Aligns with Best Practices

  1. Remote State Storage: The S3 bucket ensures the state file is stored securely and is accessible to the whole team (see the remote-state sketch after this list).
  2. State Locking: The DynamoDB table prevents simultaneous state modifications, avoiding conflicts.
  3. Versioning: S3 versioning allows recovery from accidental state file overwrites. You can see the versioned state files stored under the global/s3/ key prefix we configured.
  4. Encryption: AES256 encryption secures the state file at rest.
  5. Access Control: Public access to the S3 bucket is restricted to protect sensitive data.
  6. Lifecycle Management: The prevent_destroy rule ensures critical resources are not accidentally deleted.
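
To illustrate the collaboration point, other Terraform configurations can read this shared state through the terraform_remote_state data source. A minimal sketch, assuming the bucket and key from this walkthrough and the outputs defined in Step 8:

# Read the shared state written by the configuration above.
data "terraform_remote_state" "global" {
  backend = "s3"
  config = {
    bucket = "otu-bucket-state"
    key    = "global/s3/terraform.tfstate"
    region = "us-east-1"
  }
}

# Reference an output exposed by that state (s3_bucket_arn comes from Step 8).
output "shared_bucket_arn" {
  value = data.terraform_remote_state.global.outputs.s3_bucket_arn
}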

Benefits of this Implementation

  • Security: Protects sensitive information in the state file.
  • Collaboration: Enables teams to work on the same Terraform project without conflicts.
  • Resilience: Ensures recovery from accidental modifications or deletions.

By following these steps and best practices, you can effectively manage Terraform state, enhancing the security, collaboration, and reliability of your infrastructure.
