This article is a follow-up to my previous post, How to Build and Test a PDF Generator Lambda Using LocalStack on Your Local Machine.
In that article, I walked you through how to build and test a PDF generator Lambda function locally using mock AWS services—AWS Lambda, Elastic Container Registry (ECR), and S3 buckets—via LocalStack. We used the `awslocal` CLI to provision the required infrastructure.
In this follow-up, we'll take a different approach. Instead of using the CLI, we'll use Terraform, an Infrastructure as Code (IaC) tool, to provision our resources. This guide will be simple and straightforward, so basic Terraform experience is all you need to follow along.
If you haven’t read the previous article yet, I highly recommend checking it out first as it covers the PDF generation logic itself, which we won’t be repeating here.
Let's get started!
Prerequisites
Before proceeding, ensure you have the following installed:
- Docker
- Python 3.9+
- LocalStack account
- LocalStack CLI
- LocalStack Desktop
- Terraform
1. Restart the LocalStack Container
To ensure a clean environment and avoid any conflicts with previously provisioned services, you need to restart your LocalStack container. This step is important because it allows us to reuse the resource names used in the last article.
You can terminate the currently running container by pressing CTRL + C, or stop it directly from the LocalStack Desktop application. Then restart the container by running `DEBUG=1 localstack start`.
2. Install the terraform-local package
In the root directory named `localstack`, activate your virtual environment if it is not already active.
Then install the `terraform-local` package, a wrapper script that allows you to use Terraform to deploy resources to your LocalStack environment. This tool simplifies using Terraform with LocalStack, providing smooth integration between the two.
pip install terraform-local
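Under the hood, `tflocal` works by generating a temporary provider override that points every AWS API call at LocalStack's edge endpoint (port 4566 by default) with dummy credentials. If you ever wanted to skip the wrapper, a roughly equivalent manual provider configuration would look like the sketch below — the exact service list and the `test`/`test` credentials here are illustrative, not something `tflocal` requires you to write:

```hcl
provider "aws" {
  region     = "eu-west-1"
  access_key = "test" # LocalStack accepts dummy credentials
  secret_key = "test"

  # Point the services we use at the LocalStack edge endpoint
  endpoints {
    s3     = "http://localhost:4566"
    ecr    = "http://localhost:4566"
    lambda = "http://localhost:4566"
    iam    = "http://localhost:4566"
    sts    = "http://localhost:4566"
  }

  # Skip validations that would otherwise call real AWS
  skip_credentials_validation = true
  skip_metadata_api_check     = true
  skip_requesting_account_id  = true
  s3_use_path_style           = true
}
```

Using `tflocal` spares us from maintaining this boilerplate ourselves, which is why we stick with the wrapper for the rest of the tutorial.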
3. Creating Infrastructure
Next, create a new folder called `infrastructure`. This folder will house the following files:
locals.tf
versions.tf
main.tf
scripts/build_and_push_lambda_image.sh
Let's take a closer look at each of these files and their roles in setting up the infrastructure.
1. locals.tf
```hcl
locals {
  aws_region      = "eu-west-1"
  pdf_bucket_name = "my-pdf-bucket"
  pdf_ecr_name    = "pdf-generator-image"

  pdf_generator_lambda = {
    function_name = "pdf-generator-lambda"
  }
}
```
In the `locals.tf` file, we define local values that will be used throughout our Terraform configuration: the AWS region, the name of the S3 bucket, the name of the ECR repository, and the Lambda function name.
2. versions.tf
```hcl
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.59.0"
    }
    random = {
      source  = "hashicorp/random"
      version = "~> 3.6.2"
    }
  }
}

provider "aws" {
  region = local.aws_region
}
```
In the `versions.tf` file, we specify the required providers for our Terraform configuration: the AWS provider (`hashicorp/aws`) to interact with AWS services, and the random provider (`hashicorp/random`) to generate random values such as unique IDs. We also configure the AWS provider to use the region defined in `locals.tf` (`local.aws_region`). This file ensures that Terraform uses the correct providers and versions when provisioning infrastructure.
3. main.tf
- S3 Bucket
```hcl
# Generate random string for the bucket name
resource "random_id" "pdf_bucket_suffix" {
  byte_length = 6
}

# S3 bucket for storing the generated PDF files
resource "aws_s3_bucket" "pdf_bucket" {
  bucket        = "${local.pdf_bucket_name}-${random_id.pdf_bucket_suffix.hex}"
  force_destroy = true

  lifecycle {
    prevent_destroy = false
  }

  tags = {
    Name = local.pdf_bucket_name
  }
}
```
We first create a `random_id` resource to generate a random string, which is appended to the S3 bucket name to ensure it is unique; `byte_length = 6` controls the length of the random value. The `aws_s3_bucket` resource then builds the bucket name from the local value (`local.pdf_bucket_name`) combined with the random suffix. We also set `force_destroy = true`, which allows the bucket to be deleted even if it still contains objects. Finally, the `lifecycle` block's `prevent_destroy = false` setting ensures that Terraform won't block the destruction of the bucket if needed.
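Because the bucket name now carries a random suffix, it can be handy to print the final name after deployment instead of hunting for it in the state. A small optional addition (an `outputs.tf` file of my own, not part of the original setup) would look like this:

```hcl
# outputs.tf (optional): print the generated bucket name after apply
output "pdf_bucket_name" {
  value = aws_s3_bucket.pdf_bucket.bucket
}
```

After `tflocal apply` completes, the output appears at the end of the run and can also be read back with `tflocal output pdf_bucket_name`.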
- Elastic Container Registry
```hcl
# ECR repository for storing the Docker images
resource "aws_ecr_repository" "pdf_generator_ecr" {
  name = local.pdf_ecr_name
}
```
The `aws_ecr_repository` resource creates an ECR repository where the Docker image for the Lambda function will be stored. The repository name comes from the local value `local.pdf_ecr_name`.
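One optional tweak worth knowing about: by default, an ECR repository that still contains images cannot be destroyed. Since this is a throwaway local environment, you could add the `force_delete` argument (available in recent AWS provider versions) so that `tflocal destroy` tears everything down cleanly — a sketch, not something the original setup requires:

```hcl
resource "aws_ecr_repository" "pdf_generator_ecr" {
  name         = local.pdf_ecr_name
  force_delete = true # allow destroy even after images have been pushed
}
```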
- Build and Push Image to ECR
```hcl
# Run your existing image push script using null_resource
resource "null_resource" "run_build_and_push_image_script" {
  provisioner "local-exec" {
    command = "bash ${path.module}/scripts/build_and_push_lambda_image.sh"

    environment = {
      ECR_REPO_URI = aws_ecr_repository.pdf_generator_ecr.repository_url
    }
  }

  depends_on = [
    aws_ecr_repository.pdf_generator_ecr # Ensure the repository exists before running the script
  ]
}
```
Since we cannot create a Lambda function without an image, we need the image to exist in the registry before the function itself is created. We handle this with a `null_resource` block that calls our script `scripts/build_and_push_lambda_image.sh`, which builds the Docker image for our PDF generator function and pushes it to the ECR repository defined in the configuration. We'll add content to the script after completing the `main.tf` file.

In this `null_resource` block, we pass the `ECR_REPO_URI` environment variable to the `build_and_push_lambda_image.sh` script, which allows it to dynamically use the correct ECR repository URL when pushing the built Docker image. The `depends_on` list ensures that the script runs only after the ECR repository (`aws_ecr_repository.pdf_generator_ecr`) is created, preventing any attempt to push an image to a non-existent repository.
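One caveat with this pattern: a `null_resource` runs its provisioner only once, when the resource is first created. If you later change the Lambda's source code, Terraform will not rebuild and re-push the image on its own. A common workaround (sketched below; `timestamp()` is just one possible trigger, a hash of the source files would be more targeted) is to add a `triggers` map whose value changes whenever you want the script to re-run:

```hcl
resource "null_resource" "run_build_and_push_image_script" {
  # Re-run the provisioner on every apply; timestamp() changes each run.
  # For a more targeted trigger, hash the Lambda source files instead.
  triggers = {
    always_run = timestamp()
  }

  # ... provisioner and depends_on as shown earlier ...
}
```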
Disclaimer: The approach used in this tutorial is for educational purposes only. While Terraform is a powerful tool for provisioning infrastructure, using it to build and push Docker images is not advisable for production environments. In production, it's best to use a dedicated CI/CD pipeline to manage your application artifacts.
- IAM role
```hcl
# IAM Role for Lambda Execution
resource "aws_iam_role" "pdf_generator_lambda_role" {
  name = "ExecRole-${local.pdf_generator_lambda.function_name}"

  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Action = "sts:AssumeRole"
        Effect = "Allow"
        Sid    = ""
        Principal = {
          Service = "lambda.amazonaws.com"
        }
      },
    ]
  })
}
```
The `aws_iam_role.pdf_generator_lambda_role` IAM role is what the Lambda function assumes at execution time. Its `assume_role_policy` is a trust policy that allows the `lambda.amazonaws.com` service principal to assume the role via `sts:AssumeRole`.
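Note that LocalStack's Community edition does not actually enforce IAM, so this trust policy alone is enough for our local test. On real AWS, however, the function would also need explicit permissions to write the generated PDFs to the bucket (and typically to emit CloudWatch logs). A hedged sketch of what you might attach — the policy name here is my own invention:

```hcl
# Allow the function to write generated PDFs into our bucket
resource "aws_iam_role_policy" "pdf_generator_s3_write" {
  name = "pdf-generator-s3-write"
  role = aws_iam_role.pdf_generator_lambda_role.id

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Effect   = "Allow"
        Action   = ["s3:PutObject"]
        Resource = "${aws_s3_bucket.pdf_bucket.arn}/*"
      }
    ]
  })
}
```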
- Lambda function
```hcl
# Lambda Function for PDF Generation
resource "aws_lambda_function" "pdf_generator" {
  function_name = local.pdf_generator_lambda.function_name
  role          = aws_iam_role.pdf_generator_lambda_role.arn
  package_type  = "Image"
  # The build script pushes the default :latest tag
  image_uri     = "${aws_ecr_repository.pdf_generator_ecr.repository_url}:latest"
  timeout       = 60
  memory_size   = 128
  architectures = ["arm64"]

  environment {
    variables = {
      S3_BUCKET_NAME = aws_s3_bucket.pdf_bucket.bucket
    }
  }

  depends_on = [
    aws_ecr_repository.pdf_generator_ecr,         # Ensure the ECR repo exists before pushing the image
    null_resource.run_build_and_push_image_script # Ensure the script runs before Lambda is created
  ]
}
```
Here we define the `pdf_generator` Lambda function, specifying the function name, IAM role, and the container image from the ECR repository. The Lambda is allocated 128 MB of memory and a 60-second timeout, and uses the `arm64` architecture. We also set an environment variable holding the S3 bucket name. The `depends_on` list ensures that the Lambda is created only after the ECR repository exists and the Docker image has been built and pushed.
Note: If your local development machine or the environment where you're building the Lambda function uses a different architecture, such as `x86_64` (common for many Linux or Windows machines), change the `architectures` value from `arm64` to `x86_64`.
4. scripts/build_and_push_lambda_image.sh
```bash
#!/bin/bash
set -euo pipefail

# ECR_REPO_URI is passed in as an environment variable by the null_resource block
echo "ECR URI: $ECR_REPO_URI"

# Resolve the directory this script lives in
SCRIPT_DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"

# Navigate to the lambda function directory
echo "Navigating to lambda function directory..."
cd "$SCRIPT_DIR/../../pdf-generator-lambda"
echo "Current directory: $(pwd)"

# Build the docker image
echo "Building the docker image..."
docker build --no-cache -t lambda-container-image .

# Tag the image with the ECR repository URI
echo "Tagging the docker image with the ECR repository URI..."
docker tag lambda-container-image "$ECR_REPO_URI"

# Push the image to the ECR repository
echo "Pushing the docker image to the ECR repository..."
docker push "$ECR_REPO_URI"

echo "Script execution completed."
```
This script builds the PDF generator Lambda function as a Docker image, tags it, and pushes it to the specified ECR repository. The `ECR_REPO_URI` value is passed in as an environment variable from the `null_resource` block in the `main.tf` file.
This completes the necessary Terraform configuration. Before we create the infrastructure using the installed `tflocal` wrapper, let's make a small change to the `upload_file.py` script that handles uploading the generated PDF file to the S3 bucket. Previously, we hardcoded the bucket name; now we'll read it from an environment variable instead.
upload_file.py
import os
.....
S3_BUCKET_NAME = os.environ.get("S3_BUCKET_NAME")
.....
Import the `os` module and modify the line above so that the script reads the bucket name from the `S3_BUCKET_NAME` environment variable instead of relying on a hardcoded value.
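If you want the function to fail fast with a clear error when the variable is missing (rather than passing `None` through to the S3 client later), a small helper can wrap the lookup. This is a sketch of my own, the original script simply reads the variable directly:

```python
import os


def get_bucket_name() -> str:
    """Read the target S3 bucket name from the environment.

    Raises a descriptive error instead of letting a missing variable
    surface later as a confusing upload failure.
    """
    name = os.environ.get("S3_BUCKET_NAME")
    if not name:
        raise RuntimeError("S3_BUCKET_NAME environment variable is not set")
    return name
```

Since Terraform injects `S3_BUCKET_NAME` into the Lambda's environment, the helper resolves the freshly generated, random-suffixed bucket name at runtime without any code changes between deployments.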
Now we are ready to create our infrastructure. From inside the `infrastructure` directory, run:
tflocal init
This will initialize Terraform, download the required providers, and generate its supporting files. Next, run:
tflocal plan
This will print an overview of all the AWS resources that will be provisioned in the active LocalStack container. You should see a line similar to: `Plan: 6 to add, 0 to change, 0 to destroy.`
Finally, run
tflocal apply
to create the resources. This will display an overview similar to the `tflocal plan` output and ask you to confirm whether you want to proceed. Simply type `yes`, and all your resources will be created in LocalStack. As a sanity check, you can use either the AWS CLI or LocalStack Desktop to verify that the S3 bucket, ECR repository, and Lambda function have been created.
Now, let's invoke our Lambda function. Navigate back to the `pdf-generator-lambda` directory and run the following command:
awslocal lambda invoke --function-name pdf-generator-lambda --payload file://event.json /tmp/lambda.out
You should receive a success message like this:
```json
{
  "StatusCode": 200,
  "ExecutedVersion": "$LATEST"
}
```
You can confirm that the PDF was successfully generated and stored in the S3 bucket by navigating to the bucket via the LocalStack Desktop and downloading the PDF.
Congratulations! 🎉
You’ve successfully set up and tested your infrastructure using Terraform and LocalStack. Thank you for reading!