Deploying Amazon API Gateway and Lambda with Terraform

AWS provides multiple ways to deploy infrastructure and code onto their platform, with differing levels of complexity and scalability.

One of the most effective and safest ways is to use Infrastructure as Code (IaC): defining the required resources in a repeatable template that lets you deploy your infrastructure and keep it updated, while tracking any changes to ensure the environment stays the way you need it.

AWS provides CloudFormation as its native IaC tool; however, this post will concentrate on Terraform. Each IaC tool has its advantages and disadvantages, as do others like the CDK, the Serverless Framework, Pulumi and more.

I've come to prefer Terraform over CloudFormation for its more programming-like features, though I still use CloudFormation in some cases. I also use the CDK in some situations; it depends on the nature of what you're deploying.

Architecture

In this example, we'll be deploying an Amazon API Gateway with links to a Lambda Function.

The files for deploying this example are available on my GitHub:
https://github.com/uzusan/apigateway-tf-blogpost

This Lambda will emulate typical CRUD operations (Create, Read, Update, and Delete) as if it were connected to a database, mapping the operations to the HTTP methods POST, GET, PUT and DELETE respectively.

Each request into the API Gateway is routed to the Lambda function (these could be separate Lambdas, one per operation) to perform the specific operation. In this example, it will just return a JSON object with a text string showing what would be done in a full integration.

API Gateway to Lambda with CRUD architecture

Below is the Lambda Function we will be deploying:

import json
from datetime import datetime

def handler(event, context):
    # Get the HTTP method from the event
    http_method = event['httpMethod']

    # Prepare the response based on the HTTP method
    message_map = {
        'GET': 'This would GET (read) an item from the database',
        'POST': 'This would POST (create) a new item in the database',
        'PUT': 'This would PUT (update) an existing item in the database',
        'DELETE': 'This would DELETE an item from the database'
    }

    message = message_map.get(http_method, 'Unsupported HTTP method')

    # Return the response
    return {
        'statusCode': 200,
        'headers': {
            'Content-Type': 'application/json'
        },
        'body': json.dumps({
            'message': message,
            'method': http_method,
            'timestamp': datetime.now().isoformat()
        })
    }

API Gateway

The API Gateway integrates with the Lambda via a proxy integration.

API Gateways are made up of three levels: the stage (Production in this case), the resources, and then the method integrations.

In this example, we send requests to the Production stage (you could also host dev or test stages on the same API, for example), access the items resource (our main endpoint for dealing with our expected database), and then use a separate method integration for each request type to interact with that resource.

API Gateway Detail

Terraform

In this example, we'll be deploying the Terraform from our local machine; in a separate blog post, I'll detail how to set up a CI/CD pipeline to deploy this automatically via CodePipeline and CodeBuild.

To start we'll need the following installed locally:

  • AWS CLI v2
  • Terraform

In this example, we'll also need to pass in an existing S3 bucket to use for the Terraform state file, allowing us to keep the state in a centralised place. This example doesn't cover state locking with DynamoDB or the more recent native S3 state locking; it just uses a single state file, which is sufficient for our purposes if only one developer is working with it at a time.
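
If you later want state locking, a minimal sketch of what the backend block could look like with DynamoDB-based locking is below. The table name is a placeholder for a table you'd create separately with a string partition key named LockID, and this block is not what we use in this post (the actual backend block for this example appears in the next section):

# Sketch only, not used in this example: an S3 backend with DynamoDB state locking.
# "terraform-locks" is a placeholder table with a string partition key "LockID".
backend "s3" {
  bucket         = "BUCKETNAME_TO_BE_REPLACED"
  key            = "api-lambda/terraform.tfstate"
  region         = "eu-west-1"
  encrypt        = true
  dynamodb_table = "terraform-locks"
}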

We'll use the following files from the GitHub repo https://github.com/uzusan/apigateway-tf-blogpost:

  • index.py
  • lambda_function.zip (a zip file containing the above index.py; see the sketch after this list for an alternative way to build it)
  • main.tf
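
As an aside, rather than zipping index.py by hand, Terraform can also build the package for you with the archive_file data source from the hashicorp/archive provider. This is just a sketch of that alternative and isn't used in this example's main.tf:

# Sketch only, not part of this example: build lambda_function.zip from index.py
# at plan/apply time (requires the hashicorp/archive provider).
data "archive_file" "lambda_zip" {
  type        = "zip"
  source_file = "${path.module}/index.py"
  output_path = "${path.module}/lambda_function.zip"
}

# The Lambda resource could then reference the generated archive:
#   filename         = data.archive_file.lambda_zip.output_path
#   source_code_hash = data.archive_file.lambda_zip.output_base64sha256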

Terraform main.tf

For this example, I'll keep everything in one file to make it easier to understand, but typically you would have separate variables, outputs and maybe a backend file. For more info on Terraform best practices, HashiCorp have a good guide here: https://developer.hashicorp.com/terraform/language/modules/develop/structure
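
As a rough sketch of what splitting things out might look like (the file and variable names here are illustrative, not what this example uses):

# variables.tf (sketch only): values that would otherwise be hard-coded in main.tf
variable "aws_region" {
  description = "Region to deploy into"
  type        = string
  default     = "eu-west-1"
}

variable "lambda_function_name" {
  description = "Name of the CRUD Lambda function"
  type        = string
  default     = "crud_operations"
}

# outputs.tf (sketch only): values to surface after apply
output "api_url" {
  value = "${aws_api_gateway_stage.crud_stage.invoke_url}/items"
}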

Providers and Backend

First we set up the AWS provider, to allow us to create resources on AWS, and our S3 backend, so that the state file tracking the state of our resources can be stored in S3:

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 4.0"
    }
  }

  # S3 backend configuration
  backend "s3" {
    bucket = "BUCKETNAME_TO_BE_REPLACED"
    key    = "api-lambda/terraform.tfstate"
    region = "eu-west-1"
    encrypt = true
  }
}

The BUCKETNAME_TO_BE_REPLACED should be replaced with a suitable S3 bucket (this can also be created via a separate Terraform file). Our key is just a unique name for the state file, and in this case I've set the region to eu-west-1, which I'll be using throughout.
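
If you don't already have a state bucket, a minimal sketch of creating one in a separate Terraform configuration (applied first, with its own local state) could look like the below; the bucket name is just a placeholder and must be globally unique:

# Sketch only: a separate configuration to create the Terraform state bucket.
provider "aws" {
  region = "eu-west-1"
}

resource "aws_s3_bucket" "tf_state" {
  bucket = "my-terraform-state-bucket-name" # placeholder, must be globally unique
}

# Enable versioning so earlier state files can be recovered if needed.
resource "aws_s3_bucket_versioning" "tf_state" {
  bucket = aws_s3_bucket.tf_state.id

  versioning_configuration {
    status = "Enabled"
  }
}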

Next we have to set up our IAM permissions, including a role which the Lambda will assume, and a policy attachment to attach the AWSLambdaBasicExecutionRole managed policy to our new role:

# IAM role for Lambda
resource "aws_iam_role" "lambda_role" {
  name = "crud_lambda_role"

  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Action = "sts:AssumeRole"
        Effect = "Allow"
        Principal = {
          Service = "lambda.amazonaws.com"
        }
      }
    ]
  })
}

# IAM policy for CloudWatch Logs
resource "aws_iam_role_policy_attachment" "lambda_logs" {
  role       = aws_iam_role.lambda_role.name
  policy_arn = "arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole"
}

The aws_iam_role resource type creates an IAM role for us, with an assume role policy that allows the Lambda service to assume it. Next we create an aws_iam_role_policy_attachment to attach the AWSLambdaBasicExecutionRole managed policy, which allows the Lambda to run and to write logs to CloudWatch.
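
If you'd rather not use the AWS managed policy, a more tightly scoped inline policy is one alternative. This is only a sketch and isn't part of this example:

# Sketch only, not used in this example: scope the logging permissions explicitly
# instead of attaching the AWSLambdaBasicExecutionRole managed policy.
resource "aws_iam_role_policy" "lambda_logging" {
  name = "crud_lambda_logging"
  role = aws_iam_role.lambda_role.id

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Effect = "Allow"
        Action = [
          "logs:CreateLogGroup",
          "logs:CreateLogStream",
          "logs:PutLogEvents"
        ]
        Resource = "arn:aws:logs:*:*:*"
      }
    ]
  })
}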

Next we deploy the Lambda using the aws_lambda_function resource. This uses a local file (lambda_function.zip) as the deployment package, and the source_code_hash attribute gives Terraform a hash of that file so it knows when the code has changed and needs to be re-uploaded.

We also use the ARN (Amazon Resource Name) of the lambda_role we just created to give the function its execution role.

We use the handler attribute to point the Lambda's execution entry point at our handler function: index.handler refers to the index.py Python file and the handler function, def handler(event, context):, defined in it.

# Lambda function
resource "aws_lambda_function" "crud_lambda" {
  filename          = "lambda_function.zip"
  function_name     = "crud_operations"
  role              = aws_iam_role.lambda_role.arn
  handler           = "index.handler"
  runtime           = "python3.9"

  source_code_hash = filebase64sha256("lambda_function.zip")
}

Next we create our API Gateway, using the aws_api_gateway_rest_api resource type:

# API Gateway
resource "aws_api_gateway_rest_api" "crud_api" {
  name        = "crud-api"
  description = "CRUD API Gateway"
}

We then create the items resource using the aws_api_gateway_resource resource type:

# API Gateway resource
resource "aws_api_gateway_resource" "items" {
  rest_api_id = aws_api_gateway_rest_api.crud_api.id
  parent_id   = aws_api_gateway_rest_api.crud_api.root_resource_id
  path_part   = "items"
}

We pass in the REST API we just created, set parent_id to the root resource (we can use this to nest resources by pointing at another resource's ID instead of the root), and set the URL path part to items.
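
As an illustration of that nesting (a sketch only, not part of this deployment), a child resource such as /items/{id} would simply use the items resource as its parent instead of the API root:

# Sketch only, not part of this example: nest an /items/{id} resource under /items.
resource "aws_api_gateway_resource" "item_id" {
  rest_api_id = aws_api_gateway_rest_api.crud_api.id
  parent_id   = aws_api_gateway_resource.items.id
  path_part   = "{id}"
}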

We then want to set up our methods for GET, POST, PUT and DELETE, using the aws_api_gateway_method resource:

# GET method
resource "aws_api_gateway_method" "get" {
  rest_api_id   = aws_api_gateway_rest_api.crud_api.id
  resource_id   = aws_api_gateway_resource.items.id
  http_method   = "GET"
  authorization = "NONE"
}

# POST method
resource "aws_api_gateway_method" "post" {
  rest_api_id   = aws_api_gateway_rest_api.crud_api.id
  resource_id   = aws_api_gateway_resource.items.id
  http_method   = "POST"
  authorization = "NONE"
}

# PUT method
resource "aws_api_gateway_method" "put" {
  rest_api_id   = aws_api_gateway_rest_api.crud_api.id
  resource_id   = aws_api_gateway_resource.items.id
  http_method   = "PUT"
  authorization = "NONE"
}

# DELETE method
resource "aws_api_gateway_method" "delete" {
  rest_api_id   = aws_api_gateway_rest_api.crud_api.id
  resource_id   = aws_api_gateway_resource.items.id
  http_method   = "DELETE"
  authorization = "NONE"
}

For each, we pass in the API Gateway ID, the items resource ID and the method to be used (GET, PUT, etc.).

For each of these methods, we then need to set up a proxy integration to route calls to the Lambda (these all go to the same Lambda but could go to different Lambda functions), using the aws_api_gateway_integration resource type:

# Lambda integration for GET
resource "aws_api_gateway_integration" "lambda_get" {
  rest_api_id = aws_api_gateway_rest_api.crud_api.id
  resource_id = aws_api_gateway_resource.items.id
  http_method = aws_api_gateway_method.get.http_method

  integration_http_method = "POST"
  type                   = "AWS_PROXY"
  uri                    = aws_lambda_function.crud_lambda.invoke_arn
}

# Lambda integration for POST
resource "aws_api_gateway_integration" "lambda_post" {
  rest_api_id = aws_api_gateway_rest_api.crud_api.id
  resource_id = aws_api_gateway_resource.items.id
  http_method = aws_api_gateway_method.post.http_method

  integration_http_method = "POST"
  type                   = "AWS_PROXY"
  uri                    = aws_lambda_function.crud_lambda.invoke_arn
}

# Lambda integration for PUT
resource "aws_api_gateway_integration" "lambda_put" {
  rest_api_id = aws_api_gateway_rest_api.crud_api.id
  resource_id = aws_api_gateway_resource.items.id
  http_method = aws_api_gateway_method.put.http_method

  integration_http_method = "POST"
  type                   = "AWS_PROXY"
  uri                    = aws_lambda_function.crud_lambda.invoke_arn
}

# Lambda integration for DELETE
resource "aws_api_gateway_integration" "lambda_delete" {
  rest_api_id = aws_api_gateway_rest_api.crud_api.id
  resource_id = aws_api_gateway_resource.items.id
  http_method = aws_api_gateway_method.delete.http_method

  integration_http_method = "POST"
  type                   = "AWS_PROXY"
  uri                    = aws_lambda_function.crud_lambda.invoke_arn
}

For each method, we pass in the REST API ID, the items resource ID and the HTTP method used (we take this from each of the aws_api_gateway_method resources we just set up).

Each integration uses POST as its integration_http_method because API Gateway always invokes Lambda functions with a POST request; the original method is passed over as part of the event payload (which you can see in index.py, where we extract it with http_method = event['httpMethod']). We also set the integration type to AWS_PROXY and pass in the Lambda's URI, which we can get as the invoke_arn of the function we created earlier.

Next we need to allow the API Gateway and Lambda to talk to each other:


# Lambda permission for API Gateway
resource "aws_lambda_permission" "api_gw" {
  statement_id  = "AllowAPIGatewayInvoke"
  action        = "lambda:InvokeFunction"
  function_name = aws_lambda_function.crud_lambda.function_name
  principal     = "apigateway.amazonaws.com"
  source_arn    = "${aws_api_gateway_rest_api.crud_api.execution_arn}/*/*"
}

Here we allow API Gateway to invoke the Lambda we've created: we pass in the specific function name and only allow the principal (the user/system allowed to use this permission) to be apigateway.amazonaws.com. We also restrict the source to the REST API we just created (the /*/* wildcard covers all stages, methods and resources; we could restrict this to a dev stage, for example, with /dev/*).
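
As a sketch of a tighter permission (not used in this example), the source_arn can also name a specific stage, method and resource path, for example only allowing GET requests on the prod stage's /items resource:

# Sketch only, not part of this example: restrict invocation to GET on prod /items.
resource "aws_lambda_permission" "api_gw_get_items_only" {
  statement_id  = "AllowAPIGatewayInvokeGetItems"
  action        = "lambda:InvokeFunction"
  function_name = aws_lambda_function.crud_lambda.function_name
  principal     = "apigateway.amazonaws.com"
  source_arn    = "${aws_api_gateway_rest_api.crud_api.execution_arn}/prod/GET/items"
}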

Finally we create an API Gateway deployment, then attach a stage to that deployment, using the aws_api_gateway_deployment and aws_api_gateway_stage resource types:

# API Gateway deployment
resource "aws_api_gateway_deployment" "crud_deployment" {
  depends_on = [
    aws_api_gateway_integration.lambda_get,
    aws_api_gateway_integration.lambda_post,
    aws_api_gateway_integration.lambda_put,
    aws_api_gateway_integration.lambda_delete
  ]

  rest_api_id = aws_api_gateway_rest_api.crud_api.id
}

# API Gateway stage
resource "aws_api_gateway_stage" "crud_stage" {
  deployment_id = aws_api_gateway_deployment.crud_deployment.id
  rest_api_id   = aws_api_gateway_rest_api.crud_api.id
  stage_name    = "prod"
}

In aws_api_gateway_deployment we technically only need to pass in the ID of the REST API to be deployed, but here we also add a depends_on block to ensure each of the aws_api_gateway_integration resources is created before we deploy the API, so we don't deploy with empty integrations.
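
One thing to be aware of (not needed for this first deployment, so it's only a sketch here) is that a deployment is a point-in-time snapshot: if you later change the methods or integrations, the aws_api_gateway_deployment won't be recreated on its own. A common pattern is to add a triggers argument that hashes the relevant resources so any change forces a new deployment:

# Sketch only: force a new deployment whenever the listed resources change.
resource "aws_api_gateway_deployment" "crud_deployment" {
  rest_api_id = aws_api_gateway_rest_api.crud_api.id

  triggers = {
    redeployment = sha1(jsonencode([
      aws_api_gateway_resource.items.id,
      aws_api_gateway_method.get.id,
      aws_api_gateway_integration.lambda_get.id
    ]))
  }

  lifecycle {
    create_before_destroy = true
  }
}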

We then use aws_api_gateway_stage to create our production stage, passing in the REST API and the deployment we just created.

Finally we output the API Gateway URL so we can do some testing:

# Output the API Gateway URL
output "api_url" {
  value = "${aws_api_gateway_stage.crud_stage.invoke_url}/items"
}

Deploying the Terraform

Now that we have the main.tf file created, and our index.py file for the Lambda, we can deploy to AWS.

The first step is to ensure we have the AWS CLI (v2) and Terraform installed. I won't cover these here, but once you've confirmed you can communicate with AWS via the CLI (use aws configure to check your credentials are correct if using access keys), run the following:

terraform init

This should then initialise Terraform, and you should see output similar to the below (make sure you see the S3 backend message; if it's not present, you may not have set up your AWS CLI or backend configuration correctly):

Initializing the backend...

Successfully configured the backend "s3"! Terraform will automatically
use this backend unless the backend configuration changes.
Initializing provider plugins...
- Reusing previous version of hashicorp/aws from the dependency lock file
- Using previously-installed hashicorp/aws v4.67.0

Terraform has been successfully initialized!

You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.

If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.

You can then create a plan for Terraform:

terraform plan

If this is the first time running it, you should see a lot of creation messages such as:

Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # aws_api_gateway_deployment.crud_deployment will be created
  + resource "aws_api_gateway_deployment" "crud_deployment" {
      + created_date  = (known after apply)
      + execution_arn = (known after apply)
      + id            = (known after apply)
      + invoke_url    = (known after apply)
      + rest_api_id   = (known after apply)
    }
....

There will be one of these for each of the resources we are going to deploy. If there are no errors, we can run apply:

terraform apply

After confirming the apply with yes, you should see creation messages:

aws_iam_role.lambda_role: Creating...
aws_api_gateway_rest_api.crud_api: Creating...
aws_api_gateway_rest_api.crud_api: Creation complete after 1s

...

Apply complete! Resources: 16 added, 0 changed, 0 destroyed.

Outputs:

api_url = "URL"

At the end, we will have an API URL output that we can use to test the API.

We can export the API URL as an environment variable to save typing it each time:

export API_URL="https://XXXXX.execute-api.eu-west-1.amazonaws.com/prod/items"

This then allows us to call each method:

GET:

curl -X GET $API_URL

Which returns:

{"message": "This would GET (read) an item from the database", "method": "GET", "timestamp": "2025-02-01T04:23:08.867240"}%

POST:

curl -X POST $API_URL \
  -H "Content-Type: application/json" \
  -d '{"name": "Test Item", "description": "This is a test item"}'

Which returns:

{"message": "This would POST (create) a new item in the database", "method": "POST", "timestamp": "2025-02-01T04:26:15.214840"}%

PUT:

curl -X PUT $API_URL \
  -H "Content-Type: application/json" \
  -d '{"name": "Updated Item", "description": "This item has been updated"}'

Which returns:

{"message": "This would PUT (update) an existing item in the database", "method": "PUT", "timestamp": "2025-02-01T04:28:16.492926"}%

DELETE:

curl -X DELETE $API_URL

Which returns:

{"message": "This would DELETE an item from the database", "method": "DELETE", "timestamp": "2025-02-01T04:28:49.461580"}%

Conclusion

This example can be expanded to deploy other types of resources or Lambda functions via Terraform. In the next blog post, we will take this Terraform and deploy it via CodePipeline with a CodeBuild instance, allowing us to automate running the Terraform whenever a commit is pushed to a GitHub repository.
