
Alejandro Velez for AWS Community Builders


DevSecOps with AWS- IaC at scale - Getting started.

Level 300

Nowadays, the demand for modernizing and migrating applications to new architecture patterns is transforming the traditional way IT resources are managed, and this creates a gap between business requirements and the capacity of IT staff. The adoption of cloud and DevOps/DevSecOps solves some problems, but the need keeps growing, and the industry gap shows that the main problem isn't the agility to deliver new solutions; it is that knowledge and expertise are concentrated in a small team.

Cloud Experts
Figure 1. Cloud Experts and Application users.

On the other hand, there are many tools that improve the developer experience and abstract away the complexity of creating and managing cloud-native applications. Continuous business transformation requires agility and self-service capabilities to reduce the gap between the infrastructure team and the developers' team. The infrastructure automation maturity path suggests the next step is platform engineering; however, adopting this model successfully requires a structured methodology, a mature DevOps/DevSecOps practice, and lean product management.

The ecosystem grows every day. In this series you can learn more about these tools and some best practices, think outside the box, and find examples of how to apply DevSecOps practices to IaC.

For this series, suppose you are a DevSecOps engineer on the team that owns a new application based on microservices orchestrated by Amazon ECS, and you need to deploy the complete infrastructure for the application in multiple environments. The platform team has already defined the tool for IaC deployments, Terraform, and created custom reusable modules. You must follow the standard guidelines, stay compliant, and apply best practices, but you have autonomy to write your code, manage your pipelines, and find the proper way to use the platform team's resources.

There are many questions before beginning this journey:

  • How can I select the proper tool for me?
  • How can we select the proper tool for standardizing the organization's deployments?
  • Is there a universal way to create, manage, and govern IaC?

The first step is selecting the proper tool *based on the organization's requirements*, but also on the engineer's profile. There isn't a single tool that does everything, and a tool you prefer may not make sense for your partners. For example, if you are a developer who is an expert in Python and you need to deploy your app quickly but securely, maybe you don't need to learn another language or syntax such as HCL or YAML templates. In the following diagram you can see a simple flow to select a proper tool based on the decentralized DevOps and embedded DevOps operating models.

This image isn’t an exhaustive tool list.

Operational Models
Figure 2. Operational model and tools.

On the other hand, several other questions arise:

  • What practices should I use for managing version control?
  • Should I use a mono repo for the infrastructure definition or multiple repositories, one for each layer?
  • How can I write less code?
  • How can I keep my code DRY?
  • What are the best practices for managing multiple environments?
  • What is the best structure for the repository?

Let’s do this and keep it simple!
The first step is understanding the architecture pattern for the project. Figure 3 depicts the high-level architecture.

In this series the main topic is the infrastructure rather than the container and orchestration practices.

ECS pattern
Figure 3. High level application architecture.

In this image you can see a multi-Availability Zone deployment in one environment, a DevSecOps account for CI/CD tools using CodeCatalyst, a shared account for ECR images, and GitHub as the repository for the source code.

Hands On

Requirements

• Terragrunt >= 0.48.4
• AWS CLI >= 2.7.0
• Terraform >= 1.5.4
• Terratest >= 7.1.3
• go >=1.20.7
• checkov >= 2.1.229

AWS Services

Amazon Elastic Container Service (Amazon ECS) is a fully managed container orchestration service that helps you easily deploy, manage, and scale containerized applications.
AWS Fargate is a serverless, pay-as-you-go compute engine that lets you focus on building applications without managing servers. AWS Fargate is compatible with both Amazon Elastic Container Service (Amazon ECS) and Amazon Elastic Kubernetes Service (Amazon EKS). Select any OCI-compliant container image, define memory and compute resources, and run the container with serverless compute.
Amazon Virtual Private Cloud (Amazon VPC) gives you full control over your virtual networking environment, including resource placement, connectivity, and security.
Amazon CloudWatch collects and visualizes real-time logs, metrics, and event data in automated dashboards to streamline your infrastructure and application maintenance.
Amazon CodeCatalyst is an integrated service for software development teams adopting continuous integration and deployment practices into their software development process. CodeCatalyst puts the tools you need all in one place. You can plan work, collaborate on code, and build, test, and deploy applications with continuous integration/continuous delivery (CI/CD) tools.

Now, let’s answer the questions:

  • What practices should I use for managing version control?

It is a crucial question. For this scenario, the recommendation for IaC deployments is to use trunk-based development. However, several teams use feature-branch development rather than trunk-based development.

  • Should I use a mono repo for the infrastructure definition or multiple repositories, one for each layer?
    This is an important question, and the answer depends on the team’s structure, the number of resources, metrics such as the time to deploy a new change into an environment, continuous testing practices, and so on. However, to keep it simple, you can manage the entire infrastructure in one repository; this adds complexity, but it can be simplified, as the next blogs will explain.

  • How can I write less code, and how can I keep it DRY?

To accomplish this, you can use a wrapper for Terraform such as Terragrunt or Terramate. For demo purposes the wrapper is Terragrunt; in another post you will explore Terramate.
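As a quick sketch of why this keeps the code DRY, a leaf terragrunt.hcl can stay this small, because the backend and provider configuration are inherited from the root file shown later in this post. The module source, version, and inputs below are illustrative and are not part of this project.

include "root" {
  path = find_in_parent_folders()
}

# Reuse a community module from the Terraform Registry instead of rewriting it
terraform {
  source = "tfr:///terraform-aws-modules/vpc/aws?version=5.1.1"
}

# Only component-specific inputs live here; shared variables are injected by the root configuration
inputs = {
  name = "demo-vpc"
  cidr = "10.0.0.0/16"
}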

  • What are the best practices for managing multiple environments?

This is another common question, and it depends on your preferences. The recommendation is to use Terraform workspaces; however, many teams use a folder structure, others use branches, and others use environment variables to keep the environments isolated while reusing the same code.

For demo purposes, environment variables will be the first approach; later, Terraform workspaces will be implemented.
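For reference, a minimal sketch of the workspace-based alternative looks like the block below; the environment names and values are illustrative, and this pattern is not used in the demo, which relies on the TF_VAR_env variable instead.

# Illustrative only: the current Terraform workspace name selects the per-environment settings
locals {
  env_settings = {
    dev  = { instance_count = 1 }
    prod = { instance_count = 3 }
  }
  # Fall back to the dev settings when the workspace has no entry in the map
  settings = lookup(local.env_settings, terraform.workspace, local.env_settings["dev"])
}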

  • What is the best structure for the repository?

The best structure depends on how you prefer to see the infrastructure code: as application layers, business domain layers, or infrastructure logical resources. For this example, the code will be separated into infrastructure logical resources, as sketched below.
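As an illustration, a layout grouped by infrastructure logical resources could look like the tree below. The Network paths match the dependencies referenced later in this post; the remaining layer names are only an assumption.

infrastructure
├── Network
│   ├── VPC
│   └── SecurityGroups
│       ├── ALB
│       └── ECS
└── ECS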

It’s time to create some code! ✍️

Suppose that the platform team is the open-source community that creates public modules. Be careful to check these modules against your organizational requirements.

This is the final project structure:

.
├── README.md
├── common
├── docs
├── infrastructure
├── modules
└── terragrunt.hcl

4 directories, 2 files


You will find 4 directories and 2 files:

  • common: contains common.tfvars, the environment.hcl file to set up the environment for the project, variables.tf with the description of common.tfvars, and common.hcl with the parameters used for setting up the providers and tfstate configuration for Terragrunt.
  • docs: contains documentation, images, and more.
  • infrastructure: contains the infrastructure definitions grouped by layers.
  • modules: contains local modules, if they are necessary.
  • terragrunt.hcl: the file in the root defines the Terraform CLI behavior, hooks, and generate blocks for the remote state configuration and provider.tf, as shown below.
# Load the shared project settings and the current environment from the common folder
locals {
  common_vars = read_terragrunt_config("${get_parent_terragrunt_dir()}/common/common.hcl")
  environment = read_terragrunt_config("${get_parent_terragrunt_dir()}/common/environment.hcl")
}


terraform {
  extra_arguments "init_arg" {
    commands = ["init"]
    arguments = [
      "-reconfigure"
    ]
    env_vars = {
      TERRAGRUNT_AUTO_INIT = true
    }
  }
  extra_arguments "common_vars" {
    commands = get_terraform_commands_that_need_vars()
    arguments = [
      "-var-file=${get_parent_terragrunt_dir()}/common/common.tfvars"
    ]
  }

}

# Generate a remotebackend.tf file in every module with the S3 backend configuration
remote_state {
  backend = "s3"
  generate = {
    path      = "remotebackend.tf"
    if_exists = "overwrite_terragrunt"
  }
  config = {
    profile              = local.common_vars.locals.backend_profile
    region               = local.common_vars.locals.backend_region
    bucket               = local.common_vars.locals.backend_bucket_name
    key                  = "${local.common_vars.locals.project_folder}/${local.environment.locals.workspace}/${path_relative_to_include()}/${local.common_vars.locals.backend_key}"
    dynamodb_table       = local.common_vars.locals.backend_dynamodb_lock
    encrypt              = local.common_vars.locals.backend_encrypt

  }
}

# Reuse the generate blocks defined in common.hcl (for example, the provider.tf generation)
generate = local.common_vars.generate

You can view the entire project here:

GitHub: velez94 / terragrunt_ecs_blueprint (Terragrunt AWS ECS Fargate Blueprint)

This project defines a classic architecture pattern in AWS using ECS, ALB, Fargate, and network resources. A load balancer routes traffic to containers deployed in an ECS cluster, the containers have access to a Postgres database, and network and security resources complete the stack.


Step by step

  1. Configure the AWS profiles for the backend and deployment accounts. You can use a secret key and access key, but the recommendation is to use AWS IAM Identity Center credentials. You need two profiles, one for the remote state and another for the deployment account, although they could be the same.

  2. Set up the terragrunt.hcl files with your parameters. You only need to modify common.hcl and common.tfvars and set TF_VAR_env.

Terragrunt resolves the value of an option by first looking for the CLI argument, then the environment variable, and finally defaulting to the value specified in the config file.

The file common/common.hcl holds the global properties for the project.

Replace the #{key}# placeholders with your parameters.

# Load variables in locals
locals {
  # Default values for variables
  project  = "#{project}#"

  # Backend Configuration
  backend_profile       = "labvel-devsecops"
  backend_region        = "#{backend_region}#"
  backend_bucket_name   = "#{backend_bucket}#"
  backend_key           = "terraform.tfstate"
  backend_dynamodb_lock = "#{dynamodb_backend}#"
  backend_encrypt       = true
  project_folder = "${local.project}"
}

generate "provider" {
  path      = "provider.tf"
  if_exists = "overwrite_terragrunt"
  contents  = <<EOF
variable "profile" {
  description = "Variable for credentials management."
  type        = map(map(string))
}

variable "env" {
  description = "Environment Value"
  type = string
  default = "default"
}

variable "project" {
  description = "Project name"
  type        = string
}

variable "required_tags" {
  description = "A map of tags to add to all resources"
  type        = map(string)
}

provider "aws" {
  region  = var.profile[var.env]["region"]
  profile = var.profile[var.env]["profile"]

  default_tags {
    tags = var.required_tags
  }
}
EOF
}



The common.tfvars file is passed automatically to each module; it contains the values for the Terraform variables.

# Default values for deployment credentials
# Access profile in your IDE, environment, or pipeline: the IAM profile to use for deployment.
# Don't forget to export the env var --- export TF_VAR_env=dev or another value
profile = {
  default = {
    profile = "labvel-dev"
    region  = "#{backend_region}#"
  }
  "#{environment}#" = {
    profile = "labvel-dev"
    region  = "#{backend_region}#"
  }
}

# Project Variable
project = "#{project}#"

# Project default tags
required_tags = {
  Project   = "#{project}#"
  ManagedBy = "Terragrunt"
  Initiative = "BlogsAmbassador"
}

The file common/environment.hcl defines the value for the environment; this value is captured from the environment variable TF_VAR_env and defaults to dev when the variable is not set.

locals { workspace = get_env("TF_VAR_env", "dev")  }

Finally, common/variables.tf contains the descriptions of the variables; this file is just for informational purposes.

# General variables

# tflint-ignore: terraform_unused_declarations
variable "profile" {
  description = "Variable for credentials management."
  type        = map(map(string))
  default = {
    default = {
      profile = "deploy_account"
      region  = "us-east-1"
    }
    dev = {
      profile = "deploy_account"
      region  = "us-east-1"
    }
  }
}

# tflint-ignore: terraform_unused_declarations
variable "project" {
  description = "Project name"
  type        = string
  default     = "project_name"
}

# tflint-ignore: terraform_unused_declarations
variable "required_tags" {
  description = "A map of tags to add to all resources"
  type        = map(string)
  default = {
    Project   = "project_name"
    Owner     = "owner_name"
    ManagedBy = "Terraform-Terragrunt"
  }
}
# tflint-ignore: terraform_unused_declarations
variable "env" {
  description = "Environment Value"
  type = string
  default = "default"
}

Almost ready!

  3. Understanding the code.

Now, let’s look at a fragment of code that shows relationships and dependencies; the dependency block keeps your code loosely coupled, flexible, and reusable.
For this use case, the security group definition shown in its terragrunt.hcl depends on the VPC resource.

Dependency Example
Figure 4. Dependency example.

The dependency block is used to configure module dependencies. Each dependency block exports the outputs of the target module as block attributes you can reference throughout the configuration. The mock_outputs values are placeholders that Terragrunt uses when the dependency has not been applied yet, for example during a run-all plan or validate.
The locals block is used to define aliases for Terragrunt expressions that can be referenced within the configuration. This block follows the handling-environmental-variables-in-terraform-workspaces approach to select the proper values for each environment.



include "root" {
  path = find_in_parent_folders()
  expose = true
}

dependency "vpc" {
  config_path = "${get_parent_terragrunt_dir("root")}/infrastructure/Network/VPC"
  mock_outputs = {
    vpc_id = "vpc-04e3e1e302f8c8f06"
  }
  mock_outputs_merge_strategy_with_state = "shallow"
}

dependency "sg_alb" {
  config_path = "${get_parent_terragrunt_dir("root")}/infrastructure/Network/SecurityGroups/ALB"
  mock_outputs = {
    security_group_id = "sg-0417d4404d84892fa"
  }
  mock_outputs_merge_strategy_with_state = "shallow"
}

locals {
  # Per-environment settings: "default" holds the full input set and each environment key overrides only what changes
  env = {
    default = {
      create                   = false
      name                     = "${include.root.locals.common_vars.locals.project}-${include.root.locals.environment.locals.workspace}-ecs-sg"
      description              = "Security group for service with HTTP ports open for ALB"

      use_name_prefix          = false
      ingress_with_cidr_blocks = []
      egress_with_cidr_blocks = [
        {
          rule        = "all-tcp"
          cidr_blocks = "0.0.0.0/0"
        }
      ]

      egress_with_source_security_group_id = []
      tags = {
        Environment = include.root.locals.environment.locals.workspace
        Layer       = "Networking"
      }
    }
    # In dev the security group is actually created; every other value comes from default
    "dev" = {
      create = true
    }
  }
  # Use the current workspace if it has an entry in the map, otherwise fall back to "default"
  environment_vars = contains(keys(local.env), include.root.locals.environment.locals.workspace) ? include.root.locals.environment.locals.workspace : "default"
  # Merge the defaults with the selected environment's overrides
  workspace        = merge(local.env["default"], local.env[local.environment_vars])
}
terraform {
  source = "tfr:///terraform-aws-modules/security-group/aws?version=5.1.0"

}
inputs = {
  vpc_id                     = dependency.vpc.outputs.vpc_id
  security_groups_in_service = dependency.sg_alb.outputs.security_group_id


  create          = local.workspace["create"]
  name            = local.workspace["name"]
  description     = local.workspace["description"]
  use_name_prefix = local.workspace["use_name_prefix"]

  ingress_with_cidr_blocks = local.workspace["ingress_with_cidr_blocks"]
  egress_with_cidr_blocks  = local.workspace["egress_with_cidr_blocks"]

  ingress_with_source_security_group_id = [
    {
      from_port                = 80
      to_port                  = 80
      protocol                 = "tcp"
      description              = "Security Groups for inbound traffic to service"
      source_security_group_id = dependency.sg_alb.outputs.security_group_id
    }
  ]
  egress_with_source_security_group_id = local.workspace["egress_with_source_security_group_id"]

  tags = local.workspace["tags"]
}


Finally, you can deploy using the following commands:

$ cd infrastructure
$ TF_VAR_env=dev terragrunt run-all plan
$ TF_VAR_env=dev terragrunt run-all apply

You can see the details in the following recording:

You can view the service in the AWS Console.

AWS Services
Figure 5. AWS console.

For cleanup, just run:

$ cd infrastructure
$ TF_VAR_env=dev terragrunt run-all destroy

In the next post you will learn how to apply SAST, testing, and more using open-source tools and AWS services, and how to follow AWS Well-Architected practices.

Thanks for reading and sharing.

You also can read the following series: How to manage multiple environments with Terraform using Terragrunt
