I’d like to talk about AWS Lambda and the best ways for Gophers to deal with the topic.
Firstly, let's define the stack: an AWS Lambda function with an Application Load Balancer (ALB) as a trigger and some DynamoDB resources as a database. Why ALB and not API Gateway? The answer is simple: AWS API Gateway is not always appropriate. For example, if you need to limit access to your application to the enterprise’s internal network, it is better to set up an ALB. Besides, setting up a gateway is simpler and would take the intrigue out of the narrative! To begin, let’s create a simple "Hello, world" template for our AWS Lambda.
package main

import (
	"net/http"

	"github.com/aws/aws-lambda-go/events"
	"github.com/aws/aws-lambda-go/lambda"
)

// router dispatches the incoming ALB request by HTTP method.
func router(req events.ALBTargetGroupRequest) (events.ALBTargetGroupResponse, error) {
	switch req.HTTPMethod {
	case "GET", "POST":
		return show(req)
	default:
		return clientError(http.StatusMethodNotAllowed)
	}
}

// show returns a plain-text greeting.
func show(req events.ALBTargetGroupRequest) (events.ALBTargetGroupResponse, error) {
	headers := map[string]string{
		"content-type":      "text/plain",
		"x-lambda-response": "true",
	}
	return events.ALBTargetGroupResponse{
		StatusCode:      http.StatusOK,
		Body:            "Hello, world!",
		IsBase64Encoded: false,
		Headers:         headers,
	}, nil
}

// clientError builds an error response with the standard status text.
func clientError(status int) (events.ALBTargetGroupResponse, error) {
	return events.ALBTargetGroupResponse{
		StatusCode: status,
		Body:       http.StatusText(status),
	}, nil
}

func main() {
	lambda.Start(router)
}
Well done! Now we can build our executable and upload it to our AWS account using the web interface.
I suggest creating an Arm64 build, since arm64 (Graviton) Lambdas are cheaper to run.
GOARCH=arm64 GOOS=linux go build -tags lambda.norpc -o bootstrap -ldflags "-s -w" main.go && zip -5 bootstrap.zip bootstrap
Note that we use the modern Amazon Linux 2023 runtime (provided.al2023) instead of the deprecated Go 1.x runtime. Also pay attention to the fact that, with a custom runtime, your executable must be named "bootstrap".
All this sounds good, but our purpose is to automate the deployment! Thus, let’s take a look at the AWS Cloud Development Kit (CDK). Luckily, it supports Golang.
Firstly, install awscli and aws-cdk with their auxiliary dependencies. Do this in whatever way suits your operating system. Let's imagine we are using Ubuntu.
sudo apt install awscli
curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.3/install.sh | bash
source ~/.bashrc
nvm list-remote
nvm install lts/hydrogen # or your preferred version
node --version
npm install -g aws-cdk
cdk --version
Now, go to your project folder and run cdk init app --language go. cdk init will create all the files needed for your deployment, including two *.go files: the first is for your CDK code; the other (named *_test.go) is for tests. So, the tool assumes you might want to add a test stage to your pipeline. The exact file names depend on your directory’s name. Let’s assume your Lambda’s Go code lives in a dedicated src/ subfolder.
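For orientation, the resulting layout could look roughly like this, assuming the project folder is named lambda-dynamodb-alb (the names are illustrative; cdk init derives them from your directory):

```
lambda-dynamodb-alb/
├── cdk.json                      # tells the CDK toolkit how to run the app
├── go.mod
├── lambda-dynamodb-alb.go        # your CDK stack code
├── lambda-dynamodb-alb_test.go   # CDK tests
└── src/
    └── main.go                   # the Lambda handler itself
```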
Now, we can start with the Golang CDK code. Let the code, with some appropriate comments, speak for itself!
package main

import (
	"log"
	"os"

	"github.com/aws/aws-cdk-go/awscdk/v2"
	"github.com/aws/aws-cdk-go/awscdk/v2/awsdynamodb"
	"github.com/aws/aws-cdk-go/awscdk/v2/awsec2"
	"github.com/aws/aws-cdk-go/awscdk/v2/awselasticloadbalancingv2"
	"github.com/aws/aws-cdk-go/awscdk/v2/awselasticloadbalancingv2targets"
	"github.com/aws/aws-cdk-go/awscdk/v2/awsiam"
	"github.com/aws/aws-cdk-go/awscdk/v2/awslambda"
	"github.com/aws/aws-cdk-go/awscdk/v2/awslogs"
	"github.com/aws/aws-cdk-go/awscdklambdagoalpha/v2"
	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/awserr"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/dynamodb"
	"github.com/aws/constructs-go/constructs/v10"
	"github.com/aws/jsii-runtime-go"
)

type LambdaDynamodbAlbStackProps struct {
	awscdk.StackProps
}
func NewLambdaDynamodbAlbStack(scope constructs.Construct, id string, props *LambdaDynamodbAlbStackProps) awscdk.Stack {
	tags := make(map[string]string)
	tags["stackType"] = "Backend Lambda stack"
	tags["CMDBENV"] = "PROD"
	// all your custom tags here
	awsTags := aws.StringMap(tags)
	// Do not hardcode your settings: get them from your CI/CD pipeline.
	// If you do not use CI/CD, define your region and account explicitly.
	awsRegion := os.Getenv("AWS_REGION")
	awsAccount := os.Getenv("AWS_ACCOUNT")
	sprops := &awscdk.StackProps{
		Description: jsii.String("Backend Lambda stack"),
		Tags:        &awsTags,
		Env: &awscdk.Environment{
			Region:  jsii.String(awsRegion),
			Account: jsii.String(awsAccount),
		},
		CrossRegionReferences: jsii.Bool(true),
	}
	// declare the stack
	stack := awscdk.NewStack(scope, &id, sprops)
	// create a role with the AmazonDynamoDBFullAccess managed policy
	applicationRole := awsiam.NewRole(stack, aws.String("BLDynamoDBFullAccessRole"), &awsiam.RoleProps{
		AssumedBy: awsiam.NewServicePrincipal(aws.String("lambda.amazonaws.com"), &awsiam.ServicePrincipalOpts{}),
		RoleName:  jsii.String("BLDynamoDBFullAccessRole"),
		ManagedPolicies: &[]awsiam.IManagedPolicy{
			awsiam.ManagedPolicy_FromManagedPolicyArn(stack, aws.String("AmazonDynamoDBFullAccess"), aws.String("arn:aws:iam::aws:policy/AmazonDynamoDBFullAccess")),
		},
	})
	// add inline policies
	applicationRole.AddToPolicy(awsiam.NewPolicyStatement(&awsiam.PolicyStatementProps{
		Actions: jsii.Strings(
			"ec2:CreateNetworkInterface",
			"ec2:DescribeNetworkInterfaces",
			"ec2:DeleteNetworkInterface", // needed to run the Lambda inside a VPC
			"logs:CreateLogGroup", "logs:CreateLogStream", "logs:PutLogEvents", // needed for logging
		),
		Resources: jsii.Strings("*"),
	}))
	// You need a VPC. Let's assume your organization already has several configured
	// VPCs; we will attach one of them to the project. You also need to list the
	// availability zones, which you can see in the EC2 section of the web console.
	vpcID := "vpc-000000000000001"
	subnets := []*string{jsii.String("subnet-0000000000001"), jsii.String("subnet-0000000000002"), jsii.String("subnet-0000000000003")}
	sub0 := awsec2.Subnet_FromSubnetId(stack, jsii.String("subnet0"), subnets[0])
	sub1 := awsec2.Subnet_FromSubnetId(stack, jsii.String("subnet1"), subnets[1])
	sub2 := awsec2.Subnet_FromSubnetId(stack, jsii.String("subnet2"), subnets[2])
	zones := []*string{jsii.String("ca-central-1a"), jsii.String("ca-central-1b"), jsii.String("ca-central-1c")}
	vpc := awsec2.Vpc_FromVpcAttributes(stack, jsii.String(vpcID), &awsec2.VpcAttributes{
		AvailabilityZones: &zones,
		VpcId:             jsii.String(vpcID),
		PrivateSubnetIds:  &subnets,
	})
	// Nota bene! You can use awsec2.Vpc_FromLookup() instead, but it will fail if your
	// organization's VPC has more than 16 attached subnets, even if you use fewer of
	// them in your CDK script! You can check the exact number via the web console.
	// create the Lambda function with the previously created role
	lambdaFunction := awscdklambdagoalpha.NewGoFunction(stack, jsii.String("BackendLambda"), &awscdklambdagoalpha.GoFunctionProps{
		Runtime:      awslambda.Runtime_PROVIDED_AL2023(), // the Amazon Linux 2023 custom runtime
		Entry:        jsii.String("./src"),
		Architecture: awslambda.Architecture_ARM_64(),
		Bundling: &awscdklambdagoalpha.BundlingOptions{
			GoBuildFlags: jsii.Strings(`-ldflags "-s -w"`),
		},
		FunctionName:         jsii.String("BackendLambda"),
		Description:          jsii.String("Backend Lambda function"),
		Role:                 applicationRole,
		Vpc:                  vpc,
		MemorySize:           jsii.Number(128),
		EphemeralStorageSize: awscdk.Size_Mebibytes(jsii.Number(512)),
	})
	// Once created, a DynamoDB table cannot be re-created via CDK, so you may get a
	// "table already exists" error. Either drop the table manually when needed, or
	// use this existence check:
	var table awsdynamodb.Table
	dynamodbTableName := "BackendDynamoDB"
	tableEx, err := tableExists(dynamodbTableName)
	if err != nil {
		log.Fatalln("CDK script error:", err)
	}
	if !tableEx {
		// create the DynamoDB table
		table = awsdynamodb.NewTable(stack, jsii.String(dynamodbTableName), &awsdynamodb.TableProps{
			BillingMode: awsdynamodb.BillingMode_PROVISIONED,
			TableName:   jsii.String(dynamodbTableName),
			PartitionKey: &awsdynamodb.Attribute{
				Name: aws.String("LockID"),
				Type: awsdynamodb.AttributeType_STRING,
			},
			SortKey:             &awsdynamodb.Attribute{Name: aws.String("MyAttribute"), Type: awsdynamodb.AttributeType_STRING},
			TimeToLiveAttribute: jsii.String("TTL"),
		})
		table.GrantReadWriteData(lambdaFunction)
	} else {
		log.Println("DynamoDB table already exists")
	}
	// create a log group for the function
	awslogs.NewLogGroup(stack, jsii.String("BackendLambdaLogGroup"), &awslogs.LogGroupProps{
		Retention:    awslogs.RetentionDays_TWO_WEEKS, // adjust retention as needed
		LogGroupName: jsii.String("/aws/lambda/" + *lambdaFunction.FunctionName()),
	})
	// create the target group with the Lambda as its target
	targetGroup := awselasticloadbalancingv2.NewApplicationTargetGroup(stack, jsii.String("BackendTargetGroup"), &awselasticloadbalancingv2.ApplicationTargetGroupProps{
		TargetGroupName: jsii.String("BLTargetGroup"),
		TargetType:      awselasticloadbalancingv2.TargetType_LAMBDA,
		Vpc:             vpc,
		Targets:         &[]awselasticloadbalancingv2.IApplicationLoadBalancerTarget{awselasticloadbalancingv2targets.NewLambdaTarget(lambdaFunction)},
	})
	// create the (internal) Application Load Balancer
	alb := awselasticloadbalancingv2.NewApplicationLoadBalancer(stack, jsii.String("BackendALB"), &awselasticloadbalancingv2.ApplicationLoadBalancerProps{
		Vpc:            vpc,
		InternetFacing: jsii.Bool(false),
		VpcSubnets: &awsec2.SubnetSelection{
			Subnets: &[]awsec2.ISubnet{sub0, sub1, sub2},
		},
	})
	// attach an existing SSL certificate
	certArn := "arn:aws:acm:ca-central-1:0000000000:certificate/00000000-0000-0000-0000-00000000"
	// listen on HTTPS 443
	alb.AddListener(jsii.String("BackendHttpsListener"), &awselasticloadbalancingv2.BaseApplicationListenerProps{
		Certificates:        &[]awselasticloadbalancingv2.IListenerCertificate{awselasticloadbalancingv2.NewListenerCertificate(jsii.String(certArn))},
		DefaultTargetGroups: &[]awselasticloadbalancingv2.IApplicationTargetGroup{targetGroup},
		Port:                jsii.Number(443),
		Protocol:            awselasticloadbalancingv2.ApplicationProtocol_HTTPS,
	})
	// output the target group ARN
	targetGroupArn := targetGroup.TargetGroupArn()
	awscdk.NewCfnOutput(stack, jsii.String("Target group ARN"), &awscdk.CfnOutputProps{
		Value:       targetGroupArn,
		Description: jsii.String("Target group ARN"),
	})
	// allow the ALB target group to invoke the Lambda function
	servicePrincipalOpts := awsiam.ServicePrincipalOpts{
		Conditions: &map[string]interface{}{
			"ArnLike": map[string]*string{
				"aws:SourceArn": targetGroupArn,
			},
		},
	}
	principal := awsiam.NewServicePrincipal(jsii.String("elasticloadbalancing.amazonaws.com"), &servicePrincipalOpts)
	lambdaFunction.AddPermission(jsii.String("LambdaInvoke"), &awslambda.Permission{
		Principal: principal,
	})
	// output the Lambda function ARN
	awscdk.NewCfnOutput(stack, jsii.String("lambdaFunctionArn"), &awscdk.CfnOutputProps{
		Value:       lambdaFunction.FunctionArn(),
		Description: jsii.String("Lambda function ARN"),
	})
	// output the ALB DNS name
	awscdk.NewCfnOutput(stack, jsii.String("ALB DNS Name"), &awscdk.CfnOutputProps{
		Value:       alb.LoadBalancerDnsName(),
		Description: jsii.String("ALB DNS Name"),
	})
	return stack
}
func main() {
	app := awscdk.NewApp(nil)
	NewLambdaDynamodbAlbStack(app, "LambdaDynamodbAlbStack", &LambdaDynamodbAlbStackProps{
		awscdk.StackProps{
			Env: env(),
		},
	})
	app.Synth(nil)
}

func env() *awscdk.Environment {
	// nil keeps the app environment-agnostic; the stack defines its own Env above
	return nil
}
// tableExists checks via the AWS SDK whether the DynamoDB table already exists.
func tableExists(tableName string) (bool, error) {
	sess := session.Must(session.NewSessionWithOptions(session.Options{
		SharedConfigState: session.SharedConfigEnable,
	}))
	svc := dynamodb.New(sess)
	input := &dynamodb.DescribeTableInput{
		TableName: aws.String(tableName),
	}
	_, err := svc.DescribeTable(input)
	if err != nil {
		if awsErr, ok := err.(awserr.Error); ok {
			if awsErr.Code() == dynamodb.ErrCodeResourceNotFoundException {
				// table doesn't exist
				return false, nil
			}
		}
		// some other error occurred
		return false, err
	}
	// table exists
	return true, nil
}
Congrats! You’ve done it! Now your project is ready to be published with the cdk deploy command. But one more thing: your AWS account credentials. The credentials and config files are updated when you run the aws configure command. The credentials file is located at ~/.aws/credentials on Linux or macOS, or at C:\Users\USERNAME\.aws\credentials on Windows. Of course, you can also edit the config file manually. You might also want (or your organization may force you) to use the AWS SSO login approach. For this, run the aws configure sso command or edit your config file. Please take a look at this example config (SSO):
[profile myuser-it-account-dev]
sso_start_url = https://yourcompany-auth.awsapps.com/start
sso_region = ca-central-1
sso_account_id = 777777777777
sso_role_name = it-developer
region = ca-central-1
output = json
Edit this for your purposes and add it to your ~/.aws/config.
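For comparison, a non-SSO credentials file written by aws configure looks roughly like this (the values below are placeholder examples, not real keys):

```ini
[default]
aws_access_key_id = AKIAIOSFODNN7EXAMPLE
aws_secret_access_key = wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
```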
When ready, simply run cdk deploy. But if you use SSO, authorize yourself first, like this: aws sso login --profile myuser-it-account-dev. This redirects you to an authorization page. Then, in the same terminal session, run cdk deploy --profile myuser-it-account-dev. See the official manuals for more about AWS SSO and its pros and cons for organizations.
Well done! But it is time to remember our CI/CD plans! We will use GitLab in our example. The main prerequisite is to make sure that a runner is installed for the project. GitLab Runner is an application that works with GitLab CI/CD to run jobs in a pipeline. Configuring the runner is beyond the scope of this article: the topic is quite broad on its own, and it belongs more to the DevOps field than to pure development. One important thing on the developer’s side is to configure the CI/CD token for the runner. To do this, go to Settings -> CI/CD -> Runners. There, you can work with your registration information.
Now, it’s time to create a file named .gitlab-ci.yml. This is the conventional name for the YAML file that describes the pipeline.
Let the code speak for itself:
stages:
  - preparation
  - deploy

retrieve_temp_credentials:
  stage: preparation
  script:
    - >
      printf "AWS_ACCESS_KEY_ID=%s\nAWS_SECRET_ACCESS_KEY=%s\nAWS_SESSION_TOKEN=%s"
      $(aws sts assume-role
      --role-arn ${AWS_ROLE_PROD}
      --role-session-name "GitLabRunner-${CI_PROJECT_ID}-${CI_PIPELINE_ID}"
      --duration-seconds 3600
      --query 'Credentials.[AccessKeyId,SecretAccessKey,SessionToken]'
      --output text) >> assume_role.env
  tags:
    - PROD
    - it-dep
    - backend-eks-runner
  artifacts:
    reports:
      dotenv: assume_role.env
  image: public.ecr.aws/aws-cli/aws-cli:latest

deploy_using_cdk:
  stage: deploy
  script:
    - echo $AWS_ACCESS_KEY_ID
    - go mod download
    - ls -la
    - npm update -g
    - cdk deploy --require-approval ${CDK_APPROVAL_LEVEL}
  tags:
    - PROD
    - it-dep
    - backend-eks-runner
  needs:
    - job: retrieve_temp_credentials
      artifacts: true
  image: public.ecr.aws/evergen-co/cdk-go-pipeline:latest
Let me briefly comment on this script. It uses the assume-role approach. Why? Assuming a role lets us control privileges via IAM, without worrying about user-credential storage or rotation in our CI/CD settings. To use this approach, go to the IAM section of the AWS console and create a dedicated role for your pipeline. Copy its ARN and save it as a variable (say, AWS_ROLE_PROD for the production environment) in the GitLab CI/CD settings: Settings -> CI/CD -> Variables. The pipeline will use this ARN in the assume-role step.

Our YAML script consists of two sequential jobs: preparation (retrieve_temp_credentials) and deployment (deploy_using_cdk). In retrieve_temp_credentials, we call assume-role and retrieve the temporary credentials. The public.ecr.aws/aws-cli/aws-cli:latest Docker image is used to ensure the latest AWS CLI is on board. The output is received as plain text and parsed into three variables: AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, and AWS_SESSION_TOKEN. These variables are then saved into a physical file, assume_role.env. Do not try to skip this step! Variables are not shared between stages unless they are stored physically (here, via the dotenv artifact). You can also work with JSON output if that seems more convenient. Here is some example code:
- >
  export TEMP_CREDENTIALS=$(aws sts assume-role
  --role-arn ${AWS_ROLE}
  --role-session-name "GitLabCI"
  --duration-seconds 900
  --query 'Credentials.[AccessKeyId,SecretAccessKey,SessionToken]'
  --output json)
- export AWS_ACCESS_KEY_ID=$(echo $TEMP_CREDENTIALS | jq -r '.[0]')
- export AWS_SECRET_ACCESS_KEY=$(echo $TEMP_CREDENTIALS | jq -r '.[1]')
- export AWS_SESSION_TOKEN=$(echo $TEMP_CREDENTIALS | jq -r '.[2]')
Of course, in this case, you have to make sure that the jq package is installed in your Docker image.
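To sanity-check that parsing step without calling AWS at all, you can feed jq a mock array of the same shape the --query expression returns (the values below are fake placeholders):

```shell
# Mock of: aws sts assume-role ... --query 'Credentials.[AccessKeyId,SecretAccessKey,SessionToken]' --output json
TEMP_CREDENTIALS='["AKIAEXAMPLE","secretEXAMPLE","tokenEXAMPLE"]'
export AWS_ACCESS_KEY_ID=$(echo "$TEMP_CREDENTIALS" | jq -r '.[0]')
export AWS_SECRET_ACCESS_KEY=$(echo "$TEMP_CREDENTIALS" | jq -r '.[1]')
export AWS_SESSION_TOKEN=$(echo "$TEMP_CREDENTIALS" | jq -r '.[2]')
echo "$AWS_ACCESS_KEY_ID"   # prints AKIAEXAMPLE
```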
Now, let’s move on. The deploy_using_cdk stage uses the public.ecr.aws/evergen-co/cdk-go-pipeline:latest Docker image, which contains both the Go toolchain and the CDK packages needed for our purpose. The go mod download command fetches the necessary dependencies. Alternatively, you can avoid storing your go.mod files in the repo and run go mod init and go mod tidy within the pipeline instead. npm update -g ensures that the CDK package is up to date. The command cdk deploy --require-approval ${CDK_APPROVAL_LEVEL} needs some comments. The CDK process asks for manual approval when it detects a significant change. When deploying manually from the terminal, you can press Y or N yourself; in automation, this behavior is unwanted. So, I recommend creating a CDK_APPROVAL_LEVEL variable in the CI/CD settings and defining the desired behavior there. For "no questions" behavior, set the value to never. You can change this later for debugging purposes.
My congratulations! Well done!
Finally, we have to talk about the AWS IAM role. We assume that all we need is Lambda, DynamoDB, and ALB, and that we use a ready-made VPC and SSL certificate from your organization (if not, ask DevOps to configure them, or just do it yourself).
Here are the needed permissions. All the policies except AWS_CDK_CloudFormation_Lambda are AWS built-in presets, so simply add them to your role. The AWS_CDK_CloudFormation_Lambda policy needs some explanation. Go to the Policies section of the IAM settings panel and create a new policy. You can use the name AWS_CDK_CloudFormation_Lambda or any other you prefer. Add the following permissions to the policy:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AWSCDK0",
      "Effect": "Allow",
      "Action": "iam:PassRole",
      "Resource": "arn:aws:iam::777777777777:role/*"
    },
    {
      "Sid": "AWSCDK1",
      "Effect": "Allow",
      "Action": [
        "ec2:DescribeVpnGateways"
      ],
      "Resource": "*"
    },
    {
      "Sid": "AWSCDK2",
      "Effect": "Allow",
      "Action": [
        "s3:ListBucket",
        "s3:GetObject",
        "s3:GetBucketLocation",
        "s3:PutObject",
        "s3:DeleteObject"
      ],
      "Resource": [
        "arn:aws:s3:::cdk-xxxxxxxx-assets-777777777777-ca-central-1",
        "arn:aws:s3:::cdk-xxxxxxxx-assets-777777777777-ca-central-1/*"
      ]
    }
  ]
}
What is arn:aws:iam::777777777777:role/*? This is NOT the role used in the assume-role step, but a pattern covering roles in the account you are logged into — the same AWS account referenced in your CDK code. See here:
Env: &awscdk.Environment{
	Region:  jsii.String(awsRegion),
	Account: jsii.String(awsAccount),
},
I recommend passing the account and region from the CI/CD settings (Variables section). In my case, I created the AWS_ACCOUNT and AWS_REGION variables accordingly. This allows me to make changes without editing the code itself.
What is arn:aws:s3:::cdk-xxxxxxxx-assets-777777777777-ca-central-1? When DevOps engineers configure a runner, they have to create an appropriate AWS CDK toolkit role and an associated S3 bucket for asset-staging purposes. The bucket above is the one I am speaking about. In your case, investigate or ask your DevOps team for the appropriate ARN. Note that none of this is necessary if you deploy manually from the terminal with the cdk deploy command. Now, once all the errors are fixed and everything is set up as needed, your AWS Lambda function will be deployed on every pushed commit.
Ok, we have almost finished. The last thing I would like to mention is that the CI/CD script can be improved by handling different cases for test and production branches, etc. I also recommend going to the Route 53 section of the AWS console and attaching a domain name matching your certificate: Route 53 lets you point the domain at your ALB by its DNS name. Feel free to experiment and improve.
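As a sketch of the branch-handling idea, GitLab rules: can pick a different role per branch. This assumes a hypothetical AWS_ROLE_TEST variable defined alongside AWS_ROLE_PROD in your CI/CD settings:

```yaml
deploy_using_cdk:
  stage: deploy
  rules:
    # production deploys only from the main branch
    - if: '$CI_COMMIT_BRANCH == "main"'
      variables:
        AWS_ROLE: $AWS_ROLE_PROD
    # test deploys from the develop branch (AWS_ROLE_TEST is hypothetical)
    - if: '$CI_COMMIT_BRANCH == "develop"'
      variables:
        AWS_ROLE: $AWS_ROLE_TEST
```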
Good luck!