TL;DR
In this post, we'll send a file to an AWS API Gateway endpoint that triggers a Lambda function written in Go, which uploads the file to S3 and returns a CloudFront link to it. The objective is to get to know these AWS services a bit better in practice.
Setting up S3 and CloudFront
Let's start by creating our bucket on S3, where we'll store the files sent from our Lambda function later. Go to the AWS S3 service and create a new bucket with the default configuration.
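If you prefer the CLI, the same thing can be done with something like this (the bucket name and region are placeholders):

aws s3 mb s3://your-bucket-name --region your-aws-region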
Now we'll configure CloudFront, a way to share our files with low latency. You can read more about this service here. After clicking on "Create distribution", select the bucket created previously on S3 as the origin and create the origin access control setting with the default parameters.
After the distribution is created, a warning message will appear asking us to update our bucket policy. Click "Copy policy", follow the highlighted link, and paste the copied policy into the "Bucket policy" section.
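The policy AWS generates for an origin access control has roughly this shape (the account ID and distribution ID below are placeholders; use the one AWS copies for you):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowCloudFrontServicePrincipalReadOnly",
      "Effect": "Allow",
      "Principal": { "Service": "cloudfront.amazonaws.com" },
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::your-bucket-name/*",
      "Condition": {
        "StringEquals": {
          "AWS:SourceArn": "arn:aws:cloudfront::123456789012:distribution/YOUR_DISTRIBUTION_ID"
        }
      }
    }
  ]
}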
To set our AWS credentials locally, we can use the aws configure command from the AWS CLI.
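It prompts for the access key pair, default region, and output format:

$ aws configure
AWS Access Key ID [None]: YOUR_ACCESS_KEY_ID
AWS Secret Access Key [None]: YOUR_SECRET_ACCESS_KEY
Default region name [None]: your-aws-region
Default output format [None]: json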
Setting up the environment locally
First, we'll set up the environment to test the Lambda function locally, without uploading the code to AWS on every change. This makes future changes much faster to verify and gives us confidence that the code itself is correct: if we find an issue after uploading the code, we'll know the problem is in the AWS service configuration or permissions, not in the code.
For that, we'll need the SAM CLI to simulate a serverless environment (it requires Docker), and we'll create a template.yml to be used:
AWSTemplateFormatVersion: '2010-09-09'
Transform: 'AWS::Serverless-2016-10-31'
Description: An example AWS Serverless Application.
Resources:
  MyApi:
    Type: 'AWS::Serverless::Api'
    Properties:
      StageName: Prod
      BinaryMediaTypes: # allow binary data to be sent on requests
        - '*/*'
  MyFunction:
    Type: 'AWS::Serverless::Function'
    Properties:
      Handler: yuta # Set your handler here (the binary)
      Runtime: go1.x # Specify the Go runtime version
      CodeUri: . # Path to your Go code
      MemorySize: 128 # Set the desired memory size
      Timeout: 10 # Set the function timeout
      Events:
        ApiEvent:
          Type: Api # This connects the Lambda to an API Gateway
          Properties:
            RestApiId: !Ref MyApi # Attach the event to MyApi above so BinaryMediaTypes applies
            Path: /myresource # Route triggering the function
            Method: POST # Define the HTTP method (e.g., GET, POST)
Outputs:
  ApiEndpoint:
    Description: "API Gateway endpoint URL"
    Value:
      Fn::Sub: "https://${MyApi}.execute-api.${AWS::Region}.amazonaws.com/Prod/myresource"
After that, let's write the code of the Lambda function. Initialize the Go module at the same level as the template.yml we just created:
go mod init github.com/your-github-user/your-repository-name
To start, we'll create the main file. I created it inside the folder src:
package main

import (
	"context"
	"fmt"

	"github.com/aws/aws-lambda-go/events"
	"github.com/aws/aws-lambda-go/lambda"
)

func main() {
	lambda.Start(handler) // Every time this lambda function is triggered (each request), it calls the handler function with the request event.
}

func handler(ctx context.Context, event events.APIGatewayProxyRequest) (events.APIGatewayProxyResponse, error) {
	fmt.Println("Received event! Running...")

	awsRegion := "your-aws-region"   // Set your AWS region here
	bucketName := "your-bucket-name" // Set the name of the bucket created on S3 here

	fileHeaders, headersError := GetHeaders(event, 10485760) // 10 MB limit
	if headersError != nil {
		return BadRequestFeedback("Error reading headers: " + headersError.Error()), nil
	}
	if len(fileHeaders) == 0 { // Guard against requests without a file, so fileHeaders[0] can't panic
		return BadRequestFeedback("No file found in the request"), nil
	}

	fileName, uploadError := UploadFile(fileHeaders[0], awsRegion, bucketName)
	if uploadError != nil {
		return BadRequestFeedback("Error uploading file to S3: " + uploadError.Error()), nil
	}

	cloudFrontURL := fmt.Sprintf("%s/%s", "https://your-cdn.cloudfront.net", fileName) // Set the distribution domain created on CloudFront here

	return OkFeedback("File uploaded successfully to CDN: " + cloudFrontURL), nil
}
In the GetHeaders helper function, we validate some headers sent to the gateway and extract the file information.
package main

import (
	"encoding/base64"
	"fmt"
	"mime"
	"mime/multipart"
	"net/http"
	"strings"

	"github.com/aws/aws-lambda-go/events"
)

func GetHeaders(lambdaReq events.APIGatewayProxyRequest, maxFileSizeBytes int64) ([]*multipart.FileHeader, error) {
	headers := http.Header{}
	for header, value := range lambdaReq.Headers {
		headers.Add(header, value)
	}

	contentType := headers.Get("Content-Type")
	if contentType == "" {
		return nil, fmt.Errorf("Content-Type header not found")
	}

	_, params, err := mime.ParseMediaType(contentType)
	if err != nil {
		return nil, err
	}

	boundary := params["boundary"]
	if boundary == "" {
		return nil, fmt.Errorf("boundary not found in Content-Type header")
	}

	stringReader := strings.NewReader(lambdaReq.Body)
	b64Reader := base64.NewDecoder(base64.StdEncoding, stringReader)
	multipartReader := multipart.NewReader(b64Reader, boundary)

	form, err := multipartReader.ReadForm(maxFileSizeBytes) // Receiving the form data
	if err != nil {
		return nil, err
	}

	var files []*multipart.FileHeader
	for _, currentFileHeaders := range form.File {
		files = append(files, currentFileHeaders...)
	}
	return files, nil
}
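One detail: this version always base64-decodes the body. The APIGatewayProxyRequest event carries an IsBase64Encoded flag, so a slightly more defensive sketch (adding io to the imports) could decode only when needed:

// Sketch: decode the body only when API Gateway actually base64-encoded it.
var bodyReader io.Reader = strings.NewReader(lambdaReq.Body)
if lambdaReq.IsBase64Encoded {
	bodyReader = base64.NewDecoder(base64.StdEncoding, bodyReader)
}
multipartReader := multipart.NewReader(bodyReader, boundary)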
In the UploadFile helper function, we create an AWS session and send the file to the S3 bucket.
package main

import (
	"bytes"
	"mime/multipart"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/s3/s3manager"
)

type UploadRes struct {
	S3Path string
	S3URL  string
}

func UploadFile(fileHeader *multipart.FileHeader, region string, bucket string) (string, error) {
	file, openError := fileHeader.Open()
	if openError != nil {
		return "", openError
	}
	defer file.Close() // Release the file once we're done with it

	var fileContents bytes.Buffer
	_, readError := fileContents.ReadFrom(file)
	if readError != nil {
		return "", readError
	}

	awsSession, sessionError := session.NewSession(&aws.Config{ // Creating a session
		Region: aws.String(region),
	})
	if sessionError != nil {
		return "", sessionError
	}

	uploader := s3manager.NewUploader(awsSession)
	fileName := fileHeader.Filename

	_, uploadError := uploader.Upload(&s3manager.UploadInput{ // Uploading the file to AWS S3
		Bucket: aws.String(bucket),
		Key:    aws.String(fileName),
		Body:   bytes.NewReader(fileContents.Bytes()),
	})
	if uploadError != nil {
		return "", uploadError
	}

	return fileName, nil
}
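An optional improvement, sketched here rather than part of the original code: the multipart file header carries the file's MIME type, and s3manager.UploadInput accepts a ContentType field, so we could forward it and CloudFront would serve the file with the right Content-Type header:

// Optional sketch: forward the uploaded file's MIME type to S3.
_, uploadError := uploader.Upload(&s3manager.UploadInput{
	Bucket:      aws.String(bucket),
	Key:         aws.String(fileName),
	Body:        bytes.NewReader(fileContents.Bytes()),
	ContentType: aws.String(fileHeader.Header.Get("Content-Type")),
})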
To return the error and success feedback in JSON format, we can create helper functions that receive the message as a parameter.
package main

import "github.com/aws/aws-lambda-go/events"

const (
	StatusOK         = 200
	StatusBadRequest = 400
)

func OkFeedback(message string) events.APIGatewayProxyResponse {
	return events.APIGatewayProxyResponse{
		StatusCode: StatusOK,
		Headers: map[string]string{
			"Content-Type": "application/json",
		},
		Body: `{"message": "` + message + `"}`,
	}
}

func BadRequestFeedback(message string) events.APIGatewayProxyResponse {
	return events.APIGatewayProxyResponse{
		StatusCode: StatusBadRequest,
		Headers: map[string]string{
			"Content-Type": "application/json",
		},
		Body: `{"message": "` + message + `"}`,
	}
}
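Note that building JSON by string concatenation breaks if the message ever contains quotes or backslashes. A safer sketch (not in the original code) would marshal the payload with encoding/json, added to the imports:

// Feedback is a sketch of a single helper that safely encodes the message.
func Feedback(statusCode int, message string) events.APIGatewayProxyResponse {
	body, _ := json.Marshal(map[string]string{"message": message})
	return events.APIGatewayProxyResponse{
		StatusCode: statusCode,
		Headers:    map[string]string{"Content-Type": "application/json"},
		Body:       string(body),
	}
}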
Since we're using the AWS SDK for Go, run go mod tidy to install the dependencies. The folder structure looks roughly like this (the helper file names inside src are illustrative; name yours as you like):
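.
├── go.mod
├── go.sum
├── template.yml
└── src/
    ├── main.go
    ├── headers.go
    ├── upload.go
    └── feedback.go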
Running locally
Now we can run our application, building the binary of the code and starting the application locally with go build -o yuta ./src/* && sam local start-api. Since I said my handler would be "yuta" in template.yml, I build the binary with that name. The server should now be running in the terminal, and we're able to send a POST request to the route as multipart/form-data.
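For example, assuming SAM's default port 3000 and the /myresource route from the template (the file path is a placeholder):

curl -X POST http://127.0.0.1:3000/myresource -F "file=@./my-picture.png"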
Uploading the Lambda function to AWS
Now we need to create a zip to send our code to AWS Lambda. Zip the binary built with go build -o yuta ./src/* and let's upload it on Lambda!
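On Linux/macOS, that can look like this. Since Lambda runs on Linux, the binary should be built for linux/amd64 (the zip file name is a placeholder):

GOOS=linux GOARCH=amd64 go build -o yuta ./src/* && zip yuta.zip yuta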
On the Lambda service, create a function using the Go runtime. In the code source section, click "Upload from .zip file" and upload the zip of the binary generated before. Change the Handler in Runtime settings to the name of the binary inside the zip (in this case, "yuta").
Now we need a way to trigger this serverless function. In this case, we'll create an API Gateway to send the data to the Lambda function: when a request hits the route mapped in API Gateway, it invokes our function. Click "Add trigger", select API Gateway, and create an HTTP API with security set to open.
At this point, requests will fail with a 403 (access denied), because the Lambda doesn't have permission to upload files to your S3 bucket. Go to Configuration > Permissions and click the role name to be redirected to the role in IAM. There, we'll add a new permission as an inline policy, selecting the S3 service and granting PutObject access.
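The resulting inline policy looks roughly like this (the bucket name is a placeholder):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::your-bucket-name/*"
    }
  ]
}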
After that, using the link of our API Gateway, we'll be able to send a file, have it uploaded to S3, and receive the link of the uploaded file with the CloudFront prefix.
Next steps
- After each change to our code, we need to build, zip, and upload it to Lambda manually again. What about creating a CI/CD pipeline to automate this for us, with AWS CodeBuild or GitHub Actions?
- We could configure the CloudWatch service to see more detailed logs of what's causing errors. On this project, I had some problems after uploading the code, and with the CloudWatch logs it was very easy to fix them.
- What about dockerizing the application?
- We could use environment variables to make the main configuration (region, bucket name, CloudFront domain) more dynamic
- We could add more file validations, like validating the file size, and upload multiple files at the same time using goroutines, storing them in a folder and returning the link of that folder
Conclusion
And that's it! We went through several AWS services to get to know this platform a bit better in practice, in a simple way, using Lambda, API Gateway, S3, and CloudFront.
C u soon!