Are you a Cloud Architect, Cloud Engineer, or DevOps engineer who has set up S3 behind AWS CloudFront and keeps having to invalidate the cache every time a new build is uploaded to S3? The task itself is small, but doing it by hand on every release quickly becomes tedious.
In this blog, we will discuss two ways to automate it:
- Invalidate the cache by triggering a lambda using S3 event triggers.
- Invalidate the cache by triggering a lambda using CodePipeline.
Prerequisites
You should already have a website set up on S3 and configured to use a CloudFront distribution. If not, click here to learn.
For approach 2 (invalidating the CloudFront cache using CodePipeline), you must already have a CodePipeline set up for your frontend deployment. If not, click here to learn.
Approach 1: Invalidate the cache by triggering a lambda using S3 event triggers
Step 1: Create a lambda function
Select Python as your runtime and create a new execution role for your Lambda.
Use the Lambda code below, replacing the placeholder distribution ID and object paths with your own values.
import time

import boto3

cloud_front = boto3.client("cloudfront")

def lambda_handler(event, context):
    # Extract information from the S3 event
    bucket = event['Records'][0]['s3']['bucket']['name']
    key = event['Records'][0]['s3']['object']['key']
    print(f"New object uploaded: s3://{bucket}/{key}")

    # Specify the CloudFront distribution ID
    distribution_id = "your_cloudfront_distribution_id"

    # Specify the object paths to be invalidated (e.g., '/*' to invalidate everything)
    object_paths = ["/*"]

    try:
        # Create the CloudFront invalidation
        cloud_front.create_invalidation(
            DistributionId=distribution_id,
            InvalidationBatch={
                'Paths': {
                    'Quantity': len(object_paths),
                    'Items': object_paths,
                },
                # CallerReference must be unique for every new invalidation request
                'CallerReference': str(time.time()),
            }
        )
    except Exception as e:
        print(f"Error: {e}")
        raise e
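Before wiring up the trigger, you can sanity-check the function by invoking it with a minimal S3-style event. This is only a sketch, assuming the handler above is saved locally as lambda_function.py and your local AWS credentials are allowed to call cloudfront:CreateInvalidation; the bucket name and object key are placeholders.

# Minimal local smoke test for the handler above (assumes it is saved as lambda_function.py).
# Requires local AWS credentials that allow cloudfront:CreateInvalidation.
from lambda_function import lambda_handler

sample_event = {
    "Records": [
        {
            "s3": {
                "bucket": {"name": "my-frontend-bucket"},  # placeholder bucket
                "object": {"key": "index.html"},           # placeholder object key
            }
        }
    ]
}

lambda_handler(sample_event, None)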
Step 2: Assign required permissions
Go to your Lambda function's IAM role and attach an inline policy that allows it to create CloudFront invalidations. Replace ACCOUNT_ID and DISTRIBUTION_ID with your values.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "VisualEditor0",
      "Effect": "Allow",
      "Action": "cloudfront:CreateInvalidation",
      "Resource": "arn:aws:cloudfront::ACCOUNT_ID:distribution/DISTRIBUTION_ID"
    }
  ]
}
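If you prefer to attach the inline policy with boto3 instead of the console, a sketch like the one below should work; the role name and policy name are placeholders, and ACCOUNT_ID/DISTRIBUTION_ID still need to be replaced.

import json

import boto3

iam = boto3.client("iam")

# Same policy document as above; ACCOUNT_ID and DISTRIBUTION_ID are placeholders.
policy_document = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": "cloudfront:CreateInvalidation",
            "Resource": "arn:aws:cloudfront::ACCOUNT_ID:distribution/DISTRIBUTION_ID",
        }
    ],
}

# Attach the inline policy to the Lambda's execution role (names are placeholders).
iam.put_role_policy(
    RoleName="my-invalidation-lambda-role",
    PolicyName="AllowCloudFrontInvalidation",
    PolicyDocument=json.dumps(policy_document),
)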
Step 3: Configure S3 event trigger
Navigate to your S3 bucket > Properties and create an event notification. Choose the event types you want to react to (for example, all object create events), select your Lambda function as the destination, and click Save.
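If you would rather configure the trigger programmatically, a boto3 sketch like the one below should do the same thing; the bucket name and function ARN are placeholders. Note that outside the console you also have to grant S3 permission to invoke the function yourself (the console does this for you when you save the notification).

import boto3

s3 = boto3.client("s3")
lambda_client = boto3.client("lambda")

bucket_name = "my-frontend-bucket"  # placeholder
function_arn = "arn:aws:lambda:us-east-1:ACCOUNT_ID:function:my-invalidation-lambda"  # placeholder

# Allow S3 to invoke the Lambda (the console adds this permission automatically).
lambda_client.add_permission(
    FunctionName=function_arn,
    StatementId="AllowS3Invoke",
    Action="lambda:InvokeFunction",
    Principal="s3.amazonaws.com",
    SourceArn=f"arn:aws:s3:::{bucket_name}",
)

# Fire the Lambda on every object creation in the bucket.
s3.put_bucket_notification_configuration(
    Bucket=bucket_name,
    NotificationConfiguration={
        "LambdaFunctionConfigurations": [
            {
                "LambdaFunctionArn": function_arn,
                "Events": ["s3:ObjectCreated:*"],
            }
        ]
    },
)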
Approach 2: Invalidate the cache by triggering a lambda using CodePipeline
In this approach, the steps are pretty similar, but instead of S3 event notifications, our lambda function will be triggered by CodePipeline itself.
Step 1: Create Lambda function
Give a relevant name, select Python as a runtime environment, and select Create a new role.
Paste the code below into your Lambda function and deploy it. There is no need to change anything in the code; the distribution ID and object paths will be passed in from CodePipeline as user parameters.
import json

import boto3

code_pipeline = boto3.client("codepipeline")
cloud_front = boto3.client("cloudfront")

def lambda_handler(event, context):
    # CodePipeline passes a job ID that we must report success or failure against
    job_id = event["CodePipeline.job"]["id"]

    try:
        # Read the distribution ID and object paths from the action's user parameters
        user_params = json.loads(
            event["CodePipeline.job"]
            ["data"]
            ["actionConfiguration"]
            ["configuration"]
            ["UserParameters"]
        )

        # Create the CloudFront invalidation
        cloud_front.create_invalidation(
            DistributionId=user_params["distributionId"],
            InvalidationBatch={
                "Paths": {
                    "Quantity": len(user_params["objectPaths"]),
                    "Items": user_params["objectPaths"],
                },
                "CallerReference": event["CodePipeline.job"]["id"],
            },
        )
    except Exception as e:
        # Tell CodePipeline the action failed so the pipeline doesn't hang
        code_pipeline.put_job_failure_result(
            jobId=job_id,
            failureDetails={
                "type": "JobFailed",
                "message": str(e),
            },
        )
    else:
        # Report success back to CodePipeline
        code_pipeline.put_job_success_result(
            jobId=job_id,
        )
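For reference, the event that CodePipeline sends to the Lambda looks roughly like the dictionary below; the job ID is generated by CodePipeline, and UserParameters is the JSON string you will configure in Step 3. Invoking the handler with a made-up job ID will make the put_job_*_result call fail, so this is mainly to show the shape the code expects.

# Approximate shape of the invocation event the handler above parses.
# The job id here is a placeholder; CodePipeline generates the real one.
sample_event = {
    "CodePipeline.job": {
        "id": "11111111-2222-3333-4444-555555555555",
        "data": {
            "actionConfiguration": {
                "configuration": {
                    "UserParameters": '{"distributionId": "1234567890", "objectPaths": ["/*"]}'
                }
            }
        },
    }
}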
Step 2: Assign required permissions
Go to your Lambda function's IAM role and attach an inline policy that allows it to create CloudFront invalidations and report results back to CodePipeline. Replace ACCOUNT_ID and DISTRIBUTION_ID with your values, and adjust the log group ARN if you named your function something other than InvalidationCachePipeline.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "VisualEditor0",
      "Effect": "Allow",
      "Action": [
        "logs:CreateLogGroup",
        "cloudfront:CreateInvalidation"
      ],
      "Resource": [
        "arn:aws:logs:us-east-1:ACCOUNT_ID:*",
        "arn:aws:cloudfront::ACCOUNT_ID:distribution/DISTRIBUTION_ID"
      ]
    },
    {
      "Sid": "VisualEditor1",
      "Effect": "Allow",
      "Action": [
        "codepipeline:PutJobFailureResult",
        "codepipeline:PutJobSuccessResult"
      ],
      "Resource": "*"
    },
    {
      "Sid": "VisualEditor2",
      "Effect": "Allow",
      "Action": [
        "logs:CreateLogStream",
        "logs:PutLogEvents"
      ],
      "Resource": "arn:aws:logs:us-east-1:ACCOUNT_ID:log-group:/aws/lambda/InvalidationCachePipeline:*"
    }
  ]
}
Step 3: Configure CodePipeline
To automate the cache invalidation process, we will add a new stage to our pipeline. Navigate to your frontend pipeline and click the Edit button.
Click on “Add stage” to add a new stage after deploying content to Amazon S3.
Name the stage “Cache-Invalidate” and add an action group.
- Action name: a name of your choice
- Action provider: AWS Lambda
- Region: your region
- Input artifacts: select BuildArtifact
- Function name: select your function
- User parameters: specify your distribution ID and the object paths to be invalidated, for example:
{"distributionId": "1234567890","objectPaths":["/*"]}
Click Done and save your pipeline. From now on, whenever the pipeline is triggered, the CloudFront cache will be invalidated as soon as the deployment to S3 finishes.
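To confirm that an invalidation was actually created, you can list the most recent invalidations for the distribution, for example with a small boto3 check like this (the distribution ID is a placeholder):

import boto3

cloud_front = boto3.client("cloudfront")

# List recent invalidations for the distribution (placeholder ID).
response = cloud_front.list_invalidations(DistributionId="your_cloudfront_distribution_id")
for invalidation in response["InvalidationList"].get("Items", []):
    print(invalidation["Id"], invalidation["Status"], invalidation["CreateTime"])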
Hurray 😍 Our workflow is working as expected, and our Lambda function created a new invalidation for our CloudFront distribution.