Rasanpreet Singh

Automate CloudWatch Logs Export to S3 using Lambda and EventBridge

Overview

Amazon CloudWatch Logs lets you consolidate the logs from every system, application, and AWS service you use into a single, highly scalable service. For each log group, you can modify the retention policy, keep logs indefinitely, or select a retention period between one day and ten years.
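As an aside, retention can be adjusted per log group with a single API call. Below is a minimal boto3 sketch, assuming a hypothetical log group named my-log-group:

import boto3

logs = boto3.client("logs")

# Set a 30-day retention policy on a hypothetical log group;
# valid values range from 1 day up to 10 years (3653 days)
logs.put_retention_policy(
    logGroupName="my-log-group",
    retentionInDays=30,
)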

If your organization needs to report on CloudWatch data for longer than the configured retention period, exporting CloudWatch data to an S3 bucket is a critical step in many hybrid-environment scenarios. S3 offers extremely durable storage and can also be integrated with other monitoring or logging systems (such as Microsoft Sentinel). Additionally, keeping logs in CloudWatch with an extended retention period can be costly.

There are two ways to push CloudWatch Logs to S3:

Manual process

Using the AWS Management Console, AWS Command Line Interface (CLI), or an SDK, we can export log data to S3 directly without involving any other AWS service. This technique is straightforward and ideal when we only need to export logs once. However, it cannot automatically move CloudWatch logs to S3 at a regular interval. To learn more, see: https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/S3ExportTasksConsole.html
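For illustration, a one-time export through the SDK boils down to a single create_export_task call. The following is a minimal boto3 sketch, with my-log-group and my-bucket as hypothetical placeholder names:

import boto3
import datetime

logs = boto3.client("logs")

# Export the last 7 days of a hypothetical log group to a hypothetical bucket
now = datetime.datetime.now()
task = logs.create_export_task(
    logGroupName="my-log-group",
    fromTime=int((now - datetime.timedelta(days=7)).timestamp() * 1000),
    to=int(now.timestamp() * 1000),
    destination="my-bucket",
    destinationPrefix="one-off-export",
)

# Export tasks run asynchronously; check progress with the task ID
status = logs.describe_export_tasks(taskId=task["taskId"])
print(status["exportTasks"][0]["status"])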

Automated process

In this article, we'll go over a step-by-step procedure for automating the export of a CloudWatch log group to an S3 bucket using a Lambda function and EventBridge. The Lambda function performs the export, and an EventBridge rule triggers it on a schedule, so the export runs automatically every time the rule fires.

Prerequisites

- One CloudWatch log group where logs are ingested by an AWS service or application.
- One Amazon S3 bucket in the same region as the CloudWatch log group.
- One IAM Lambda execution role with permissions for S3 and CloudWatch Logs.

(Architecture diagram)

The architecture diagram illustrates the complete deployment, which includes a Lambda function, an EventBridge rule, CloudWatch Logs, and an S3 bucket.

Steps walkthrough

Create Lambda Function

Go to the AWS Lambda dashboard > Functions > Create function > choose Author from scratch. Give the function a name and choose Python 3.7 as the runtime.

(Screenshot: Create function)

Under Permissions, choose an existing execution role with CloudWatch and S3 permissions, then create the function.
Please refer to the snippet below for the Lambda role permissions.

(Screenshot: Lambda role permissions)
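Since the screenshot does not reproduce the exact statements, here is an indicative policy for the Lambda execution role. The bucket name s3-bucket is a placeholder, and you should scope the resources down for your environment:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowExportTasks",
            "Effect": "Allow",
            "Action": [
                "logs:CreateExportTask",
                "logs:DescribeExportTasks",
                "logs:CreateLogGroup",
                "logs:CreateLogStream",
                "logs:PutLogEvents"
            ],
            "Resource": "*"
        },
        {
            "Sid": "AllowBucketWrite",
            "Effect": "Allow",
            "Action": [
                "s3:GetBucketAcl",
                "s3:PutObject"
            ],
            "Resource": [
                "arn:aws:s3:::s3-bucket",
                "arn:aws:s3:::s3-bucket/*"
            ]
        }
    ]
}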

After creating the Lambda function, go to the Code window and paste the following code, updating these variables for your environment:
GROUP_NAME = the name of the CloudWatch log group you want to export
DESTINATION_BUCKET = the name of the S3 bucket for the exported log data
PREFIX = the prefix used to begin each exported object's key



import boto3
import datetime

# Update these values for your environment
GROUP_NAME = "CloudWatch log group"   # log group to export
DESTINATION_BUCKET = "S3-bucket"      # destination S3 bucket
PREFIX = "S3-Prefix"                  # key prefix for exported objects
NDAYS = 1                             # export the one-day window ending NDAYS - 1 days ago


def lambda_handler(event, context):
    # Compute the export window inside the handler so warm Lambda
    # invocations do not reuse a stale, module-level timestamp
    currentTime = datetime.datetime.now()
    StartDate = currentTime - datetime.timedelta(days=NDAYS)
    EndDate = currentTime - datetime.timedelta(days=NDAYS - 1)

    # CloudWatch Logs expects timestamps in milliseconds since the epoch
    fromDate = int(StartDate.timestamp() * 1000)
    toDate = int(EndDate.timestamp() * 1000)

    # Partition exported objects by date, e.g. S3-Prefix/2024/01/31
    BUCKET_PREFIX = PREFIX + "/" + StartDate.strftime("%Y/%m/%d")

    client = boto3.client('logs')
    response = client.create_export_task(
        logGroupName=GROUP_NAME,
        fromTime=fromDate,
        to=toDate,
        destination=DESTINATION_BUCKET,
        destinationPrefix=BUCKET_PREFIX,
    )
    print(response)



The Lambda code screen will look as below:

(Screenshot: Lambda code editor)

Create EventBridge Rule

Go to the CloudWatch dashboard > Events > Create rule. Give the rule a name, choose Schedule as the rule type, and select Next.

(Screenshot: Create rule)

In the schedule pattern option, set a rate expression that runs every 5 minutes, i.e. rate(5 minutes), and select Next.

(Screenshot: Schedule pattern)

Next, for the target type, select AWS service. Choose Lambda function as the target and select the Lambda function created in the steps above. Verify the configuration and create the rule.

(Screenshot: Rule target configuration)
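If you prefer scripting over the console, the same rule and target can be set up with boto3. The following sketch assumes placeholder account, region, and function names:

import boto3

REGION = "us-east-1"                      # placeholder region
ACCOUNT_ID = "123456789012"               # placeholder account ID
FUNCTION_NAME = "export-cloudwatch-logs"  # hypothetical Lambda function name

events = boto3.client("events", region_name=REGION)
lambda_client = boto3.client("lambda", region_name=REGION)

# Create (or update) the scheduled rule
rule = events.put_rule(
    Name="export-logs-schedule",
    ScheduleExpression="rate(5 minutes)",
    State="ENABLED",
)

# Allow EventBridge to invoke the function (the console does this for you)
lambda_client.add_permission(
    FunctionName=FUNCTION_NAME,
    StatementId="eventbridge-invoke",
    Action="lambda:InvokeFunction",
    Principal="events.amazonaws.com",
    SourceArn=rule["RuleArn"],
)

# Point the rule at the Lambda function
events.put_targets(
    Rule="export-logs-schedule",
    Targets=[{
        "Id": "export-lambda",
        "Arn": f"arn:aws:lambda:{REGION}:{ACCOUNT_ID}:function:{FUNCTION_NAME}",
    }],
)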

To allow CloudWatch Logs to store objects in the target S3 bucket, update the bucket policy. CloudWatch Logs checks the bucket ACL before writing, which is why s3:GetBucketAcl is required alongside s3:PutObject. Add the following bucket policy with the correct S3 bucket ARN and AWS Region details.



{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "CloudWatchLogsGetBucketAcl",
            "Effect": "Allow",
            "Principal": {
                "Service": "logs.us-east-1.amazonaws.com"
            },
            "Action": "s3:GetBucketAcl",
            "Resource": "arn:aws:s3:::s3-bucket"
        },
        {
            "Sid": "CloudWatchLogsPutObject",
            "Effect": "Allow",
            "Principal": {
                "Service": "logs.us-east-1.amazonaws.com"
            },
            "Action": "s3:PutObject",
            "Resource": "arn:aws:s3:::s3-bucket/*"
        }
    ]
}



Now the Lambda function will execute every 5 minutes and export the CloudWatch logs to the S3 bucket. Note that CloudWatch Logs permits only one active export task per account at a time, so if a previous task is still running, a new invocation's create_export_task call will fail; for a one-day export window, a daily schedule such as rate(1 day) may be a better fit in production. After a successful Lambda execution, you can view the CloudWatch logs exported to the S3 bucket.
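To verify the export programmatically, you can list the objects under the configured prefix. A small sketch, assuming the placeholder bucket and prefix from the Lambda code:

import boto3

s3 = boto3.client("s3")

# List exported objects under the date-partitioned prefix (placeholder names)
resp = s3.list_objects_v2(Bucket="S3-bucket", Prefix="S3-Prefix/")
for obj in resp.get("Contents", []):
    print(obj["Key"], obj["Size"])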

(Screenshot: Exported logs in the S3 bucket)
