Yaroslav Yarmoshyk

Detect Inappropriate Content with AWS Rekognition

Introduction

In this article I describe how to use the AWS Rekognition service to detect and block images that don't comply with your content policy.

This approach applies to any website that allows users to upload images that are subsequently stored in an S3 bucket.

Let's say your website has a file upload feature. Users can publish inappropriate images or videos that damage the reputation of your website, leading to user dissatisfaction, loss of trust, and potential legal issues.

Another possible case involves educational platforms, where a teacher accidentally uploads home videos instead of learning material.

  • Problem: User-uploaded images may contain explicit or harmful content such as nudity, violence, hate symbols or other inappropriate material. If such images remain publicly accessible, they can violate platform policies, offend users and potentially lead to legal issues.
  • Solution: Use AWS Rekognition to automatically detect such content and move it into a secure, non-public location, ensuring that inappropriate content is promptly removed from public access.

This is more of a solution design than a how-to implementation guide.

Disclaimer

AWS Rekognition is a comprehensive image and video analysis service offered by Amazon Web Services (AWS). It is powered by deep learning technology and requires no machine learning expertise to use.

It provides object and scene detection, allowing it to identify various elements within images and videos. Facial analysis and recognition features allow it to detect faces and emotions, and even recognize celebrities, making it useful for security and personalization applications.

Additionally, content moderation tools help automatically identify and filter inappropriate or explicit content, ensuring compliance and safety. This article focuses primarily on using Rekognition for content moderation.
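
To make this concrete, here is a minimal sketch of what a moderation call looks like with boto3 (the AWS SDK for Python). General object and scene detection uses the DetectLabels operation, while content moderation relies on DetectModerationLabels, which is what the sketch below calls. The bucket name, object key, and confidence threshold are hypothetical and used only for illustration:

```python
import boto3

# Hypothetical bucket and key, used only for illustration
BUCKET = "my-public-bucket"
KEY = "uploads/photo.jpg"

rekognition = boto3.client("rekognition")

# DetectModerationLabels analyzes an image stored in S3 and returns
# unsafe-content labels (e.g. "Explicit Nudity", "Violence") with confidence scores
response = rekognition.detect_moderation_labels(
    Image={"S3Object": {"Bucket": BUCKET, "Name": KEY}},
    MinConfidence=80,  # ignore labels detected with less than 80% confidence
)

for label in response["ModerationLabels"]:
    print(label["Name"], label.get("ParentName", ""), round(label["Confidence"], 1))
```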

For those of you who are looking for more detailed information, you can visit the AWS Rekognition Overview and check its Key Features.

Automated Content Moderation With AWS Rekognition

Data Flow Summary

  1. Upload Path: User → CloudFront → Public S3 Bucket
  2. Processing Path: CloudWatch Event → Lambda Function → AWS Rekognition
  3. Response Path: If flagged → Move to Secure S3 Bucket + SES Email Notification
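
As a sketch of how the processing path can be wired up, the hourly trigger is an EventBridge (CloudWatch Events) schedule rule that targets the Lambda function. The rule name, target ID, and function ARN below are assumptions for illustration:

```python
import boto3

events = boto3.client("events")
lambda_client = boto3.client("lambda")

# Hypothetical names and ARNs, used only for illustration
RULE_NAME = "moderate-uploads-hourly"
FUNCTION_ARN = "arn:aws:lambda:eu-west-1:123456789012:function:moderate-uploads"

# Schedule rule that fires once per hour
rule = events.put_rule(Name=RULE_NAME, ScheduleExpression="rate(1 hour)")

# Point the rule at the moderation Lambda function
events.put_targets(
    Rule=RULE_NAME,
    Targets=[{"Id": "moderation-lambda", "Arn": FUNCTION_ARN}],
)

# Allow EventBridge to invoke the function
lambda_client.add_permission(
    FunctionName=FUNCTION_ARN,
    StatementId="allow-eventbridge-hourly",
    Action="lambda:InvokeFunction",
    Principal="events.amazonaws.com",
    SourceArn=rule["RuleArn"],
)
```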

Automated Image Analysis With AWS Rekognition

  1. User Interaction:
    • Action: A user uploads an erotic image.
    • Storage: The image is stored in a public Amazon S3 bucket, which is accessible via Amazon CloudFront for content delivery.
  2. Event Trigger:
    • Service: Amazon CloudWatch Event Rule
    • Function: Scheduled to trigger a Lambda function every hour, ensuring periodic checks of newly uploaded content.
  3. Lambda Function Execution:
    • Language: Python
    • Tasks:
      • File Retrieval: The function scans the S3 bucket to identify and list files uploaded in the last 62 minutes (slightly more than the hourly schedule, so uploads near the boundary are not missed).
      • Content Analysis: For each identified image, the Lambda function invokes AWS Rekognition to analyze the content against predefined moderation labels using the DetectModerationLabels operation (a Python sketch of the handler follows this list).
  4. AWS Rekognition Analysis:
    • Labels Checked:
      • Detected Nudity
      • Violence
      • Gambling
      • Rude Gestures
      • Hate Symbols
      • Drugs & Tobacco
      • Alcohol Use
      • Exposed Buttocks or Anus
      • Explicit Nudity
      • Explicit Sexual Activity
      • Obstructed Intimate Parts
    • Outcome: Determines whether any of the specified labels are present in the image.
  5. Conditional Handling Based on Analysis:
    • If Labels are Detected:
      • File Management: The image is moved from the public S3 bucket to a secure, non-public S3 bucket to prevent further public access. Note that when ACLs are disabled on the bucket (the default for new buckets), you cannot use an object-level ACL to block access to a single object, so moving the file is the reliable option: either into a non-public folder (prefix) of the existing S3 bucket or into another S3 bucket without public access.
      • Notification: An email notification is sent to recognized@example.com using Amazon Simple Email Service (SES), alerting the relevant parties about the detection.
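
Putting the steps above together, a minimal Lambda handler might look like the sketch below. The bucket names, email addresses, confidence threshold, and blocklist of moderation labels are assumptions for illustration; error handling and video support are omitted:

```python
import datetime

import boto3

# Hypothetical resource names, used only for illustration
PUBLIC_BUCKET = "my-public-bucket"
SECURE_BUCKET = "my-secure-bucket"
SENDER = "noreply@example.com"
RECIPIENT = "recognized@example.com"

# Moderation categories from the list above; adjust to match your content policy
BLOCKED_LABELS = {
    "Detected Nudity", "Violence", "Gambling", "Rude Gestures",
    "Hate Symbols", "Drugs & Tobacco", "Alcohol Use",
    "Exposed Buttocks or Anus", "Explicit Nudity",
    "Explicit Sexual Activity", "Obstructed Intimate Parts",
}

s3 = boto3.client("s3")
rekognition = boto3.client("rekognition")
ses = boto3.client("ses")


def recent_keys(bucket, minutes=62):
    """Yield keys of objects uploaded within the last `minutes` minutes."""
    cutoff = datetime.datetime.now(datetime.timezone.utc) - datetime.timedelta(minutes=minutes)
    paginator = s3.get_paginator("list_objects_v2")
    for page in paginator.paginate(Bucket=bucket):
        for obj in page.get("Contents", []):
            if obj["LastModified"] >= cutoff:
                yield obj["Key"]


def flagged_labels(bucket, key):
    """Return the blocklisted moderation labels detected in the image, if any."""
    response = rekognition.detect_moderation_labels(
        Image={"S3Object": {"Bucket": bucket, "Name": key}},
        MinConfidence=80,  # assumed threshold; tune for your use case
    )
    names = set()
    for label in response["ModerationLabels"]:
        names.add(label["Name"])
        if label.get("ParentName"):
            names.add(label["ParentName"])
    return names & BLOCKED_LABELS


def quarantine(key):
    """Move the object from the public bucket into the secure bucket."""
    s3.copy_object(
        Bucket=SECURE_BUCKET,
        Key=key,
        CopySource={"Bucket": PUBLIC_BUCKET, "Key": key},
    )
    s3.delete_object(Bucket=PUBLIC_BUCKET, Key=key)


def notify(key, labels):
    """Send an email notification about the flagged image via SES."""
    ses.send_email(
        Source=SENDER,
        Destination={"ToAddresses": [RECIPIENT]},
        Message={
            "Subject": {"Data": f"Inappropriate content detected: {key}"},
            "Body": {"Text": {"Data": f"Detected labels: {', '.join(sorted(labels))}"}},
        },
    )


def lambda_handler(event, context):
    for key in recent_keys(PUBLIC_BUCKET):
        # Rekognition image APIs only accept JPEG and PNG objects
        if not key.lower().endswith((".jpg", ".jpeg", ".png")):
            continue
        labels = flagged_labels(PUBLIC_BUCKET, key)
        if labels:
            quarantine(key)
            notify(key, labels)
```

The 62-minute window deliberately overlaps the hourly schedule, so an object uploaded right around a trigger time is picked up by at least one run; an object that was already moved simply no longer appears in the public bucket listing.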

Speaking of the full list of labels that can be detected:

Customers can download the list of supported labels and object bounding boxes from the documentation page or from the 'Label detection' tab of the Amazon Rekognition Console. In addition, on the Rekognition console, customers can use a search bar to easily check whether their label is already supported. Using the same interface, customers can request new labels that they would like Amazon Rekognition to support, or provide any other product feedback.

External links:

  1. Analyzing images stored in an Amazon S3 bucket
  2. Code examples for Amazon Rekognition
