DEV Community

Yogesh Sharma

Redefining Possibilities: Choosing between AWS container services for Modern Architectures

In the realm of modern cloud architecture, containerization has emerged as a cornerstone for building scalable, efficient, and agile applications. Within the AWS ecosystem, two prominent services stand out for managing containers: Amazon Elastic Container Service (ECS) and Amazon Elastic Kubernetes Service (EKS). Both offer distinct approaches to container orchestration, catering to diverse sets of requirements and preferences. In this guide, we'll embark on a journey to dissect these two services, exploring their strengths, nuances, and ideal use cases.

Why should you consider containers instead of virtual machines?

Containers and virtual machines are similar concepts, which means you can apply knowledge gained from VMs to the world of containers. Both approaches start from an image to spin up a virtual machine or container. Of course, differences exist between the technologies, but as a mental model it helps to think of containers as lightweight virtual machines.

How often do you hear "but it works on my machine" when talking to developers? It is not easy to create an environment that provides the libraries, frameworks, and runtime environments an application requires. Since 2013, Docker has made the concept of containers popular. As in logistics, a container in software development is a standardized unit that can be easily moved and delivered. In our experience, this simplifies the development process significantly, especially when aiming for continuous deployment, which means shipping every change to test or production systems automatically.

In theory, you spin up a container based on the same image on your local machine, an on-premises server, and in the cloud. Boundaries exist only between UNIX/Linux and Windows, as well as Intel/AMD and ARM processors. In contrast, it is much more complicated to launch an Amazon Machine Image (AMI) on your local machine.

Containers also increase portability. In our opinion, it is much easier to move a containerized workload from on-premises to the cloud or to another cloud provider. But beware of the marketing promises by many vendors: it is still a lot of work to integrate your system with the target infrastructure.

A Dockerfile is a configuration file containing everything needed to build a container image.

Comparing different options to run containers on AWS

It’s important to consider the specific requirements of your application and use case:

  • EKS is a managed Kubernetes service that allows you to easily deploy, manage, and scale containerized applications on the AWS cloud. It provides a fully managed Kubernetes control plane and integrates with other AWS services such as Elastic Load Balancing and Amazon RDS. If you are already familiar with Kubernetes and want to leverage its flexibility and scalability, EKS is a good choice.
  • ECS is a container orchestration service that allows you to deploy, manage, and scale containerized applications on the AWS cloud. It provides a simple and easy-to-use interface for deploying and managing containerized applications and integrates with other AWS services such as Elastic Load Balancing and Amazon RDS. If you are looking for a simpler service that is tightly integrated with the AWS ecosystem, ECS is a good choice.
  • ROSA (Red Hat OpenShift Service on AWS) is a fully managed service that allows you to run Red Hat OpenShift clusters on the AWS cloud. It provides a simple and easy way to deploy, manage, and scale containerized applications using Kubernetes and Red Hat OpenShift. It also provides integration with other AWS services such as Amazon EBS for storage and Amazon VPC for networking. If you are looking for a fully managed service that provides the full functionality of Red Hat OpenShift and benefits from the security, compliance, and global infrastructure of AWS, ROSA is a good choice.
|  | ECS | EKS |
| --- | --- | --- |
| Fully managed or self-managed? | Fully managed container orchestration service | Fully managed Kubernetes control plane only |
| Is it open source? | Proprietary AWS product with limited extensibility | Built on open-source Kubernetes APIs and native tooling, with many supported add-ons boosting extensibility |
| Support included? | Support plans, documentation, and training available from AWS | Control-plane-level support from AWS; community-based support for everything else related to Kubernetes |
| AWS Fargate compatibility | Yes | Yes |
| Smallest compute unit | Task | Pod |
| Ease of use | No control plane to operate, and designed to run with minimal resource provisioning. After initial cluster setup, developers can configure and deploy tasks directly from the console | Requires a fair amount of configuration and experience to set up and run; Kubernetes has a steep learning curve |
| Monitoring | Built-in monitoring with Amazon CloudWatch and Container Insights; supports third-party monitoring tools | Built-in monitoring with Amazon CloudWatch, Container Insights, and CloudTrail; open-source tools such as Prometheus and Grafana can be installed in the cluster |
| Multi-cloud compatibility | No, AWS only | Yes |
| High availability | Yes (spread the cluster's EC2 instances across three Availability Zones) | Yes (Amazon EKS runs and scales the Kubernetes control plane across multiple Availability Zones) |
| Cost | Pay as you use; no fixed cost per cluster | $0.10 per hour per running cluster, about $72 for a month of continuous operation |

Recommendation: If you're new to container orchestration and deployment, ECS is a good place to start because it is less expensive and requires little or no expertise in managing Kubernetes clusters. If you are looking for multi-cloud capabilities and portability of containerized workloads, EKS is the preferred choice because it doesn't lock you into the Amazon cloud; it also provides additional features, more customization options, and fine-grained control over containerized applications.

Let's Build & Deploy Apps on EKS

Now that we have some understanding of the different container services, let's walk through the steps to deploy a sample app on EKS:

  • Install and configure eksctl, a command-line tool for creating and managing EKS clusters:

    curl --silent --location "https://github.com/weaveworks/eksctl/releases/latest/download/eksctl_$(uname -s)_amd64.tar.gz" | tar xz -C /tmp
    sudo mv -v /tmp/eksctl /usr/local/bin
    eksctl version
    # Enable eksctl bash-completion
    eksctl completion bash >> ~/.bash_completion
    . /etc/profile.d/bash_completion.sh
    . ~/.bash_completion
    # Enable kubectl bash_completion
    kubectl completion bash >>  ~/.bash_completion
    . /etc/profile.d/bash_completion.sh
    . ~/.bash_completion
    # Verify the installation:
    eksctl version
    
  • Create an Amazon EKS cluster by using eksctl. The following command creates an EKS cluster named my-eks-cluster in the us-east-1 region, with a managed node group of two t2.micro instances (adjust the instance type and node counts as needed; very small instance types support only a handful of pods per node).

    eksctl create cluster \
    --name my-eks-cluster \
    --version 1.24 \
    --region us-east-1 \
    --nodegroup-name standard-workers \
    --node-type t2.micro \
    --nodes 2 \
    --nodes-min 1 \
    --nodes-max 3 \
    --managed
    
  • Verify the Amazon EKS cluster- To verify that the cluster is created and that you can connect to it, run the kubectl get nodes command.

  • Create an Amazon ECR repository (for example: aws ecr create-repository --repository-name hello-world --region us-east-1).

  • Create a project in any IDE (the app code structure):

    Fig 1. Sample Python project directory structure

    Fig 2. Sample Java project directory structure
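Before the Dockerfile, it helps to see the app itself. Here is a minimal sketch of what app.py could contain. This is an assumption, not the article's original code: it uses only the Python standard library (so requirements.txt can stay empty), and your real app can be anything that serves HTTP.

```python
# app.py -- hypothetical minimal web app for the sample project
# (stdlib-only, so requirements.txt can stay empty).
import os
from http.server import BaseHTTPRequestHandler, HTTPServer

NAME = os.environ.get("NAME", "World")  # populated by ENV NAME in the Dockerfile

def greeting(name: str) -> str:
    return f"Hello, {name}!"

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = greeting(NAME).encode()
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, fmt, *args):
        pass  # keep container logs quiet

def make_server(port: int = 80) -> HTTPServer:
    return HTTPServer(("0.0.0.0", port), Handler)
```

In the real app.py you would end the file with make_server(80).serve_forever() under an if __name__ == "__main__" guard, so that the Dockerfile's CMD ["python", "app.py"] starts the server on port 80, matching EXPOSE 80 and the containerPort used later.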

  • Create a Dockerfile to build the container image, for example:

    # Use an official Python runtime as a parent image
    FROM python:3.9-slim
    # Set the working directory in the container
    WORKDIR /app
    # Copy the requirements file into the container at /app
    COPY requirements.txt /app/
    # Install any needed packages specified in requirements.txt
    RUN pip install --no-cache-dir -r requirements.txt
    # Copy the current directory contents into the container at /app
    COPY . /app
    # Define environment variable
    ENV NAME World
    # Make port 80 available to the world outside this container
    EXPOSE 80
    # Define the command to run your application
    CMD ["python", "app.py"]
    
  • Build and push the Docker image using the following commands:

    aws ecr get-login-password --region <region> | docker login --username AWS --password-stdin <account_number>.dkr.ecr.<region>.amazonaws.com
    docker buildx build --platform linux/amd64 -t hello-world:v1 .
    docker tag hello-world:v1 <account_number>.dkr.ecr.<region>.amazonaws.com/<repository_name>:v1
    docker push <account_number>.dkr.ecr.<region>.amazonaws.com/<repository_name>:v1
    
  • Deploy the app microservice by creating the deployment file. Create a YAML file called deployment.yaml based on this example:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: sample-app-deployment
    spec:
      replicas: 3
      selector:
        matchLabels:
          app: sample-app
      template:
        metadata:
          labels:
            app: sample-app
        spec:
          containers:
            - name: sample-app
              image: your-image-repository/sample-app:latest
              ports:
                - containerPort: 80
    
  • Deploy the microservices on the Amazon EKS cluster: run the kubectl apply -f deployment.yaml command.

  • Verify the status of the pods: run the kubectl get pods command and wait until the pods are Running and Ready (kubectl wait --for=condition=Ready pod -l app=sample-app --timeout=120s does this in one step).

  • Create a Service: create a file called service.yaml based on this example, then apply it with kubectl apply -f service.yaml.

    apiVersion: v1
    kind: Service
    metadata:
      name: sample-app-service
    spec:
      selector:
        app: sample-app
      ports:
        - protocol: TCP
          port: 80
          targetPort: 80
      type: LoadBalancer
    
  • Deploy the relevant RBAC roles and role bindings as required by the AWS ALB Ingress controller: kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/aws-alb-ingress-controller/v1.1.4/docs/examples/rbac-role.yaml

  • Create an IAM policy named ALBIngressControllerIAMPolicy to allow the ALB Ingress controller to make AWS API calls on your behalf. Note that --policy-document expects a local file or inline JSON rather than a URL, so download the policy document first:

    curl -o iam-policy.json https://raw.githubusercontent.com/kubernetes-sigs/aws-alb-ingress-controller/v1.1.4/docs/examples/iam-policy.json
    aws iam create-policy --policy-name ALBIngressControllerIAMPolicy --policy-document file://iam-policy.json

  • Create a Kubernetes service account and an IAM role (for the pod running the AWS ALB Ingress controller):

    eksctl create iamserviceaccount \
       --cluster=my-eks-cluster \
       --namespace=kube-system \
       --name=alb-ingress-controller \
       --attach-policy-arn=arn:aws:iam::<account_number>:policy/ALBIngressControllerIAMPolicy \
       --override-existing-serviceaccounts \
       --approve
    
  • Install the AWS ALB Ingress controller, substituting your cluster name: curl -sS "https://raw.githubusercontent.com/kubernetes-sigs/aws-alb-ingress-controller/v1.1.4/docs/examples/alb-ingress-controller.yaml" | sed "s/# - --cluster-name=devCluster/- --cluster-name=my-eks-cluster/g" | kubectl apply -f -

  • Create an Ingress (ingress.yaml). The backend references the sample-app-service created above:

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: sample-ingress
      annotations:
        kubernetes.io/ingress.class: alb
        alb.ingress.kubernetes.io/scheme: internet-facing
        alb.ingress.kubernetes.io/target-type: ip
        alb.ingress.kubernetes.io/listen-ports: '[{"HTTP":80}]'
        alb.ingress.kubernetes.io/backend-protocol: HTTP
        alb.ingress.kubernetes.io/load-balancer-name: sample-app-ingress
        # alb.ingress.kubernetes.io/healthcheck-path: /health
    spec:
      rules:
        - http:
            paths:
              - path: /
                pathType: Prefix
                backend:
                  service:
                    name: sample-app-service
                    port:
                      number: 80
    
  • Deploy the ingress to create an Application Load Balancer: run the kubectl apply -f ingress.yaml command.

  • Test and verify the application: run the kubectl get ingress.networking.k8s.io/sample-ingress command and copy the load balancer's DNS name from the ADDRESS field. Then run curl -v <DNS address>.

Let's Build & Deploy Apps on ECS

  • Create an ECS cluster from the AWS Console:

    1. Go to AWS ECS Console.
    2. Click the create cluster button.
    3. Select EC2 Linux + Networking then proceed to the next step.
    4. On the next page, insert the cluster name. I called mine test.
    5. Set Provisioning Model as On-Demand Instance. For EC2 Instance type, select t3a.micro.
    6. Under networking, set the VPC to the default VPC.
    7. Set the Subnets to the first subnet in the dropdown.
    8. Set the Auto-assign public IP to Enabled.
  • Create the task definition JSON:

{
  "family": "sample-app-task",
  "containerDefinitions": [
    {
      "name": "sample-app",
      "image": "your-image-repository/sample-app:latest",
      "memory": 512,
      "cpu": 256,
      "essential": true,
      "portMappings": [
        {
          "containerPort": 80,
          "hostPort": 80
        }
      ]
    }
  ]
}

This JSON defines a task definition for ECS. It specifies a single container named sample-app based on the image your-image-repository/sample-app:latest. The container is configured to use 512 MiB of memory and 256 units of CPU. It exposes port 80 for incoming traffic.
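Before registering, it can help to sanity-check the JSON locally and catch obvious mistakes without a round trip to the API. A small sketch (the checks are illustrative and cover only the fields used above, not the full ECS task-definition schema):

```python
# Hypothetical local sanity check for an ECS task definition
# (covers only the fields used in the example above, not the full schema).
import json

def check_task_definition(td: dict) -> list:
    """Return a list of problems; an empty list means the basics look OK."""
    problems = []
    if not td.get("family"):
        problems.append("missing 'family'")
    containers = td.get("containerDefinitions") or []
    if not containers:
        problems.append("no containerDefinitions")
    for c in containers:
        for key in ("name", "image", "memory", "cpu"):
            if key not in c:
                problems.append("container missing '%s'" % key)
        for pm in c.get("portMappings", []):
            if not 1 <= pm.get("containerPort", 0) <= 65535:
                problems.append("containerPort out of range")
    return problems

task_def = json.loads("""{
  "family": "sample-app-task",
  "containerDefinitions": [
    {"name": "sample-app",
     "image": "your-image-repository/sample-app:latest",
     "memory": 512, "cpu": 256, "essential": true,
     "portMappings": [{"containerPort": 80, "hostPort": 80}]}
  ]
}""")
print(check_task_definition(task_def))  # prints []
```

An empty list means the definition is ready to hand to register-task-definition; anything the sketch reports would also be rejected by ECS.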

  • Use the AWS CLI register-task-definition command to create the task definition:
    aws ecs register-task-definition --cli-input-json file://your-task-definition.json

  • Create the service definition JSON:

{
  "serviceName": "sample-app-service",
  "taskDefinition": "sample-app-task",
  "desiredCount": 3,
  "launchType": "EC2",
  "loadBalancers": [
    {
      "targetGroupArn": "arn:aws:elasticloadbalancing:region:account-id:targetgroup/sample-app-target-group/unique-id",
      "containerName": "sample-app",
      "containerPort": 80
    }
  ],
  "role": "ecsServiceRole",
  "deploymentConfiguration": {
    "maximumPercent": 200,
    "minimumHealthyPercent": 50
  }
}

This JSON defines a service that manages the desired number of tasks based on the sample-app-task definition.
It requests three tasks (desiredCount: 3) and uses the EC2 launch type. It associates the tasks with a load balancer (replace arn:aws:elasticloadbalancing:region:account-id:targetgroup/sample-app-target-group/unique-id with the actual Target Group ARN). The service uses the ecsServiceRole for permissions.
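The deploymentConfiguration values bound how many tasks may run during a rolling update. With desiredCount 3, to my understanding ECS rounds the minimumHealthyPercent lower bound up and the maximumPercent upper bound down, which works out as follows:

```python
# Sketch: what maximumPercent / minimumHealthyPercent mean during a rolling
# deploy (rounding behavior as described in the ECS deployment documentation).
import math

desired = 3
maximum_percent = 200          # upper bound, rounded down
minimum_healthy_percent = 50   # lower bound, rounded up

max_running = math.floor(desired * maximum_percent / 100)
min_healthy = math.ceil(desired * minimum_healthy_percent / 100)
print(max_running, min_healthy)  # prints: 6 2
```

So during a deployment ECS may start up to 6 tasks at once while always keeping at least 2 of the 3 desired tasks healthy, which is what allows in-place rolling updates without downtime.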

  • Use the AWS CLI create-service command to create the service: aws ecs create-service --cli-input-json file://your-service-definition.json

Special Mention

App Runner (a PaaS offering)

App Runner is a Platform as a Service (PaaS) offering for container workloads. You provide a container image bundling a web application, and App Runner takes care of everything else.

You pay for memory but not CPU resources during times when a running container does not process any requests. Let’s look at pricing with an example. Imagine a web application with minimal resource requirements only used from 9 a.m. to 5 p.m., which is eight hours per day. The minimal configuration on App Runner is 1 vCPU and 2 GB memory:

Active hours (hours in which requests are processed):

  • 1 vCPU: $0.064 * 8 * 30 = $15.36 per month
  • 2 GB memory: 2 * $0.007 * 8 * 30 = $3.36 per month

Inactive hours (hours in which no requests are processed):

  • 2 GB memory: 2 * $0.007 * 16 * 30 = $6.72 per month

In total, that's $25.44 per month for the smallest configuration supported by App Runner.
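The arithmetic above is easy to generalize for your own workload. A quick sketch reproducing the numbers (rates as quoted in this example; check the App Runner pricing page for current prices):

```python
# Reproduce the App Runner cost example (rates as quoted above; verify
# current prices at https://aws.amazon.com/apprunner/pricing/).
VCPU_HOUR = 0.064   # $ per vCPU-hour, billed only while active
GB_HOUR = 0.007     # $ per GB-hour, billed while active and while provisioned

vcpus, memory_gb = 1, 2      # smallest App Runner configuration
active_hours = 8 * 30        # 9 a.m. to 5 p.m., 30 days
inactive_hours = 16 * 30

cpu_cost = vcpus * VCPU_HOUR * active_hours            # 15.36
mem_active = memory_gb * GB_HOUR * active_hours        # 3.36
mem_inactive = memory_gb * GB_HOUR * inactive_hours    # 6.72
total = cpu_cost + mem_active + mem_inactive
print(f"${total:.2f} per month")  # prints: $25.44 per month
```

Plugging in your own active-hours estimate is usually the fastest way to compare App Runner against an always-on Fargate task or EC2 instance.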

Never forget to delete your App Runner service to avoid unexpected costs. See “AWS App Runner Pricing” at https://aws.amazon.com/apprunner/pricing/ for more details.

But there are a few important things to consider before using App Runner:

  • App Runner does not come with an SLA yet.
  • Also, comparing costs between the different options is not easy, because different dimensions are used for billing. Roughly speaking, App Runner should be cheap for small workloads with few requests but rather expensive for large workloads with many requests.

App2Container (A2C)

AWS also offers the App2Container (A2C) service, which is a command-line tool that helps migrate and modernize Java and .NET web applications into container format. A2C analyzes and inventories all applications running on physical machines, Amazon EC2 instances, or in the cloud; packages application artifacts and dependencies into container images; configures network ports; and generates the necessary deployment artifacts for Amazon ECS and Amazon EKS.

Amazon ECS is usually a good option for simple use cases. Amazon ECS provides a quick solution that can host containers at scale. For simple workloads, you can be up and running quickly with Amazon ECS. Amazon EKS is a favorable choice due to its active ecosystem and community, uniform open-source APIs, and extensive adaptability. It is capable of managing more intricate use cases and demands, but it may require more effort to learn and utilize effectively.

If you are currently running OpenShift applications on-premises and intend to migrate your application to the cloud, ROSA provides a fully managed Red Hat OpenShift experience on AWS.

In conclusion, the choice among ECS, EKS, and ROSA is not a one-size-fits-all decision. It's about aligning the capabilities of the container service with the unique demands of your modern architecture. With any of these services in your toolkit, you're poised to build and scale modern, cloud-native applications that meet the demands of today's digital landscape.
