Migrating from Amazon ECS to EKS is probably the last thing you want to spend your time on.
Yet, you’re here. So, it’s likely that ECS hasn’t been serving you fully. Or perhaps you’re curious whether you should start planning the migration and how long it will take.
I’ll try to answer most of your questions, starting from whether you should look at EKS at all to some practical migration and EKS management tips for when you’ve made your decision.
Why migrate from ECS to EKS?
Companies that want to go deeper with Kubernetes are usually better off using EKS. Giants like Amazon, HSBC, JP Morgan Chase, and Delivery Hero all use EKS because of the control and flexibility it offers.
1. Portability
While ECS is AWS-proprietary technology, EKS is essentially managed Kubernetes developed and maintained by AWS. EKS workloads are genuinely portable: you can recreate a similar experience in local and development environments using vanilla K8s. So you can probably tell in which scenario you’ll face the risk of cloud vendor lock-in.
If you’re building and running applications in ECS, you might encounter vendor lock-in issues in the long run. If you decide to move to another provider, you’ll have to redefine your entire architecture to match it.
That’s why designing your application to run on EKS gives you more flexibility. The Kubernetes abstraction layer helps you package your containers and move them to another platform quickly. That way, you can run workloads on any other Kubernetes cluster - whether it’s on-prem or with whichever cloud provider offers you the best deal.
On top of that, you can find solutions on the market that allow you to switch between different managed Kubernetes services seamlessly.
Open source and community are two more important points here.
With EKS, you benefit from the huge ecosystem of tooling built on top of Kubernetes, and the community itself is growing rapidly. And you know what that means - you get plenty of support, as many problems already have ready-made solutions.
Open source also allows you to choose your tooling, while in ECS, everything is very opinionated and there’s not much flexibility left at the end of the day.
2. Networking limitations
Amazon ECS only lets you assign an elastic network interface (ENI) to a task in one networking mode: awsvpc. Usually, you get only 8-15 network interfaces per EC2 instance, though ECS also supports higher ENI density on some instance types (as long as you meet specific prerequisites). Even then, you can run at most around 120 tasks per EC2 instance.
In EKS, you get to enjoy greater flexibility in networking. You can share an ENI between multiple pods and place more pods per instance.
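As a rough illustration (assuming the default VPC CNI settings): the EKS-optimized AMIs size pod capacity at roughly ENIs × (IPv4 addresses per ENI − 1) + 2, so an m5.large with 3 ENIs and 10 addresses each comes out to 3 × 9 + 2 = 29 pods - and features like prefix delegation can raise that density further.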
3. Namespaces
Namespaces come in handy because they isolate workloads running in the same Kubernetes cluster. For example, you can have a dev, staging, and production environment in one cluster. They can all share the resources of the cluster.
Trouble is, you can’t use namespaces in ECS. The solution just doesn’t include them as a concept. In contrast to that, EKS allows you to use them just as you would in self-managed Kubernetes.
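To illustrate, here’s a minimal sketch of what those per-environment namespaces look like as Kubernetes manifests (the names are just examples):

```yaml
# Two namespaces sharing one cluster - hypothetical environment names
apiVersion: v1
kind: Namespace
metadata:
  name: staging
---
apiVersion: v1
kind: Namespace
metadata:
  name: production
```

Workloads then land in a given environment simply by setting `metadata.namespace` (or passing `-n <namespace>` to kubectl) - no extra clusters needed.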
4. No configuration flexibility
Many people choose ECS because it’s so simple. But there’s a price to pay for this - namely, limited configuration options. For example, you get no access to cluster nodes, which limits your troubleshooting capabilities.
And if you use ECS with Fargate, prepare for even more limitations. For example, you don’t get the option to easily decouple environment-specific config from your container images for portability as you do in EKS.
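In EKS, that decoupling typically happens through ConfigMaps (and Secrets for sensitive values). A minimal sketch - the name and values below are placeholders:

```yaml
# Environment-specific settings kept outside the container image
apiVersion: v1
kind: ConfigMap
metadata:
  name: web-api-config        # hypothetical name
  namespace: staging
data:
  DATABASE_HOST: staging-db.internal   # placeholder value
  LOG_LEVEL: debug
```

A Deployment can then pull these values in with `envFrom`, so the same image runs unchanged in every environment.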
ECS to EKS glossary
Before migrating from ECS to EKS, you need to become familiar with a few terms that are common to Kubernetes:
- ECS vs. K8s building blocks - the best place to get started: reviewing the building blocks of ECS and Kubernetes will help you understand the differences between the two.
- EKS Worker Node - the EC2 instance that runs your workloads (Pods).
- IaC - Infrastructure as Code - tools that let you define infrastructure in code, which you usually commit to Git repositories. On top of that, you get a Git-like diff output whenever there’s a mismatch between your code and the infrastructure in the cloud. Examples include Pulumi, Terraform, and AWS CloudFormation.
- Helm - the most popular packaging solution in the Kubernetes world.
- ALB - Application Load Balancer.
- NLB - Network Load Balancer.
- Internet Gateway - an internet gateway is a horizontally scaled, redundant, and highly available VPC component that allows communication between your VPC and the internet.
- VPC - Amazon Virtual Private Cloud (Amazon VPC) enables you to launch AWS resources into a virtual network that you've defined. This virtual network closely resembles a traditional network that you'd operate in your own data center, with the benefits of using the scalable infrastructure of AWS.
ECS comes with its own terminology that you’re probably familiar with. Here’s how different ECS concepts translate into the world of EKS.
- ECS Task Definition <> EKS Kubernetes Deployment YAML
- ECS Task <> EKS Kubernetes Pod
- ECS Cluster <> EKS Cluster
If you need more guidance here, some good sources to check out are the ECS Workshop and EKS Workshop.
What you get with EKS
You might not need the pod abstraction for every workload you’re running, but it’s hard to deny that pods offer unparalleled control over placement and resource sharing. That’s really valuable for most service-based architectures.
EKS offers far more flexibility for managing the underlying resources. You can run your clusters on EC2 instances, Fargate, and even on-premises (via EKS Anywhere).
If you’re familiar with Kubernetes and want to get your hands on the flexibility and features it provides, go for EKS.
How to migrate from ECS to EKS
AWS has some good tips on migrating to EKS.
But we found some other things you need to take care of before your ECS to EKS migration.
1. Rewrite ECS Task Definition files to K8s deployment YAMLs
First things first, you need to rewrite your ECS Task Definition files to Kubernetes deployment YAMLs. This part is unavoidable and relates to one of the biggest differences between ECS and EKS (or vanilla Kubernetes).
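To give you a rough idea of what that translation looks like, here’s a minimal Deployment sketch for a hypothetical web-api service - the image, ports, and resource values are placeholders you’d take from your existing task definition:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-api                      # hypothetical service name
  namespace: production
  labels:
    app: web-api
spec:
  replicas: 3                        # roughly the ECS service's desired count
  selector:
    matchLabels:
      app: web-api
  template:
    metadata:
      labels:
        app: web-api
    spec:
      containers:
        - name: web-api
          image: 123456789012.dkr.ecr.us-east-1.amazonaws.com/web-api:1.0.0   # placeholder ECR image
          ports:
            - containerPort: 8080
          resources:                 # maps to the task definition's cpu/memory settings
            requests:
              cpu: 250m
              memory: 512Mi
            limits:
              memory: 512Mi
          env:                       # maps to the task definition's environment section
            - name: APP_ENV
              value: production
```

Container definitions map fairly directly; things like IAM task roles, log configuration, and service discovery need their own EKS equivalents (IRSA, a logging agent, Kubernetes Services).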
2. Spin up your environment
You also need to recreate your existing environments on EKS. People typically use Infrastructure as Code (IaC) for that - Terraform, CloudFormation, or Pulumi. Good news: the most popular IaC tools all support EKS.
Assuming that you’re already familiar with Docker and have application images packaged and available for use:
- You could use the features of Terraform (or an alternative IaC tool), such as the Kubernetes provider, to ship your Deployment YAMLs as part of the IaC flow.
- You could also make use of the Helm provider if your applications are packaged with Helm.
- Alternatively, you could use CloudFormation, which can deploy workloads to EKS clusters as well, assuming your applications are packaged with Helm.
You can get the above working in many other ways - each with its pros and cons - but this simple setup is enough for now; a trimmed CloudFormation sketch follows below.
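As a rough illustration of the CloudFormation route, a heavily trimmed sketch of a cluster plus a managed node group might look like this (the role ARNs and subnet IDs are placeholders, and a real template also needs the IAM roles and VPC resources):

```yaml
Resources:
  EksCluster:
    Type: AWS::EKS::Cluster
    Properties:
      Name: my-eks-cluster                        # hypothetical cluster name
      Version: "1.29"
      RoleArn: arn:aws:iam::123456789012:role/eks-cluster-role   # placeholder
      ResourcesVpcConfig:
        SubnetIds:
          - subnet-aaaa1111                       # placeholder subnets
          - subnet-bbbb2222

  EksNodeGroup:
    Type: AWS::EKS::Nodegroup
    Properties:
      ClusterName: !Ref EksCluster
      NodeRole: arn:aws:iam::123456789012:role/eks-node-role     # placeholder
      Subnets:
        - subnet-aaaa1111
        - subnet-bbbb2222
      ScalingConfig:                              # worker node autoscaling bounds
        MinSize: 2
        DesiredSize: 2
        MaxSize: 4
```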
3. Configure your CI/CD pipelines
Your pipelines need updating so that, after building and pushing images as before, they deploy your applications into the EKS cluster instead of ECS.
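What this looks like depends entirely on your CI system. As one hedged example, assuming GitHub Actions and plain kubectl (the IAM role, cluster name, and manifest path are placeholders):

```yaml
name: deploy-to-eks
on:
  push:
    branches: [main]

jobs:
  deploy:
    runs-on: ubuntu-latest
    permissions:
      id-token: write      # needed for OIDC-based AWS authentication
      contents: read
    steps:
      - uses: actions/checkout@v4
      - uses: aws-actions/configure-aws-credentials@v4
        with:
          role-to-assume: arn:aws:iam::123456789012:role/ci-deployer   # placeholder IAM role
          aws-region: us-east-1
      - name: Point kubectl at the cluster
        run: aws eks update-kubeconfig --name my-eks-cluster --region us-east-1
      - name: Deploy manifests
        run: kubectl apply -f k8s/     # placeholder path to your Deployment YAMLs
```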
4. Networking
Both ECS and EKS support similar networking capabilities: ALB & NLB. You can use the basic constructs of networking that you’re currently familiar with in ECS. This article might come in handy if you’re looking for more details about ingress with ALB.
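For example, with the AWS Load Balancer Controller installed in the cluster, an Ingress like the sketch below provisions an ALB in front of a Service (names are placeholders):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-api                      # hypothetical name
  namespace: production
  annotations:
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip    # route straight to pod IPs
spec:
  ingressClassName: alb
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: web-api        # the Service exposing your Deployment
                port:
                  number: 80
```

For an NLB, you’d typically expose the workload through a Service of type LoadBalancer instead.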
5. Run some tests
Run your test suite against your new configuration to make sure that everything works properly.
6. Switch your traffic to the EKS cluster
This might vary depending on your configuration, but to give you an idea of what needs to be done:
- You could update the DNS record for your domain so it points to the load balancer in front of your EKS cluster (see the sketch after this list). That’s how you make sure your application traffic now flows to the EKS cluster.
- For stateful applications, you need to think about other things as well - for example, ensuring a smooth handover so that the EKS-based application, rather than the ECS one, becomes the primary writer to the database.
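If your DNS lives in Route 53, the cutover can be as simple as repointing an alias record at the EKS cluster’s load balancer - a hedged sketch in CloudFormation terms, with every value below being a placeholder:

```yaml
Resources:
  AppDnsRecord:
    Type: AWS::Route53::RecordSet
    Properties:
      HostedZoneName: example.com.                 # placeholder hosted zone
      Name: app.example.com.                       # placeholder record
      Type: A
      AliasTarget:
        DNSName: k8s-prod-webapi-0123456789.us-east-1.elb.amazonaws.com   # the EKS ALB's DNS name
        HostedZoneId: Z35SXDOTRQ7X7K               # the load balancer's canonical hosted zone ID (region-specific)
```

Keeping the old ECS service running until traffic has fully shifted gives you an easy rollback path.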
Make your EKS journey easier with automation
Configuring and running EKS doesn’t have to be hard - that’s what managed Kubernetes automation tools are for.
For example, CAST AI comes with an opinionated Kubernetes implementation that helps you manage all the infrastructure complexities:
- You get to focus on the high-level work that interests you most.
- Creating and managing CAST AI components is easy - you can do it through the API and Terraform to automate infrastructure lifecycle management.
- You get to streamline autoscaling with a headroom policy to accommodate sudden spikes in demand.
- You can also automate spot instance use to cut costs even more - if spot availability shrinks, your workloads are automatically moved to on-demand instances and never go down.
- All in all, you get to benefit from automated cloud cost optimization - the platform chooses the most cost-effective AWS instance types and sizes, and delivers detailed cost reports.
If you’d like to see how this works in real life, here’s how the e-commerce agency Snow Commerce moved to EKS and now rolls out apps seamlessly with a fully automated environment.
Top comments (2)
Thx for writing this article. This is very helpful! I need to migrate an application from ECS to EKS.
Some notes on ECS: it's correct that you don't have namespaces in ECS, but you can create multiple clusters very easily and quickly, and it's super cheap. The fact that you have less control over the underlying resources is actually an advantage for me, not a disadvantage. I'm using ECS + Fargate and I love it. I don't care about the underlying resources and I love the simplicity!
Vendor lock-in vs. low infrastructure costs is another point - one has to decide which is more important. For me, low infrastructure cost was more important, which is why I decided on ECS and all the other native AWS services.
The first sentence of this article should be the TLDR