Matthew Chigira for Scout APM

Container Orchestration in 2019

This post originally appeared on the Scout blog.

How are you deploying your applications in 2019? Are you using containers yet? According to recent research, over 80% of you are. If you are within this group, were you initially sold on the idea of containers, only to find that in reality the complexity involved with this approach makes it a difficult trade-off to justify? The community is aware of this and has come up with a remedy to ease the pain, and it’s called container orchestration. So whether you are using containers or not, let’s take a closer look at container orchestration and find out what you need, what it’s used for and who should be using it.


©Portworx 2018 Container Adoption Survey

What is Container Orchestration?

These days the container world is dominated by Docker. In fact, most people use the terms Docker and container synonymously even though containers have been around a lot longer than Docker itself! We covered the basics of what Docker is in a previous blog post, and showed you how to get your Rails apps up and running with Docker in another post.

Container orchestration, then, is the process of managing the complete lifecycle of containers in an automated fashion. We’re talking about deployment, provisioning of hosts, scaling, health monitoring, resource sharing, load balancing and so on. All of these individual tasks associated with managing containers pile up as your project grows and, before you know it, you’ve got quite a complex problem. Perhaps your application is made up of many different interconnected components, each of which lives inside its own container. When you have hundreds of containers and multiple nodes, how do you safely deploy and keep everything working nicely together? That is the challenge that container orchestration solutions are trying to solve.

The orchestrator sits in the middle of this complex environment and is responsible for dynamically assigning work to the nodes it has available. It automates and streamlines many essential tasks associated with containers so that developers get a seamless experience.

The real beauty of container orchestration is that by using containers you can take care of your application’s concerns in a flexible way that is specific to your project’s needs, but then you can host them in any environment you want, such as Google Cloud Platform, Amazon Web Services or Microsoft Azure. You don’t have to feel constrained by an all-in-one PaaS solution which doesn’t fit your company’s individual needs. You are essentially creating your own custom platform and then pushing it out to a hosting solution. Pretty interesting, right? So let’s take a look at the most popular container orchestration tools around today and see what they can do for you.

Kubernetes vs. Docker Swarm vs. Apache Mesos

The three major container orchestration solutions in 2019 are: the popular open-source Kubernetes platform from Google, Docker’s own system called Docker Swarm, and Mesos by Apache.

Which one is the best and what is the difference? Well, the short answer is that Kubernetes is probably the one to go with for most scenarios, and its popularity shows no sign of slowing down any time soon. Docker Swarm, on the other hand, is the easiest to get started with, whereas Apache Mesos offers the most flexibility and is a good choice if you have a complex mix of legacy systems and containers.

Kubernetes

Kubernetes is by far the most popular of these three technologies, due to its fantastic feature set, its backing from Google and the fact that it is open-source. But with great power comes great complexity, which means that the learning curve is steep for newcomers.

Kubernetes builds on top of the Docker ecosystem in a seamless way, so that developers can maintain their Dockerfiles in source control and leave the Kubernetes logistics to the DevOps team to handle. Let’s take a look at some of the terminology that Kubernetes uses, which I think will give you a good sense of its design philosophy.

A node is a physical server or a virtual machine which your Kubernetes system can use. A cluster, then, is a collection of these nodes which make up your entire system; these nodes are the resources that Kubernetes will automatically manage as it sees fit.
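To make this a little more concrete, here is a minimal sketch using the official Kubernetes Python client (it assumes you already have a cluster, a local kubeconfig and the kubernetes package installed); it simply lists the nodes that the cluster has available:

```python
# pip install kubernetes
from kubernetes import client, config

# Load credentials from the local kubeconfig (~/.kube/config).
config.load_kube_config()

v1 = client.CoreV1Api()

# Print each node in the cluster along with its kubelet version.
for node in v1.list_node().items:
    print(node.metadata.name, node.status.node_info.kubelet_version)
```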

One or more Docker containers can be grouped into what is known as a pod, which is managed collectively: the containers in a pod are started and stopped together, as if they were one application.
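As a rough illustration of that idea, here is how you might create a single-container pod with the same Python client; the pod name, label and image below are just placeholders for this example:

```python
# pip install kubernetes
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

# A pod wrapping one container; name, label and image are placeholders.
pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="hello-pod", labels={"app": "hello"}),
    spec=client.V1PodSpec(
        containers=[client.V1Container(name="hello", image="nginx:1.17")]
    ),
)

# Everything inside the pod is scheduled, started and stopped as one unit.
v1.create_namespaced_pod(namespace="default", body=pod)
```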

Then finally we have services, which are conceptually different from pods and nodes (which can go up and down). Instead, a service represents something customer-facing which should always be running, regardless of which resources Kubernetes is physically providing.
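Continuing the sketch above, with the same placeholder names, a service could be created like this; it fronts every pod carrying the app=hello label, wherever those pods happen to be running:

```python
# pip install kubernetes
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

# A service that routes traffic to every pod labelled app=hello,
# regardless of which node those pods are currently scheduled on.
service = client.V1Service(
    metadata=client.V1ObjectMeta(name="hello-svc"),
    spec=client.V1ServiceSpec(
        selector={"app": "hello"},
        ports=[client.V1ServicePort(port=80, target_port=80)],
    ),
)
v1.create_namespaced_service(namespace="default", body=service)
```

The pods behind the service can come and go as Kubernetes reschedules them; clients keep talking to the same stable service.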

Kubernetes is going from strength to strength in the container world and is clearly the most popular container orchestration tool out there. Its tight integration with popular cloud services and its large user base make it a solid choice and investment.

Docker Swarm

If the complexity of Kubernetes puts you off, or you feel like you won’t leverage all of its features, then maybe Docker Swarm would be more suitable for your project. It’s conceptually a lot simpler. Docker Swarm is a separate product from the Docker team that builds on top of Docker and provides container orchestration.

There are some different terms used in Docker Swarm which have similar meanings to Kubernetes terms. For example, Docker Swarm’s equivalent of a Kubernetes cluster is a swarm: a collection of nodes, which are divided into manager nodes and worker nodes. You can think of services as the individual components of your application and tasks as the Docker containers that deliver a given service.
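Here is a small sketch of those ideas using the official Docker SDK for Python; it assumes the local Docker daemon is already a swarm manager, and the service name, image and ports are placeholders:

```python
# pip install docker
import docker

# Connect to the local Docker daemon, assumed to be a swarm manager.
client = docker.from_env()

# A replicated service: Swarm schedules three tasks (containers) across the
# nodes in the swarm and publishes port 8080 for the service.
service = client.services.create(
    "nginx:1.17",
    name="web",
    mode=docker.types.ServiceMode("replicated", replicas=3),
    endpoint_spec=docker.types.EndpointSpec(ports={8080: 80}),
)

print(service.name, service.id)
```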

It is unclear what the future holds for Docker Swarm, as Docker seems to have fully embraced Kubernetes, but this could still be a viable option for you if you don’t require the full feature set of Kubernetes and you are looking to get up and running quicker.

Apache Mesos

The final technology that I want to talk about is Apache Mesos, which is a bit of a different beast from the other two solutions I’ve presented. It’s been around longer and it wasn’t created strictly to manage Docker containers like Kubernetes and Docker Swarm were. Instead, Mesos can be described more generally as a cluster management tool which manages a cluster of physical (or virtual) nodes and assigns workloads to them. These underlying workloads can be anything; they are not limited to Docker containers, which are just one example of a workload that Mesos can manage. In fact, to use Docker containers with Apache Mesos you need an add-on called Marathon; you don’t get this feature out of the box.
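For a rough idea of what that looks like, Marathon exposes a REST API that accepts JSON app definitions; the sketch below (with a placeholder Marathon URL and image) asks it to run two instances of a Docker container on the Mesos cluster:

```python
# pip install requests
import requests

# A minimal Marathon app definition: two instances of a Docker container,
# each with a small slice of CPU and memory. URL and image are placeholders.
app = {
    "id": "/hello",
    "cpus": 0.1,
    "mem": 128,
    "instances": 2,
    "container": {
        "type": "DOCKER",
        "docker": {"image": "nginx:1.17"},
    },
}

resp = requests.post("http://marathon.example.com:8080/v2/apps", json=app)
resp.raise_for_status()
print(resp.json()["id"])
```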

Mesos is definitely complex but it has some great advantages. It’s much more suitable if you are not completely tied into a containerized environment and perhaps have legacy systems and integrated services that you want to manage together with containers.
