
Michael Levan

Optimize the Kubernetes Dev Experience By Creating Silos

Are silos bad?

Is abstraction bad?

What’s too much and too little?

These are questions that engineers and leadership teams alike have been asking themselves for years. For better or for worse, there’s no absolute answer.

In this blog post, we’ll get as close to an answer as possible.

Silos And Less Abstraction Are A Good Thing

Tech has reached a point where two things have been completely blown out of proportion:

  1. Abstraction
  2. Silos

Abstraction has turned from “remove remedial tasks” into “magically take away all the work that’s needed.” Silos went from “certain people are experts in their own realm” to “break it all down and let everyone work on everything.” This thinking has been going strong since about 2015-2016, and we’re still seeing how detrimental it can be to organizations, engineers, teams, and everyone else involved.

Instead of thinking about how to abstract layers away from developers to make their lives easier or bring them into the fold, we should be giving developers and other engineers (IT, cyber, QA, DevOps, etc.) easier methods to interact with the systems/platforms/infrastructure that already exist.

In the cloud-native realm, a lot of organizations and vendors have tried “selling abstraction,” but it never truly works as intended or to its full extent, because regardless of the abstraction, there always needs to be an “expert” who understands what’s happening under the hood.

The same rules apply to silos. There has always been talk of removing silos, and although it works great in theory, it doesn’t work great in practice. Silos aren’t a bad thing. We need experts in every area. We need the engineer to call when things go bonkers and nobody else can figure it out. Don’t confuse this with having one person on the team who knows everything while everyone else knows nothing, or with throwing work over the fence because people don’t want to deal with it. That’s not a silo. A silo is simply a set of engineers who are experts in their particular realm, and that’s a good thing.

Not Everyone Needs To Be A Kubernetes Expert

The majority of engineers are constantly told that there’s some tool, software, or managed service that makes Kubernetes easier. For every engineer who hears something along those lines, there’s an engineer that’s just learning Kubernetes.

This is not the “is Kubernetes hard or not” debate. Every single thing in technology is hard until you know it; once you know it, it doesn’t seem as hard. The thing is that Kubernetes as a platform is incredibly large, so not everyone has the time or opportunity to learn it all, which is why we see so much of the “this thing makes Kubernetes easier” messaging.

Much like what was discussed in the previous section, silos and abstractions should only exist in a certain capacity.

Not every engineer needs to be an expert in Kubernetes, but that’s not because of one tool or one vendor or one solution. It’s a combination of available experts, just enough silos, and abstraction that makes sense.

In the next three sections, you’ll learn about three methods that can help you strike that balance.

Method Number 1 (The Tool Solution): ArgoCD

  1. First, add the Helm repo for ArgoCD.
helm repo add argo https://argoproj.github.io/argo-helm
  2. Next, deploy ArgoCD with Helm.

If you’re not running three (3) or more Worker Nodes, run the below (this is non-HA):

helm install argocd -n argocd argo/argo-cd --create-namespace


If you’re running three (3) or more Worker Nodes, run the below (this is HA):

helm install argocd -n argocd argo/argo-cd \
--set redis-ha.enabled=true \
--set controller.replicas=1 \
--set server.autoscaling.enabled=true \
--set server.autoscaling.minReplicas=2 \
--set repoServer.autoscaling.enabled=true \
--set repoServer.autoscaling.minReplicas=2 \
--set applicationSet.replicaCount=2 \
--set server.service.type=LoadBalancer \
--create-namespace
  3. Get the ArgoCD password (you should change this in production).
kubectl get secret -n argocd argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d
  4. Access the dashboard.
kubectl port-forward -n argocd service/argocd-server 8080:80

You can now give developers/engineers access to the ArgoCD UI, which can almost act as an IDP in the sense of deploying application stacks, syncing them, and seeing whether they’re running or not.
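For example, once the UI is up, deploying an application stack is just a matter of pointing ArgoCD at a Git repo. Below is a minimal sketch of an Application manifest applied with kubectl; the repo URL, path, and target namespace are placeholders, so swap in your own.

kubectl apply -n argocd -f - <<'EOF'
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: sample-app
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/your-org/your-repo   # placeholder Git repo
    targetRevision: HEAD
    path: manifests                                   # placeholder path to the k8s manifests
  destination:
    server: https://kubernetes.default.svc
    namespace: sample-app
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
    syncOptions:
      - CreateNamespace=true
EOF

Once applied, the app shows up in the UI, where developers can sync it and watch its health without ever touching the cluster directly.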

[Screenshot: the ArgoCD UI]

This is the “tool” method, and of course, you cannot throw tools at engineers and expect problems to go away. What it does give you, though, is an “easy-ish-to-manage” platform.

Method Number 2 (The Abstraction Solution): Serverless Orchestration

There are a lot of different “Serverless Orchestration” methods right now. Let’s split them up by major cloud.

AWS

  1. Elastic Container Service
  2. EKS with Fargate

Azure

  1. Azure Container Instances
  2. Azure Container Apps

GCP

  1. Google Cloud Run (runs on top of Borg).
  2. GKE Autopilot

There’s a lot to choose from, and the options do differ from one another. Serverless Orchestration (or Serverless Kubernetes) takes things a step further than Managed Kubernetes Services. Managed Kubernetes removes the need to manage Control Planes aside from backups, Etcd encryption, and updates to the Kubernetes version; you still, however, have to manage Worker Nodes. With Serverless Orchestration/Kubernetes, the Worker Nodes are managed for you as well. It’s a true “Serverless” experience in the sense that there isn’t any infrastructure for you to manage.
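As a quick sketch of what “no Worker Nodes to manage” looks like in practice, here’s roughly how you’d stand up a GKE Autopilot cluster and deploy to it (the cluster name and region below are placeholders):

# Create a GKE Autopilot cluster; there are no node pools to size, patch, or upgrade.
gcloud container clusters create-auto demo-autopilot --region us-central1

# Fetch credentials and deploy a workload; GKE provisions capacity behind the scenes.
gcloud container clusters get-credentials demo-autopilot --region us-central1
kubectl create deployment nginx --image=nginx
kubectl get nodes   # nodes appear and scale as workloads demand them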

The only downside right now is that some third-party tools/addons don’t work on it. For example, Istio does not work on EKS Fargate; you can only use the internal AWS service mesh solution. Istio does work on GKE Autopilot, though. As of right now, you’re somewhat “locked” into the cloud provider in terms of third-party tools and addons, but hopefully that’ll change at some point.

💡 Always test what you want to deploy before assuming it’ll work on Serverless Orchestration/Kubernetes. I tested a Kubeflow installation on GKE Autopilot and it didn’t work. For whatever reason, it looked like the Worker Nodes couldn’t scale up fast enough, or I wasn’t allowed to use the resources; I’m not entirely sure. I took the same approach on regular GKE and it worked just fine.

Always test.
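A lightweight way to do that is to run a small, representative workload and see whether it actually schedules before committing to the platform. Something along these lines works as a sketch (the image, replica count, and resource requests are just examples):

# Deploy a small test workload with explicit resource requests.
kubectl create deployment smoke-test --image=nginx --replicas=3
kubectl set resources deployment smoke-test --requests=cpu=500m,memory=512Mi

# See whether it becomes available within a reasonable window.
kubectl rollout status deployment/smoke-test --timeout=5m

# If it times out, check why the Pods aren’t scheduling or scaling, then clean up.
kubectl describe pods -l app=smoke-test
kubectl delete deployment smoke-test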

Method Number 3 (The Team Topology Solution): Platform Engineering

The third and arguably most important method is Platform Engineering. Luckily, methods 1 and 2 both fit within it.

Platform Engineering has three primary goals:

  1. Engineer a great product.
  2. Think about what you’re engineering with a product mindset.
  3. Engineer a platform that the developers/engineers want to use, not have to use.

Number 3 is the most important. You can engineer a platform, make it work great, and put a lot of effort into it, but if the developers/engineers using it don’t like it, you’ll have to start from scratch.

The idea with Platform Engineering is that the developers/engineers using the platform don’t have to worry about the backend. They don’t have to deploy Kubernetes, manage where it lives, or think about which capabilities (like ArgoCD) are available. They don’t have to be experts; they just have to be users, which is exactly what they need.

The key to a good Platform Engineering environment is creating a proper interface/interaction. This is where developers/engineers will interact with the platform. It could be some type of Internal Developer Platform (IDP), a CLI-based interface, an API, or literally whatever else the developers/engineers want to use. The key is that it’s what the developers/engineers want.
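As one small illustration of a CLI-based interface, a platform team might hand developers a thin wrapper script so they never touch kubectl or ArgoCD directly. The script below is purely hypothetical (the name, arguments, and conventions are assumptions, not a prescribed standard), and it assumes the ArgoCD CLI has already been logged in by the platform team:

#!/usr/bin/env bash
# deploy.sh -- hypothetical developer-facing wrapper around the platform.
# Usage: ./deploy.sh <app-name> <git-repo-url> [environment]
set -euo pipefail

APP_NAME="$1"
REPO_URL="$2"
ENVIRONMENT="${3:-dev}"

# Register the app with ArgoCD; the platform decides the cluster, project, and sync policy.
argocd app create "$APP_NAME" \
  --repo "$REPO_URL" \
  --path "manifests/$ENVIRONMENT" \
  --dest-server https://kubernetes.default.svc \
  --dest-namespace "$APP_NAME-$ENVIRONMENT" \
  --sync-policy automated \
  --upsert

# Kick off a sync and report the status back to the developer.
argocd app sync "$APP_NAME"
argocd app get "$APP_NAME"

The developer only ever sees “deploy my app”; everything underneath stays with the platform team.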

Closing Thoughts

We’ve reached a point in engineering where you typically see one of two things:

  1. Everyone is supposed to know and do everything.
  2. Tools claim there’s a ton of abstraction, but there’s always a catch.

The main concern is that number 1 cannot be expected of everyone. Sure, there are engineers who know how to do a lot, or at least a little bit of everything. It can’t be the expectation, though, because that’s not how everyone’s mind works from a psychology perspective. Typically, you’ll see people who are really good at one thing or decent at many things, but you won’t find a ton of people who are really good at everything, and that’s totally fine.

Number 2 is simply not a good method of implementation or explanation at this point. Engineers are constantly promised “single pane,” “more abstraction,” and “easier,” but there is always a catch, and it’s never as easy as anyone says. Engineers must be prepared for that.
