I will try to answer most of your questions around DevOps and Kubernetes. I have now been working with K8s for almost 2 years and have nothing but love for it.
Top comments (40)
Hi,
I am currently learning Kubernetes in my lab, and I would be interested in hearing what you think about my "Getting Started with Kubernetes (at home)" series of posts and my related documentation on GitLab.
Also, what is your stance on Helm? I find it a lot easier to manage Helm charts than raw Kubernetes manifests, and I tend to gravitate towards a Helm-based deployment for most things when possible. I feel like it is the future of software deployment on Kubernetes.
Thanks!
I’ll take a look at the link you posted, but yeah, I work 100% with Helm, all tied into CI/CD. It’s much easier for managing hundreds of apps and deployments that way. Some also choose Kustomize. A rough sketch of the CI hookup is below.
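Since you mentioned GitLab, here is a minimal sketch of what that hookup can look like. The chart path, release name, and namespace are hypothetical, and it assumes the runner already has credentials for the cluster:

```yaml
# .gitlab-ci.yml (deploy stage only) -- hypothetical names throughout
deploy:
  stage: deploy
  image: alpine/helm:3.14.0        # public Helm client image
  script:
    # Install or upgrade the release from the chart in this repo,
    # pinning the image tag to the commit that triggered the pipeline.
    - >
      helm upgrade --install my-app ./chart
      --namespace my-app --create-namespace
      --set image.tag=$CI_COMMIT_SHORT_SHA
  environment: production
  only:
    - main
```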
Hi Joe, thanks for this good initiative.
I know that Kubernetes has been designed with clustering/scalability in mind.
How do you manage adding more hosts to handle more load on your applications?
It seems trivial, but I would be happy to get some real-life feedback about this.
Thanks!
I’m on my phone, so don’t expect much more than this:
K8s can be installed with the cluster autoscaler (github.com/kubernetes/autoscaler). In real life, people on cloud providers set up their autoscaler so that most of the autoscaling is driven by metrics and by how the pods are configured. Look up the Kubernetes HPA (horizontal pod autoscaling) and its vertical counterpart.
Example: each node can hold only 40 pods. Your deployment kicks in and asks for an additional 300 pods. The autoscaler then kicks in and creates worker nodes accordingly.
The same goes for metrics like CPU/RAM, which is why it’s important to set boundaries on namespace resources. See the sketch below.
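To make that concrete, here is a minimal sketch of an HPA plus a namespace quota. The names, namespace, and numbers are made up for illustration, and it assumes a metrics server is running and the autoscaling/v2 API is available:

```yaml
# hpa.yaml -- scale the (hypothetical) "web" Deployment on CPU usage
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web
  namespace: demo
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web
  minReplicas: 3
  maxReplicas: 300          # the cluster autoscaler adds nodes to fit these
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
---
# quota.yaml -- the "boundaries on namespace resources" mentioned above
apiVersion: v1
kind: ResourceQuota
metadata:
  name: demo-quota
  namespace: demo
spec:
  hard:
    requests.cpu: "20"
    requests.memory: 40Gi
    limits.cpu: "40"
    limits.memory: 80Gi
```

With pod resource requests bounded like this, the cluster autoscaler can do the node math from the example above.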
Hope that answers the question; if not, I’ll give some additional info when I get on my laptop.
So it seems that Kubernetes is optimized for public clouds, then?
How do you develop containers on your local machine? I think Kubernetes is too big a platform for writing application code while developing containers.
Minikube :)
Are you looking to develop a standalone container, or a container that would work with K8s, say one you integrate with Helm?
If it’s just about working with containers, then plain Docker on your local machine is sufficient. But as Gareth mentions below, Minikube is a good way to start locally if you want a feel for how things would work on your K8s infrastructure. There are also lightweight K8s distributions; when I get off the phone I’ll send you a link.
How did I develop?
Well, early on when I started, I was using Minikube because I did not want to mess around with the extras of a lab K8s setup, but now that it’s all pretty much templated and we have multi-cluster environments, I just use K8s directly.
Update: Lightweight K8s k3s.io/
Thank you so much!
Hi, I'm a sysadmin working with traditional VM clusters. In your opinion, what would be the biggest selling point of moving towards containers and later on to K8s? And what would be the biggest drawback?
Thanks!
Nicolo,
It really all depends on your business and the applications you are working on. For sysadmins, for example: instead of having some VM with a full-blown OS like Ubuntu 18.04 that only does minimal jobs like cron jobs or parsing data, such things can be converted into containers, which are easier to manage than patching, upgrading, and installing different tools on Ubuntu, plus maintaining the applications tied to that VM. For that alone I think it's a good selling point, because you are taking away the OS layer and doing only what your app should be doing. And then there is something like Lambda functions, where it gets even better: you run just the code instead of a container. For the cron-job case, something like the sketch below.
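A minimal sketch of that conversion might look like this. The job name, schedule, image, and script are hypothetical, and it assumes a cluster recent enough to have the batch/v1 CronJob API:

```yaml
# cronjob.yaml -- a VM crontab entry turned into a K8s CronJob
apiVersion: batch/v1
kind: CronJob
metadata:
  name: nightly-parser              # hypothetical job name
spec:
  schedule: "0 2 * * *"             # same syntax as a crontab entry
  jobTemplate:
    spec:
      template:
        spec:
          containers:
            - name: parser
              image: registry.example.com/parser:1.0   # hypothetical image
              command: ["python", "parse.py"]          # hypothetical script
          restartPolicy: OnFailure   # retry the pod if the job fails
```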
The biggest drawback, I think, is stateful applications and databases. Such things are OK to have on a K8s cluster, and you can even containerize them, but container and orchestration infrastructure is not fully there yet for that, and where it is, it can be unstable and not worth it. So if you are looking for full-fledged speed and minimal latency, a VM cluster or on-prem system is better because you are minimizing the network hops.
Another one is troubleshooting: when shit goes down and you are not experienced, you could be looking into 10-15 different things to figure out what went wrong. On a VM cluster you look at only 4-5 possible points of failure: OS/storage/network/application.
Thanks Joe! I was exploring the container/Docker world in the last few weeks, and I'm glad to see that your opinion sort of resonates with the one I was forming.
I'm already looking at specific cases where containers would be more efficient than the usual VM approach, like small Python apps and such. But, and that's a big but, the majority of what I have to deal with are database clusters, and the ephemeral nature of containers had me thinking that it would be really hard to containerise DB clusters, especially ones with sharding and replicas (take this with a grain of salt because I'm a Docker noob).
Databases in containers are still a young fish. Maybe in a few years the stability and speed will come up to par, but nothing will beat the latency and transaction performance of a self-hosted or cloud-hosted server. While I do run some apps with containerized databases like MongoDB, I have been burned a few times and learned to simply “don’t touch it” lol.
Yeah, if you can offload some of your hosts/apps into containers, that’s a great start.
Resources for learning K8s?
There are tons of resources out there. Lately I just go to kubernetes.io to get specs, but for someone who is just starting out, that site is a bit overwhelming.
But if you want all of the above, this Google Sheet contains everything you need to know:
kubernetes resources google sheet
How long does it take to get the nuts and bolts of K8s and be cozy with it?
It depends on how much time you devote to it and what types of things you want to do. I can tell you that the deeper you go into it, the more complex it gets.
I felt comfortable within about 5-6 months, but even now, at close to 2 years of working with it, you still find things that turn out to be more useful than what you were accustomed to.
In general I would say 2-3 hrs a day for 3-4 months gets you to the point where you can create a K8s cluster with no problem, install containers, and so on. But getting to the point where you work with multi-cluster setups, horizontal/vertical scaling, network policies, etc., just takes more time.
Thanks. That gives me a good idea.
I've been struggling with deciding whether to move to K8s for the past few months. We have +/- 6 people on the full product team, with basically me and two other developers doing most of the operations.
My primary reason to move would be to easily sunset/cordon, upgrade, and remove servers. However, I find that the whole management of K8s just takes too much mental energy to "get right": authentication, authorization, namespaces, 100 YAML files, secrets, configsets, etc. I'm a bit overwhelmed by it all.
Maybe my question would be: is K8s reasonable for such a small team, and if so, how do you suggest we organize our cluster without needing to hire a dedicated sysadmin?
Well, my team started with about 4-5 people; now we are at 8-9, and we manage different teams and onboard them daily from other platforms to K8s. I can tell you it is not an easy start, that’s for sure. At the very beginning getting it going seems easy, but once you start with automation and the various hocus-pocus things, it gets heavy. Our team is very diverse in K8s knowledge; each of us is strongest in certain parts of K8s, but nobody, I would say, is a K8s guru, because let’s face it, it’s tough to be “that” person.
So I think, with the right mindset: start by sketching out infrastructure for a dev or lab environment, say 1 master and a few small worker nodes. Then hook it up to CI/CD and see if you can get some app to run via Helm or Kustomize (see the sketch below). Once you get the hang of it, dig deeper into autoscaling vertically/horizontally, tightening security with something like Calico, and canary deploys, which can be done right with Istio.
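As a taste of the Kustomize route, here is a minimal sketch of a base plus a dev overlay. The directory layout and the Deployment name are made up for illustration:

```yaml
# base/kustomization.yaml -- shared manifests for every environment
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - deployment.yaml
  - service.yaml
---
# overlays/dev/kustomization.yaml -- dev tweaks layered on the base
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../../base
namespace: dev
replicas:
  - name: web        # hypothetical Deployment name from the base
    count: 1         # scale dev down to a single replica
```

Then `kubectl apply -k overlays/dev` renders and applies the result.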
I mean, I could go on and on, but everyone seems to do things differently because K8s is flexible enough for any team. As long as it’s not just you and 8 devs who want you to finish it by Friday.
What do you think about the release pace of K8s in relation to cloud vendors' support, specifically Azure? Is it kind of fast-paced? Should we be worried that our current version of K8s will be phased out too soon?
Yes and no. I am assuming you are talking about K8s managed by cloud providers vs. hosting and managing it yourself?
I think the release pace is actually good, because you know that things are being fixed and new features are being added. That said, from personal experience, being on release 1.9.x vs. 1.13.x is not good, because there is a large gap to cover in upgrades, and most important is the security. As for cloud vendors, being one or two releases behind is somewhat painful, but at least you know that the releases they put out are stable and work with their infrastructure. But say you have self-managed K8s: testing everything against your app and infrastructure could take you a month or two, especially if you are working with many layers of networking and storage infrastructure.
Hope that helps. Feel free to ask more if this was not explained well enough, as I can go into greater detail too :)
I'm a full-stack dev, I've always enjoyed working with servers, and I just started learning DevOps. As someone in DevOps yourself, what path would you recommend, and what are some general best practices that can help in the long run?
Thanks ❤️
I think this post I wrote pretty much sums it up:
DevOps RoadMap
Joe, can you help me clear up these doubts of mine?
Suppose I have multiple containers, with MySQL among them. On a single node I can easily create multiple MySQL containers using the same container volume. But what if we want the same thing to work across multiple nodes in a cluster? What is the best way to maintain the database container volume across all the nodes?
What do we do in such a scenario? Thanks!
Which cloud provider do you use? Because if you use network-backed storage, the volume is accessible from any node, so it does not matter where the container itself lands, unless you pin it with labels.
Say, in my example of an ELK stack: the EBS volume is created and attached to the container as a volume. If the Elasticsearch container goes down and comes back up on another host, the volume is still available, because the EBS volume just gets re-attached there. Roughly like the sketch below.
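A minimal sketch of that pattern, assuming an EBS-backed StorageClass named "gp2" exists in the cluster; the claim name and size are hypothetical:

```yaml
# pvc.yaml -- dynamically provision an EBS-backed volume for Elasticsearch
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: es-data            # hypothetical claim name
spec:
  accessModes:
    - ReadWriteOnce        # EBS attaches to one node at a time
  storageClassName: gp2    # assumed EBS-backed StorageClass
  resources:
    requests:
      storage: 50Gi
```

If the pod is rescheduled onto another node, the volume is detached and re-attached there, so the data follows the pod.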