How NOT to start with Kubernetes
Don’t use :latest, set resource limits, train the people — DevOps Engineer Christian Heckelmann shares his experience with K8s
The advent of microservices and the cloud has opened up possibilities in software engineering that would previously have been unthinkable. With many companies breaking their monoliths up into lighter, faster services, it was inevitable that we would also need to reinvent the way we deploy those services. Going from one deployment a year, or a month, to multiple deployments a day requires a particular set of technologies, and that's where Kubernetes comes in. But that's also where many begin to fail, especially if they only use it because it's the shiny new thing.
In one of our latest Pure Performance podcast episodes, aptly called “How not to start with Kubernetes”, Brian Wilson and I spoke with Christian Heckelmann about his experiences and lessons learned on his journey with Kubernetes.
Here are the key takeaways from our discussion of things you should avoid doing when you start with Kubernetes.
In summary, do NOT…
- Start without knowing the basics
- Leave it to developers without hiring a professional or consultant
- Just dive into it because planning is a waste of time
- Install a cluster from scratch without using tools
- Build the container straight away in the cluster
- Deploy the monolith
- Let developers use the configuration they want
- Forget about documentation
- Forget about security
- Use Kubernetes because it’s fun and new
Let’s have a deeper look into each one of those topics and learn from Christian’s experience.
Start without knowing the basics
This may sound obvious, but you need to have a basic knowledge of Kubernetes before you get started. We recommend learning about the following concepts:
- Kubernetes objects, e.g., Ingresses, Services, Deployments, and Operators
- kubectl, because operations are much faster from the command line than from a UI
- How traffic flows in the cluster
- How to reach other services within your cluster
- How to monitor deployments
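To make those objects concrete, here is a minimal sketch of a Deployment and a Service working together; all names, the image, and the ports are placeholders, not anything from the podcast:

```yaml
# Hypothetical example: a Deployment running two replicas of a web app,
# exposed inside the cluster through a Service.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-app                   # placeholder name
spec:
  replicas: 2
  selector:
    matchLabels:
      app: web-app
  template:
    metadata:
      labels:
        app: web-app
    spec:
      containers:
        - name: web-app
          image: registry.example.com/web-app:1.0.0   # pinned tag, not :latest
          ports:
            - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: web-app
spec:
  selector:
    app: web-app
  ports:
    - port: 80
      targetPort: 8080
```

Applied with `kubectl apply -f web-app.yaml`, other workloads in the cluster can then reach the app at `http://web-app.<namespace>.svc.cluster.local`, which is exactly the kind of in-cluster traffic flow worth understanding before you start.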
Leave it to developers without hiring a professional or consultant
Just knowing the basic concepts is, however, rarely enough to get really started with Kubernetes.
If you do not have anybody in the company with extensive knowledge on the subject, we recommend bringing in external consultants or sending your employees to training.
We have all made a lot of mistakes simply because we didn’t know better and there were no documented solutions to be found online. And this is a big issue, especially if you are running Kubernetes on-prem.
Just dive into it, planning is a waste of time
Before you start using Kubernetes, think about how you want to deploy your apps, and from there define which tools you may need.
Here are some tips on things to consider:
- How many clusters do you need to provision? If it’s a lot, then you should think about automating as much as possible.
- Do you want to isolate namespace traffic or not? Think about whether you want to use application-based namespaces or environment-based namespaces. For example, if you are using feature branches, you may want application-based namespaces.
- Plan your storage provisioners in collaboration with the storage team. Are you running Kubernetes on-prem? Think about what kind of workload you expect and keep it simple, if possible. Are you running it in the cloud? Use cloud-native provisioners like Rook, or Longhorn from Rancher.
- Do you want to use network policies? Choose the right CNI plugin for your specific case, e.g., Flannel, Calico, or Weave.
- Could your internal networks overlap? Choose the right subnets for the Pod and Service networks, because they cannot be changed afterwards.
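To illustrate the namespace-isolation point above, here is a sketch of a NetworkPolicy that only allows ingress from pods in the same namespace; the namespace name is a placeholder, and this assumes a CNI plugin that actually enforces policies (e.g., Calico):

```yaml
# Hypothetical example: block ingress traffic coming from other namespaces.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-same-namespace-only
  namespace: my-app          # placeholder namespace
spec:
  podSelector: {}            # applies to every pod in this namespace
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector: {}    # a bare podSelector matches only pods in this namespace
```

Note that with Flannel alone this policy would be silently ignored, which is why the CNI choice and the isolation question belong in the same planning discussion.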
Install a cluster from scratch without using tools
This one is related to the previous tip about planning, but it’s important to mention it again. While you plan your Kubernetes cluster, make sure to research which tools are available to make your life easier. Avoid starting from scratch!
Christian admits that had he known about the CNCF landscape from the beginning, it would have been much easier to see which tools exist, instead of finding them by trial and error.
When you want to deploy Kubernetes, there are great provisioners, like Rancher, that let you set up a stable cluster without having deep knowledge. This open-source tool is very well documented, so you can easily find information on how to configure Ingress and load balancing on the cluster.
Try to look for tools that are within the Kubernetes ecosystem, do not rely on “unknown” GitHub repositories with random scripts in them.
We also recommend using tools for specific automations, like certificate management. Do not try to install certificates on your own or add them manually to the cluster, because maintaining them is a big challenge. For this we recommend cert-manager, because it offers a very convenient way to manage certificates and provision them to your services. The same goes for load balancers: cloud providers have a default load balancer that can be provisioned on request, while on-prem you can use tools like MetalLB in combination with an Ingress controller like NGINX or Traefik.
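As a sketch of what that cert-manager automation looks like, a Certificate resource asks an issuer for a TLS certificate and keeps it renewed; the namespace, domain, and issuer name below are placeholders, and this assumes a ClusterIssuer named `letsencrypt-prod` has already been set up:

```yaml
# Hypothetical example: let cert-manager obtain and renew a certificate.
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: example-com-tls
  namespace: my-app               # placeholder namespace
spec:
  secretName: example-com-tls     # Secret where the key pair will be stored
  dnsNames:
    - example.com                 # placeholder domain
  issuerRef:
    name: letsencrypt-prod        # assumes this ClusterIssuer exists
    kind: ClusterIssuer
```

Your Ingress then simply references the `example-com-tls` Secret, and renewal never becomes a manual chore.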
Test your containers straight in the cluster
It’s important to first test whether your container works on your local machine before you move to the cloud. Some developers just use CI/CD tools to build containers, throw them into the cluster to develop directly there, and then run into issues in the deployment. More often than not, the containers don’t even start, e.g., because of a changed base image.
Install Docker on your local machine, then build and test the image locally first. Move on to the cluster once you are sure everything works correctly.
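A minimal local workflow might look like this; the image name, tag, registry, and port are placeholders, and it assumes a Docker daemon is running locally:

```shell
# Build the image locally from the Dockerfile in the current directory
docker build -t my-app:1.0.0 .

# Run it locally and check that the container actually starts and responds
docker run --rm -p 8080:8080 my-app:1.0.0

# Only after this works: tag it for your registry, push, and deploy to the cluster
docker tag my-app:1.0.0 registry.example.com/my-app:1.0.0
docker push registry.example.com/my-app:1.0.0
```

A container that won’t start here will not magically start in the cluster, and debugging it locally is far faster than debugging it through CI/CD.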
Deploy the monolith
There’s a reason why cloud architecture and microservices go hand in hand: Kubernetes is not built for an ancient monolithic architecture that relies on local servers.
Using Kubernetes is not about building a cluster, shifting everything from the monolith into one container, and shipping it. That approach won’t make working with a cluster any more enjoyable.
However, it depends on the individual case. If you only have one application and not many dependencies in your environment, then setting up a cluster is easy. But as soon as you have to connect all the services to your legacy application, you need to think about how to do routing, how to size your PVCs, and more. It suddenly becomes much more work than you’d expect.
Let developers use the configuration they want
It is a truth universally acknowledged that 90% of Kubernetes deployments are the same, so it makes a lot of sense to provide templates that your developers can adopt easily. This also helps prevent common mistakes, like forgetting health checks or resource limits — things you may miss if you don’t have a basic knowledge of Kubernetes.
Christian recommends creating a reusable template that covers most of your company’s deployments and prevents errors, like accidentally exposing services to the internet or forgetting to set resource limits on namespaces. Documenting the specific settings used for your deployments is the first step toward a universal template that is valid for all your developers. You can then use engines like Helm or Kustomize to create those templates.
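As a sketch of what such a template might enforce (the chart structure, names, and values here are illustrative, not from the podcast), a centrally maintained Helm template can bake in the health checks and resource limits while developers supply only a few values:

```yaml
# values.yaml -- the part a developer fills in
image:
  repository: registry.example.com/my-app   # placeholder registry/image
  tag: "1.0.0"                              # a pinned tag, never :latest
resources:
  limits:
    cpu: 500m
    memory: 256Mi

# templates/deployment.yaml (excerpt) -- maintained centrally, shown as comments:
#   containers:
#     - name: {{ .Chart.Name }}
#       image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
#       resources:
#         {{- toYaml .Values.resources | nindent 10 }}
#       livenessProbe:              # health checks are baked in, not optional
#         httpGet:
#           path: /healthz
#           port: 8080
```

Because the probe and resource stanzas live in the shared template rather than in each team’s manifests, forgetting them simply stops being possible.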
There also needs to be some kind of governance on the side of a software architect. It’s not much of an issue on a developer’s own Kubernetes cluster, where they can play around and do what they want, but once a service is promoted to an official integration environment, it’s important to have guardrails and somebody who looks at what developers are doing and how the services behave.
One further aspect to consider is creating a library of base images for the tools you are using, to keep full control (also from a security perspective). You would not let your employees install whatever software they’d like on their laptops, so why let them do it on the cluster?
Forget about documentation
It’s always a good idea to document your Kubernetes setup so the information is readily available to the next person who has to do it. Given the complexity of K8s in particular, this is a step that should not be skipped.
Information like which tooling is needed, which steps have to be performed, which template to use, and how to solve common issues is critical to ensure a uniform, error-proof setup for your developers.
Forget about security
One way to prevent security issues is to not give everybody admin access. Christian has seen developers delete whole environment namespaces by accident, simply because they all had admin access and nothing stopped them.
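Instead of handing out cluster-admin, a namespaced Role can scope what developers are allowed to do. Here is a sketch; the namespace and group name are placeholders, and the exact verb list is something each team would tune:

```yaml
# Hypothetical example: developers may deploy in their namespace,
# but get no delete rights on namespaces themselves.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: developer
  namespace: my-app              # placeholder namespace
rules:
  - apiGroups: ["", "apps"]
    resources: ["pods", "services", "deployments"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: developer-binding
  namespace: my-app
subjects:
  - kind: Group
    name: dev-team               # placeholder group from your identity provider
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: developer
  apiGroup: rbac.authorization.k8s.io
```

With a binding like this, “accidentally deleting the whole namespace” is no longer something a stray command can do.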
Another classic error that affects security is using the “:latest” tag on Docker images. Christian describes two specific cases. In the first, the storage provisioner stopped working because he was running Heketi with the latest tag. Creating PVCs is normally easy, a classic copy-paste task, but here the :latest tag combined with an image pull policy of “Always” resulted in failing PVCs.
The second case involved Dockerfiles that developers used to build their images. Those files referenced alpine:latest. When alpine:latest was updated on Docker Hub, all developer builds stopped working due to an issue with the new version. Of course, Kubernetes was the first thing everyone blamed, even though the problem was pulling an image that can change at any moment.
To avoid this, always pin a specific version of the Docker image. Do NOT use “:latest”, neither in your CI/CD pipelines nor in Kubernetes.
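In practice that means pinning the base image in your Dockerfile as well as the tag in your manifests. A sketch, with a placeholder binary and an arbitrarily chosen Alpine version:

```dockerfile
# Pin the base image to an exact version (or even a digest) instead of
# alpine:latest, which can change underneath you without warning.
FROM alpine:3.18.4

# Placeholder application binary, copied in and run as the entrypoint.
COPY app /usr/local/bin/app
ENTRYPOINT ["/usr/local/bin/app"]
```

On the Kubernetes side, pair the pinned tag with `imagePullPolicy: IfNotPresent`; the `:latest` plus `Always` combination is exactly what broke Christian’s PVC provisioning.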
Use Kubernetes because it’s fun and new
Let’s first dispel a myth: Kubernetes is not fun.
Even though it opens up a whole new world, there is still a lot of overhead, just of a different type. Instead of the (g)olden days, when you had to invest time in setting up servers and networks, you now need to invest time in automating most of the deployment, which leaves you more time for training and writing documentation.
Kubernetes is not the right choice by default; it depends on the case. Maybe serverless is better, maybe ECS, or even an old-fashioned VM. Why would you want to deploy a static website on Kubernetes, for example? It’s also not a great idea to use K8s just to bypass “bureaucratic” change requests or to avoid doing the proper configuration.
Whoever is writing the service should reflect first whether it makes sense to use Kubernetes or if something else would make more sense.
To sum it all up
Using Kubernetes has both advantages and drawbacks, but we hope that sharing our experience can help people avoid headaches when starting with K8s. Mistakes will be made, but hopefully we helped you avoid the biggest traps!
I want to thank again Christian Heckelmann for coming onto our podcast to talk about Kubernetes and for sharing his valuable input on the matter. We hope we can continue our collaboration for many years to come!
If you’d like to listen to the full podcast, here is the link to it: https://www.spreaker.com/user/pureperformance/how-not-to-start-with-kubernetes-lessons or browse other podcasts on Digital Performance: https://www.spreaker.com/show/pureperformance
And keep up to date on future episodes by following our Twitter account.