According to Betteridge’s law of headlines, the answer to the headline above should be a resounding ‘no’. In this particular instance, however, I believe the answer could most certainly be ‘yes’.
Kubernetes, for those who don’t know, is a container orchestration system that abstracts away the underlying platform, allowing developers to focus more on business logic and less on concerns such as autoscaling and load balancing.
Whilst it presents a fairly considerable learning curve for developers getting up and running with it, it can tremendously improve the way we deploy and manage our enterprise-grade software systems.
Why Does This Matter?
If you’ve worked in an enterprise environment that handles confidential data, you will most likely have seen how apprehensive people are about deploying their important applications to the likes of AWS, Azure, or GCP.
This is completely understandable. People are wary of using these external services in case those systems are compromised and valuable business data is exposed. These enterprises tend to rely far more on internal data centers that lie within their own networks. This gives them greater peace of mind when it comes to handling information such as credit card details and personally identifiable information.
The Issues with Traditional Infrastructure
Traditionally, developers in large enterprises would request dedicated servers from the massive fleet of servers that are hosted within their internal data centers. They would then manage, maintain and deploy their applications on these dedicated servers.
This has a few major disadvantages:
- Time has to be spent on maintaining and patching these servers and ensuring that any updates don’t bring important sections of your application down.
- More often than not, there is a long lead time between a team requesting a dedicated server and actually deploying their newly developed application onto it.
- Teams either over-estimate or under-estimate the number of dedicated servers they need. This often leads to servers sitting idle, or to applications unable to cope with massive spikes in usage and falling over. Either way, it ends up costing both time and money.
- Teams tend to pick up bad habits, such as hard-coding IP addresses and hostnames in their application codebases, when they are pinned to a particular set of servers.
How Can Kubernetes Solve These Issues?
The key advantage of Kubernetes is that it abstracts away the underlying servers. You can treat your fleet of servers as an amalgamation of compute power and focus on the business logic within your applications.
Google has been leveraging Kubernetes for a number of years now with great success. According to the Kubernetes homepage, it is now at the point where Kubernetes runs billions of containers per week without relying on a massive operations team to manage it all.
So, how can Kubernetes help improve the way that large scale enterprises work?
Say you had an application running in production on two dedicated servers in two distinct regions. If, for any reason, one of those servers went down, you would be left exposed and vulnerable to a complete outage, which could be bad for the business.
When one of your servers goes down, which would undoubtedly be at around 3 o’clock in the morning, one of your developers would typically receive a text or a phone call informing them that there has been an issue and that they need to address it quickly. Upon logging in, they would have to fail over to another server and follow whatever runbooks they have defined in order to get the service back to a nominal state.
We could potentially eradicate this type of scenario from our workflow with the help of K8s, by leveraging things such as autoscaling and health/readiness checks within our Kubernetes-based application. Kubernetes constantly monitors these two endpoints, and should it notice, say, your health endpoint failing, it will automatically spin up a new instance of your application on any of the available nodes within your Kube cluster.
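As a rough sketch, the health and readiness checks described above might be declared in a Deployment manifest along these lines (the application name, image, port, and endpoint paths here are all hypothetical):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-service                  # hypothetical application name
spec:
  replicas: 2                       # one pod per failure domain, as in the scenario above
  selector:
    matchLabels:
      app: my-service
  template:
    metadata:
      labels:
        app: my-service
    spec:
      containers:
      - name: my-service
        image: registry.example.com/my-service:1.0.0   # hypothetical image
        ports:
        - containerPort: 8080
        livenessProbe:              # restart the container if this check fails
          httpGet:
            path: /health
            port: 8080
          initialDelaySeconds: 10
          periodSeconds: 5
        readinessProbe:             # withhold traffic until this check passes
          httpGet:
            path: /ready
            port: 8080
          periodSeconds: 5
```

With this in place, a failing liveness probe causes Kubernetes to restart the container automatically, and a failing readiness probe simply stops traffic being routed to that pod until it recovers, with no 3 a.m. phone call required.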
Easier Deployments and Rollbacks
When it comes to deployments, K8s helps to simplify things and allows you to easily roll out new versions of your systems. You can choose from a number of deployment patterns, such as rolling updates, blue-green, or canary deployments, with just a few lines of configuration in a YAML file.
This can help ensure that those “squeaky-bum” weekend deployments are less stressful for all parties involved. If you do happen to release something that isn’t quite up to spec just yet, K8s can once again come in to save the day and roll back to a previously working version.
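For example, a rolling update is just a few lines on a Deployment spec. This fragment (deployment name and replica count are illustrative) would replace pods one at a time, keeping the service up throughout:

```yaml
# Fragment of a Deployment spec: roll out a new version gradually.
spec:
  replicas: 4
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1   # at most one pod down during the rollout
      maxSurge: 1         # at most one extra pod created above the replica count
```

If the new version misbehaves, a rollback to the previous revision is a single command: `kubectl rollout undo deployment/my-service`.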
Enforces Good Development Practices
Kubernetes is agnostic to the containerization technology you use. For the most part, people pair the orchestration system with the likes of Docker, as that seems to be the technology currently in vogue.
In order to leverage Kubernetes, application developers will have to build with the likes of Docker, which can in turn help to ensure they follow good development practices such as the twelve factors: http://12factor.net/. These are generally good practices that are worth following even if you don’t plan on leveraging cloud platforms.
If you aren’t familiar with Docker then feel free to check out my post on HackerNoon:
An Introduction to Docker Through Story
Taints and Tolerations
Large enterprises may have a multitude of different types of servers within their data centers. There may be servers resting in demilitarized zones, or fleets of high-powered GPU servers typically used for intensive compute operations.
Now, you wouldn’t typically want loads of lightweight frontend applications filling up these GPU machines; you would want them deployed on servers better suited to their needs.
This is where taints and tolerations in Kubernetes come into play. You can taint nodes (individual servers) within your K8s cluster so that only pods with a matching toleration can be scheduled onto those tainted nodes. You can read up more on this here: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/#taint-nodes-by-condition
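As a sketch of how this looks in practice for the GPU scenario above (the node name, taint key, and workload names are all hypothetical):

```yaml
# First, taint the GPU node so ordinary pods are kept off it (run in a shell):
#   kubectl taint nodes gpu-node-1 hardware=gpu:NoSchedule
#
# Then, pods that genuinely need GPU hardware declare a matching toleration:
apiVersion: v1
kind: Pod
metadata:
  name: training-job              # hypothetical GPU-bound workload
spec:
  tolerations:
  - key: "hardware"
    operator: "Equal"
    value: "gpu"
    effect: "NoSchedule"
  containers:
  - name: trainer
    image: registry.example.com/trainer:1.0.0   # hypothetical image
```

Note that a toleration only *permits* scheduling onto the tainted node; to actively steer GPU workloads there, you would pair it with a `nodeSelector` or node affinity rule.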
Kubernetes has already proven to be a massively powerful tool. If you browse the official website, you’ll see scores of case studies showing exactly how companies and projects have utilized K8s to great success. One of the most powerful examples I’ve seen is Pokémon GO.
Essentially, when the game launched, the team was expecting a significant number of users, as you would for anything Pokémon-branded. On launch, they shattered the numbers they expected, and thanks to K8s, they were able to scale to cope with the insane number of users.
I highly recommend checking out the full case study for this here:
Bringing Pokémon GO to life on Google Cloud
This is just one example, but it highlights how K8s can help large-scale enterprises write applications and services that scale to meet the needs of millions of users around the world.
The Rise of Monzo!
Monzo has seen a meteoric rise in the number of people using it as their bank. Its intuitive app and excellent customer service seem to be winning a lot of people over from traditional, slower-moving banks in Britain.
Through the utilization of K8s, Monzo was able to reduce its infrastructure bill to around a quarter of what it was. That is an incredible cost saving for a new bank, and if older banks were able to copy this success, they could see infrastructure cost savings in the billions.
They’ve also said that they can kill servers in their production environment and remain confident that Kubernetes will reschedule their pods onto other nodes within the cluster.
I highly recommend you check out their engineering blog post on the subject. It’s an incredibly well written insight into how they are fleshing out their backend systems to cope with massive increases in demand:
Building a Modern Bank Backend
Kubernetes is an incredibly powerful tool that can help organizations better leverage their massive internal data centers, abstracting away the complexities of the underlying hardware so they can focus on delivering key business value. Used well, it can see enterprises making massive savings on the number of servers they have to maintain to keep their internal applications running.
By leveraging Kubernetes, we can improve the way our applications deal with dynamic workloads and also help to improve the resiliency of such applications.
If you have your own thoughts and feelings on Kubernetes, I’d love to hear them in the comments section below or on Twitter: @Elliot_f
If you liked my writing and wish to support me and learn more about developing for the cloud then please feel free to check out my new book, An Introduction to Cloud Development: