A Kubernetes Pod is the basic building block of Kubernetes. Comprising one or more containers, it is the smallest deployable unit in the Kubernetes architecture.
When I was new to Kubernetes, I often wondered why it was designed that way. Why didn't containers become the basic building block instead? Well, a bit of doing things in a real environment, and it makes more sense now.
So, Pods can contain multiple containers for some excellent reasons, primarily that containers in a Pod are always scheduled on the same node in a multi-node cluster. …
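To make the multi-container idea concrete, here is a minimal sketch of a Pod running a main container alongside a log-shipping sidecar. The names, images, and the shared `emptyDir` volume are illustrative assumptions, not taken from the article:

```yaml
# Hypothetical example: a web server plus a sidecar sharing the same Pod.
# Both containers are scheduled on the same node and share volumes and network.
apiVersion: v1
kind: Pod
metadata:
  name: web-with-sidecar      # hypothetical name
spec:
  containers:
    - name: web
      image: nginx:1.25
      ports:
        - containerPort: 80
      volumeMounts:
        - name: logs
          mountPath: /var/log/nginx
    - name: log-shipper       # sidecar reading the same volume as the web container
      image: busybox:1.36
      command: ["sh", "-c", "tail -F /var/log/nginx/access.log"]
      volumeMounts:
        - name: logs
          mountPath: /var/log/nginx
  volumes:
    - name: logs
      emptyDir: {}            # ephemeral volume shared by both containers
```

Because both containers live in one Pod, they share the node, the network namespace, and the `logs` volume — exactly the co-scheduling guarantee the paragraph above describes.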
Kubernetes has been able to revolutionise the cloud-native ecosystem by allowing people to run distributed applications at scale. Though Kubernetes is a feature-rich and robust container orchestration platform, it does come with its own set of complexities. Managing Kubernetes at scale with multiple teams working on it is not easy, and ensuring that people do the right thing and stay within their boundaries is difficult.
Kyverno is just the right tool for this. It is an open source, Kubernetes-native policy engine that helps you define policies using simple Kubernetes manifests. It can validate, mutate, and generate Kubernetes resources. …
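As a sketch of what "policies as Kubernetes manifests" looks like, here is a minimal Kyverno `ClusterPolicy` that validates incoming Pods. The policy name and the required label are hypothetical examples:

```yaml
# Hypothetical Kyverno validation policy: reject Pods without a 'team' label.
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: require-team-label    # hypothetical name
spec:
  validationFailureAction: Enforce   # block non-compliant resources (use Audit to only report)
  rules:
    - name: check-team-label
      match:
        any:
          - resources:
              kinds:
                - Pod
      validate:
        message: "The label 'team' is required on all Pods."
        pattern:
          metadata:
            labels:
              team: "?*"      # any non-empty value
```

The same manifest style extends to mutate and generate rules, which is what makes Kyverno approachable for teams already fluent in Kubernetes YAML.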
Falco is an open source runtime security tool that can help you secure a variety of environments. Sysdig created it, and it has been a CNCF project since 2018. Falco evaluates real-time Linux kernel events, container activity, Kubernetes audit logs, and more against a powerful rules engine to alert users of malicious behaviour.
It is particularly useful for container security — especially if you are using Kubernetes to run them — and it is now the de facto Kubernetes threat detection engine. It ingests Kubernetes API audit logs for runtime threat detection and to understand application behaviour.
It also helps teams understand who did what in the cluster, as it can integrate with Webhooks to raise alerts in a ticketing system or a collaboration engine like Slack. …
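To illustrate the rules engine mentioned above, here is a simplified sketch of a Falco rule in its YAML rule format. It is modelled on the well-known "shell in a container" pattern; the exact condition macros and wording here are an approximation, not a verbatim rule from the article:

```yaml
# Simplified, hypothetical Falco rule: alert when an interactive shell
# is spawned inside a container.
- rule: Terminal shell in container
  desc: Detect a shell spawned with a terminal attached inside a container
  condition: >
    spawned_process and container and
    proc.name in (bash, sh) and proc.tty != 0
  output: >
    Shell spawned in a container
    (user=%user.name container=%container.name command=%proc.cmdline)
  priority: WARNING
```

When the condition matches a live kernel event, Falco emits the `output` line, which can then be routed to Slack or a ticketing system via a webhook integration.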
Deployment resources within Kubernetes have simplified container deployments, and they are one of the most used Kubernetes resources. Deployments manage ReplicaSets, and they enable multiple deployment strategies by manipulating those ReplicaSets appropriately to produce the desired effect.
Surprisingly, Deployments have only two strategy types:
- RollingUpdate (the default): Kubernetes creates a new ReplicaSet and scales it up while simultaneously scaling the old ReplicaSet down.
- Recreate: Kubernetes scales the old ReplicaSet down to zero and then immediately creates a new one with the desired number of replicas.
That does not limit Kubernetes' ability to support more advanced deployments, though. There are finer-grained controls in the Deployment specification that can help us implement multiple deployment patterns and strategies. Let's look at possible scenarios, when to use them, and how they look with hands-on examples. …
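One example of those finer-grained controls is tuning `RollingUpdate` with `maxSurge` and `maxUnavailable`. The sketch below shows a Deployment (names and images are hypothetical) configured so a rollout never drops below the desired replica count:

```yaml
# Hypothetical Deployment with a tuned RollingUpdate strategy.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web                   # hypothetical name
spec:
  replicas: 4
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1             # at most one extra Pod above the desired count
      maxUnavailable: 0       # never drop below four ready Pods during a rollout
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.25
```

Setting `maxUnavailable: 0` trades rollout speed for availability: Kubernetes only tears down an old Pod once its replacement is ready.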
Setting up a Kubernetes cluster is getting simpler with time. There are several turnkey solutions available in the market, and no one currently does it the hard way!
Notably, Minikube has been one of the go-to clusters for developers to get started with development and to test their containers quickly. While Minikube currently supports multi-node clusters as an experimental feature, it isn't GA yet.
Therefore, this becomes a limitation for integration and component testing, and most organisations rely on cloud-based managed Kubernetes services for that.
Integrating Kubernetes into a CI/CD pipeline and running tests there requires multiple tools, such as Terraform, a dependency on a cloud provider, and, of course, a CI/CD tool such as Jenkins, GitLab, or GitHub. …
Recently, Kubernetes has been in vogue and growing at a tremendous pace. With Kubernetes being part of CNCF and the industry taking a more cloud-native approach, Kubernetes engineers are in demand as never before.
The Cloud Native Computing Foundation, in collaboration with the Linux Foundation, has come up with certification offerings that allow developers, system administrators, and cybersecurity personnel to validate their knowledge of Kubernetes. They have designed these certifications to match industry requirements and to ensure that anyone holding them has the knowledge to be called a Kubernetes expert.
Unlike other tech certifications, the Kubernetes certifications offered by CNCF are open-book and completely hands-on. You get a Linux command line environment where you solve a number of labs in a limited period of time. They test you on the practical aspects of the technology rather than asking some multiple-choice questions that you can cram, regurgitate, and then completely forget about later. …
Vertical Pod Autoscaling is one of those cool Kubernetes features that are not used enough — and for good reason. Kubernetes was built for horizontal scaling and, at least initially, it didn’t seem a great idea to scale a pod vertically. Instead, it made more sense to create a copy of the Pod if you want to handle the additional load.
However, that approach requires extensive resource optimisation: if you don't tune your Pods appropriately with proper resource requests and limits, you may end up either evicting your Pods too often or wasting many useful resources. …
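For context, this is roughly what a VerticalPodAutoscaler manifest looks like when pointed at a Deployment. It assumes the VPA components are installed in the cluster; the target name and the min/max bounds are hypothetical:

```yaml
# Hypothetical VPA sketch: let the autoscaler set requests for a Deployment.
apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: web-vpa               # hypothetical name
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web                 # hypothetical target Deployment
  updatePolicy:
    updateMode: "Auto"        # VPA evicts Pods and recreates them with updated requests
  resourcePolicy:
    containerPolicies:
      - containerName: "*"
        minAllowed:
          cpu: 100m
          memory: 128Mi
        maxAllowed:
          cpu: "1"
          memory: 1Gi
```

The `minAllowed`/`maxAllowed` bounds guard against exactly the two failure modes above: recommendations too low (frequent evictions or OOM kills) and too high (wasted capacity).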
Apache Kafka is one of the most popular event-based distributed streaming platforms. LinkedIn first developed it, and technology leaders such as Uber, Netflix, Slack, Coursera, Spotify, and others currently use it.
Though very powerful, Kafka is equally complex and requires a highly available, robust platform to run on. Most of the time, engineers struggle with the care and feeding of Kafka servers, and standing one up and maintaining it is no piece of cake.
With microservices in vogue and most companies adopting distributed computing, standing up Kafka as the core messaging backbone has its advantages. …
Containers have come a long way, and Kubernetes isn't just changing the technology landscape but also the organisational mindset. With more and more companies moving towards cloud-native technologies, the demand for containers and Kubernetes is ever-increasing.
Kubernetes runs on servers, and servers can be either physical or virtual. With the cloud taking a prominent role in the current IT landscape, it has become much easier to implement near-infinite scaling and to cost-optimise your workloads.
Gone are the days when servers were bought in advance, provisioned in racks, and maintained manually. With the cloud, you can spin up and spin down a virtual machine in minutes and pay only for the infrastructure you provision. …
If you’re already using Kubernetes, you’ve probably heard about serverless. While both platforms are scalable, serverless goes the extra mile by letting developers run code without worrying about infrastructure, and it saves on infra costs by scaling your application instances down to zero when idle.
Kubernetes, on the other hand, provides its own advantages without those limitations: it follows a traditional hosting model and offers advanced traffic-management techniques that help you do things like blue-green deployments and A/B testing.
Knative is an attempt to combine the best of both worlds. As an open-source cloud-native platform, it enables you to run your serverless workloads on Kubernetes, providing all of Kubernetes' capabilities plus the simplicity and flexibility of serverless. …
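As a flavour of how Knative keeps the serverless experience inside Kubernetes YAML, here is a minimal Knative Serving Service. The service name and environment variable are illustrative; the sample image is the one commonly used in Knative's hello-world examples:

```yaml
# Hypothetical Knative Service: a container that scales to zero when idle.
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: hello                 # hypothetical name
spec:
  template:
    metadata:
      annotations:
        autoscaling.knative.dev/min-scale: "0"   # allow scale-to-zero
    spec:
      containers:
        - image: gcr.io/knative-samples/helloworld-go
          env:
            - name: TARGET
              value: "Kubernetes"
```

Knative creates the underlying Deployment, revisioning, and routing for you, so the manifest stays close to a plain Kubernetes resource while behaving like a serverless function.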