Kubernetes Is What?

Nethmi Nikeshala · MS Club of SLIIT · Jun 15, 2023

Hi everyone, I hope you are all doing well. Today I thought of writing about Kubernetes for those who want to enter the DevOps field.

How does Kubernetes work?

In Kubernetes, there is a master node and multiple worker nodes, and each worker node can run multiple pods.

  • Pods are just a bunch of containers clustered together as a working unit. We can start designing our applications using pods.
  • Once our pods are ready, we give their definitions to the master node, along with how many replicas we want to deploy (see the sketch after this list). From this point, Kubernetes is in control.
  • It takes the pods and deploys them to the worker nodes. If a worker node goes down, Kubernetes starts new pods on a functioning worker node.
  • This makes managing containers easy and simple.
  • It also makes it easy to add new features and improve the application to attain higher customer satisfaction.
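For instance, the pod definition and the desired replica count are usually wrapped in a Deployment object. A minimal sketch, assuming a generic web container (the name, labels, and image are illustrative):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-app              # illustrative name
spec:
  replicas: 3                # how many pod copies Kubernetes should keep running
  selector:
    matchLabels:
      app: web-app
  template:                  # the pod definition handed to the control plane
    metadata:
      labels:
        app: web-app
    spec:
      containers:
        - name: web
          image: nginx:1.25  # any container image would do here
          ports:
            - containerPort: 80
```

Applying this (for example with kubectl apply -f deployment.yaml) hands control to Kubernetes, which schedules the three pods onto healthy worker nodes and recreates them if a node fails.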
(Figure: K8s architecture)

What Are the Master Node and Worker Node in Kubernetes?

Containerization is a trend that is taking over the world, allowing firms to run all kinds of applications in a variety of environments. To keep track of all these containers, and to schedule, manage, and orchestrate them, we need an orchestration tool. Kubernetes does this exceptionally well.

Kubernetes follows a master-worker (historically called master-slave) architecture: it operates on the principle of a master node controlling worker nodes.

What Exactly Do They Do?

Master Node:

  • The main machine that controls the worker nodes
  • The main entry point for all administrative tasks
  • It handles the orchestration of the worker nodes

Worker Node:

  • A worker machine in Kubernetes (formerly known as a minion)
  • Performs the tasks requested by the master node, which controls each worker node
  • Runs containers inside pods
  • This is where the container runtime (for example, the Docker engine) runs, taking care of pulling images and starting containers

Containers are the de facto deployment format of today. But where does Kubernetes come into play?

  • While tools such as Docker provide the actual containers, we also need tools that take care of things such as replication, failover, and orchestration, and that is where Kubernetes comes into play.
  • The Kubernetes API is a great tool for automating a deployment pipeline. Deployments are not only more reliable but also much faster, because we are no longer dealing with VMs.
  • When working with Kubernetes, you have to become accustomed to concepts and names like pods, services, and replication controllers. If you are not familiar with them yet, no worries; there are some excellent resources available to learn Kubernetes and get up to speed.
  • Some key features that make Kubernetes unique include:

  • Service discovery
  • Health checks
  • Simplified monitoring
  • Self-healing
  • Secret and configuration management
  • Horizontal scaling
  • Storage management
  • Networking services
  • Logging
  • Rolling updates and rollbacks
  • Load balancing

Let's look at the K8s features in more detail.

Container Orchestration: Kubernetes provides a platform for automating the deployment, scaling, and management of containerized applications.

High Scalability: Kubernetes allows you to scale applications seamlessly by adding or removing containers based on demand.

Self-Healing: Kubernetes monitors the health of containers and automatically restarts or replaces failed containers, ensuring that the desired state of the application is maintained.
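Self-healing is largely driven by probes. A minimal sketch of a liveness probe, assuming the application answers HTTP on port 80 (the path and timings are illustrative); if the check keeps failing, the kubelet restarts the container:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: healed-app           # illustrative name
spec:
  containers:
    - name: app
      image: nginx:1.25
      livenessProbe:         # kubelet restarts the container if this check keeps failing
        httpGet:
          path: /            # assumed health endpoint
          port: 80
        initialDelaySeconds: 5
        periodSeconds: 10
```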

Load Balancing and Service Discovery: Kubernetes distributes network traffic to containers using built-in load balancing mechanisms and provides service discovery for efficient communication between containers.
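In practice this is a Service object: it gives a set of pods a stable DNS name and virtual IP and balances traffic across them. A minimal sketch, reusing the app: web-app label from the earlier Deployment sketch:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web-service          # reachable via cluster DNS as "web-service"
spec:
  selector:
    app: web-app             # traffic is load-balanced across pods carrying this label
  ports:
    - port: 80               # port exposed by the Service
      targetPort: 80         # container port the traffic is forwarded to
```

Other pods in the same namespace can then reach the application at http://web-service, regardless of which individual pods are currently running.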

Automatic Bin Packing: Kubernetes optimizes the placement of containers on nodes to maximize resource utilization, making efficient use of available compute resources.

Rolling Updates and Rollbacks: Kubernetes supports rolling updates, allowing you to update containers without downtime. It also enables you to roll back to previous versions if an update causes issues.
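The update behaviour is configured on the Deployment itself. A sketch extending the earlier Deployment with an explicit RollingUpdate strategy (the numbers are illustrative); changing the image tag then triggers a zero-downtime rollout, and kubectl rollout undo deployment/web-app rolls it back:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web-app
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1      # at most one pod may be unavailable during the update
      maxSurge: 1            # at most one extra pod may be created above the desired count
  template:
    metadata:
      labels:
        app: web-app
    spec:
      containers:
        - name: web
          image: nginx:1.26  # bumping the image tag triggers a rolling update
```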

Storage Orchestration: Kubernetes offers flexible storage options, allowing you to mount various storage systems to containers, such as local storage, network-attached storage (NAS), or cloud-based storage solutions.
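The usual way to request storage is a PersistentVolumeClaim, which Kubernetes binds to a matching volume. A minimal sketch (the storage class name is an assumption and depends on your cluster or cloud provider):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-claim            # illustrative name
spec:
  accessModes:
    - ReadWriteOnce           # mountable read-write by a single node at a time
  resources:
    requests:
      storage: 5Gi            # amount of storage requested
  storageClassName: standard  # assumed storage class; varies by cluster/provider
```

A pod then mounts the claim by referencing its name in the pod's volumes section.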

Secrets and Configuration Management: Kubernetes provides a secure way to manage sensitive information and configuration data, such as API keys, passwords, and environment variables, using secrets and ConfigMaps.
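A minimal sketch of a ConfigMap and a Secret (the keys and values are illustrative placeholders); note that Secret data is only base64-encoded by default, so combine it with RBAC and encryption at rest:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  LOG_LEVEL: "info"        # plain configuration value
---
apiVersion: v1
kind: Secret
metadata:
  name: app-secret
type: Opaque
stringData:                # plain text here; Kubernetes stores it base64-encoded
  API_KEY: "replace-me"    # illustrative placeholder value
```

Containers typically consume both through environment variables (envFrom) or mounted files.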

Multi-Node Clustering: Kubernetes allows you to create clusters consisting of multiple nodes, enabling fault tolerance, high availability, and load distribution across the cluster.

Horizontal and Vertical Scaling: Kubernetes supports both horizontal scaling (increasing the number of containers) and vertical scaling (increasing the resources allocated to containers) to meet varying application demands.
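Horizontal scaling can be automated with a HorizontalPodAutoscaler. A sketch targeting the earlier Deployment (the thresholds are illustrative, and the cluster needs a metrics source such as metrics-server):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-app-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web-app                  # the Deployment being scaled
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # add pods when average CPU exceeds 70%
```

Vertical scaling, by contrast, is done by adjusting a container's resource requests and limits, or with the separate Vertical Pod Autoscaler add-on.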

Application Lifecycle Management: Kubernetes manages the entire application lifecycle, from initial deployment to scaling, updating, and termination, providing a consistent and reliable environment for running applications.

Monitoring and Logging: Kubernetes integrates with various monitoring and logging solutions, allowing you to gather metrics and logs from containers, nodes, and other cluster components for better visibility and troubleshooting.

Resource Allocation and Quotas: Kubernetes provides mechanisms for allocating compute resources (CPU, memory, etc.) to containers and enforcing resource quotas to ensure fair distribution and prevent resource exhaustion.
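Per-container requests and limits, together with a namespace-level ResourceQuota, are how this is expressed. A minimal sketch, assuming a dev namespace (all numbers are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: quota-demo
  namespace: dev             # assumed namespace governed by the quota below
spec:
  containers:
    - name: app
      image: nginx:1.25
      resources:
        requests:            # what the scheduler reserves for this container
          cpu: "250m"
          memory: 256Mi
        limits:              # hard ceiling; exceeding the memory limit gets the container killed
          cpu: "500m"
          memory: 512Mi
---
apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-quota
  namespace: dev
spec:
  hard:
    requests.cpu: "4"        # total CPU all pods in the namespace may request
    requests.memory: 8Gi
    limits.cpu: "8"
    limits.memory: 16Gi
```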

Extensibility and Customization: Kubernetes is highly extensible and offers a rich ecosystem of plugins, extensions, and custom resource definitions (CRDs) to adapt and extend its functionality to specific use cases.

Community and Vendor Support: Kubernetes has a large and active open-source community, as well as support from various cloud providers and technology vendors, providing resources, documentation, and expertise to help with adoption and usage.

Kubernetes Setup

Here are the steps to set up Kubernetes:

  1. Choose a Deployment Method:
  • Self-Hosted: Set up and manage your own Kubernetes cluster on physical or virtual machines.
  • Managed Service: Use a cloud provider’s managed Kubernetes service, such as Google Kubernetes Engine (GKE), Amazon Elastic Kubernetes Service (EKS), or Microsoft Azure Kubernetes Service (AKS).

2. Set up the Infrastructure:

  • Provision the required machines or cloud instances to serve as the Kubernetes cluster nodes.
  • Ensure the nodes have sufficient resources (CPU, memory, storage) to run the desired workload.

3. Install Container Runtime:

  • Choose a container runtime, such as Docker, Containerd, or CRI-O, and install it on each cluster node.
  • Configure the container runtime to work with Kubernetes.

4. Install Kubernetes Control Plane:

  • The control plane includes components like the API server, controller manager, scheduler, and etcd (distributed key-value store).
  • Install and configure these components on dedicated control plane nodes or distribute them across worker nodes.

5. Set up Networking:

  • Choose a networking solution that allows communication between pods and external services.
  • Common options include CNI plugins such as Calico, Flannel, or Cilium; these provide pod networking, while Kubernetes' built-in kube-proxy handles Service traffic.
  • Install and configure the networking solution to work with your Kubernetes cluster.

6. Join Worker Nodes:

  • Configure each worker node to join the Kubernetes cluster by connecting to the control plane.
  • This typically involves running a command with the appropriate parameters and authentication tokens.
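If the cluster was bootstrapped with kubeadm (one common self-hosted route), the join step can be written as a small config file and run with kubeadm join --config join.yaml. A sketch with placeholder values; the real token and CA hash are printed by kubeadm token create --print-join-command on the control plane:

```yaml
apiVersion: kubeadm.k8s.io/v1beta3
kind: JoinConfiguration
discovery:
  bootstrapToken:
    apiServerEndpoint: "10.0.0.10:6443"   # control-plane address (placeholder)
    token: "abcdef.0123456789abcdef"      # bootstrap token issued by the control plane (placeholder)
    caCertHashes:
      - "sha256:<hash-of-the-cluster-ca>" # placeholder CA certificate hash
nodeRegistration:
  name: worker-1                          # illustrative node name
```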

7. Enable Load Balancing:

  • If required, set up a load balancer to distribute traffic to the Kubernetes cluster nodes.
  • This ensures that external services can access the applications running on the cluster.

8. Configure Storage:

  • Determine the storage requirements for your applications (persistent volumes) and select a storage solution compatible with Kubernetes.
  • Install and configure the storage solution, such as NFS, iSCSI, or cloud-based storage.

9. Set up Authentication and Authorization:

  • Configure authentication mechanisms (e.g., username/password, certificates) and authorization policies to control access to the Kubernetes cluster and its resources.
  • Integrate with external identity providers if necessary.
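Authorization is most commonly expressed with RBAC objects. A minimal sketch that grants a user read-only access to pods in a single namespace (the namespace and user name are illustrative):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader
  namespace: dev                  # assumed namespace
rules:
  - apiGroups: [""]               # "" means the core API group
    resources: ["pods"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: dev
subjects:
  - kind: User
    name: jane                    # illustrative user from your identity provider
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```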

10. Install Monitoring and Logging:

  • Set up monitoring and logging tools to gather metrics and logs from the Kubernetes cluster, including applications, nodes, and cluster components.
  • Popular options include Prometheus, Grafana, and the Elastic Stack (Elasticsearch, Logstash, and Kibana).

11. Test and Validate:

  • Deploy sample applications or workloads to verify that the Kubernetes cluster is functioning correctly.
  • Perform thorough testing to ensure that the cluster can scale, handle failures, and recover as expected.

12. Continuous Maintenance and Upgrades:

  • Regularly apply updates and security patches to the Kubernetes cluster components, container runtime, and worker nodes.
  • Monitor the cluster’s health, performance, and resource utilization to proactively address any issues.

It’s important to note that the setup process may vary depending on your chosen deployment method, the specific tools and technologies you use, and the requirements of your environment. It’s recommended to refer to the official documentation and resources specific to your deployment scenario for detailed instructions.

K8s SECURITY

Kubernetes (K8s) provides several security features and best practices to help protect your containerized applications and the cluster itself.

Here are some key aspects of Kubernetes security:

  • Role-Based Access Control (RBAC)
  • Pod Security Policies (PSP), now deprecated in favor of Pod Security Admission
  • Network Policies (a sketch follows after this list)
  • Secrets management
  • Container image security
  • Secure cluster communication
  • Auditing and logging
  • Admission controllers
  • Pod isolation and sandboxing
  • Updates and patch management
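As a concrete example of the Network Policies item above, the following sketch only lets pods labelled app: frontend reach the app: web-app pods on port 80; note that enforcement requires a CNI plugin that supports NetworkPolicy, such as Calico or Cilium (the labels and port are illustrative):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-only       # illustrative name
spec:
  podSelector:
    matchLabels:
      app: web-app                # the policy applies to these pods
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend       # only these pods may connect
      ports:
        - protocol: TCP
          port: 80
```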

Remember that ensuring Kubernetes security is a multi-layered approach that involves a combination of configuration, policies, monitoring, and ongoing maintenance. It’s important to follow the latest security recommendations from the Kubernetes community, cloud providers, and security experts to protect your cluster and applications.

Thanks for reading!
