Deploying Kubernetes using the bootstrap method and Minikube

Anandshetty
Mar 18, 2024 · 10 min read


Preparing the Ground : Exploring Monolithic and Microservices Foundations Before Kubernetes

Architecture

Monolithic Architecture:

Definition:

A monolithic architecture is the traditional approach in which an application is built as a single, tightly coupled unit. All components, including the user interface, business logic, and data access, are tightly coupled and run within a single process.

Characteristics:

Single Process: The entire application is developed, built, and maintained within a single codebase.

Tight Coupling: Components are tightly integrated, making it harder to modify or scale individual parts efficiently.

Scaling: Scaling involves duplicating the entire application, which can be inefficient.

Pros:

Easier to develop, test, and deploy initially.

Simplicity in managing monolithic operations.

Cons:

Difficulty in scaling or updating specific components independently.

Maintainability challenges as the application grows.

Limited flexibility: Changes in one part of the application may affect the entire system.

Microservices:

Definition:

Microservices architecture breaks down a large application into small, independent services that communicate with each other through lightweight mechanisms, typically HTTP APIs.

Characteristics:

1. Decentralization : Each microservice is an independent entity that can be developed, deployed, and scaled individually.

2. Technology Diversity : Different services can be built using various technologies as long as they adhere to common communication protocols.

3. Scalability : Allows scaling specific services independently based on demand.

Pros:

  1. Scalability : Easier to scale specific services independently.
  2. Flexibility : Each service can be developed, deployed, and updated independently, allowing freedom to choose the best tools for each task.
  3. Technology Diversity : Different services can use different technologies, allowing teams to choose the right tool for each job.

Cons:

  1. Complexity : Managing a distributed system can be more complex than a monolithic architecture.
  2. Communication Overhead : Inter-service communication introduces potential latency and complexity.
  3. Initial Development Overhead : Setting up a microservices architecture can be more challenging initially.

What is Kubernetes ?

  • Kubernetes is an open-source solution for automating the deployment, scaling, and management of containerized applications.
  • It groups containers that make up an application into logical units for easy management.
  • It is used to deploy applications in a variety of environments, including on-premises, in the cloud, and in hybrid environments. Kubernetes helps facilitate smooth application rollouts, implement new features quickly, and reduce downtime.
Kubernetes Components Architecture

Kubernetes Components:

→ Master Node Components:

  1. API Server : Acts as the Kubernetes control plane’s frontend, accepting user requests and managing cluster state.
  2. Scheduler : Assigns pods to nodes based on resource availability and user-defined constraints.
  3. Controller Manager : Monitors cluster state and ensures that the actual state matches the desired state.
  4. etcd : Distributed key-value store that stores cluster configuration data and state.

→ Worker Node Components:

  1. Kubelet : Node agent that manages pods and containers, ensuring they are running and healthy.
  2. Container Runtime : Software responsible for running containers within pods, such as Docker or containerd.
  3. Kube Proxy : Handles network routing and load balancing for services running in the Kubernetes cluster.
  • Pods : The smallest deployable unit in Kubernetes. A pod encapsulates one or more containers along with shared storage and networking resources, and can run containers built with different technologies together as a single unit.
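On a running cluster, most of these components can be observed directly; the control-plane pieces and kube-proxy run as pods in the kube-system namespace. A quick, read-only check with standard kubectl commands:

# Control-plane components (API server, scheduler, controller manager, etcd) and kube-proxy appear here
kubectl get pods -n kube-system
# List the master and worker nodes that make up the cluster
kubectl get nodes -o wide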

Getting Hands-on with Kubernetes: A Step-by-Step Guide

Kubernetes Installation Options :

1. Development Environment Setup:

  • Introduction to Kubernetes using Minikube.
  • Setting up Kubernetes with Docker Desktop (Unmanaged Installation).

2. Unmanaged Installations:

  • Kubernetes with kubeadm
  • Kubernetes with kops
  • Kubernetes with Kubespray

3. Managed Platforms:

  • Amazon Elastic Kubernetes Service (EKS): AWS’s managed Kubernetes service.
  • Google Kubernetes Engine (GKE): Google’s managed Kubernetes offering.
  • Oracle Kubernetes Engine (OKE): Oracle’s managed Kubernetes platform.
  • Azure Kubernetes Service (AKS): Azure's managed Kubernetes service.

→ Jumping into Kubernetes: Practical Setup Guide:

Bootstrapping Clusters with Kubeadm:

Steps to Follow :

  • First, we have to launch the EC2 instances:
  • Take two instances, one for the master and one for the worker.
  • Name both instances ‘kubernetes-K8’.
  • Use the Ubuntu 20.04 AMI.
  • Choose the instance type t2.medium.
  • Select a key pair and set the number of instances to 2.
  • Launch the instances.

Security group: “K8s-AllPorts-SecurityGroup”

Networking settings:

- Grant security group access: port 22 is open by default.
- Add security group rules. For each rule, set the port range as listed below and the source to 0.0.0.0/0 (a CLI sketch follows this list):
- 6443: Kubernetes API server
- 2379-2380: etcd client and peer communication
- 10250: Kubelet API
- 10259: kube-scheduler
- 10257: kube-controller-manager
- 8472: cluster-wide network communication
- 30000-32767: NodePort Services range (worker nodes)
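These rules can also be added from the command line with the AWS CLI. A minimal sketch for a single rule, assuming a placeholder security group ID sg-0123456789abcdef0 (substitute your own):

# Open the Kubernetes API server port (6443) to all sources (acceptable for a study setup only)
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp \
  --port 6443 \
  --cidr 0.0.0.0/0

Repeat the command with the other port ranges listed above.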
  • Launch the instances and rename them as Master and Worker.
  • Switch to root:
sudo su -
  • Set the hostname on each server (master and worker respectively):
sudo hostnamectl set-hostname master
sudo hostnamectl set-hostname worker

Update the server

apt-get update

Install Docker on both the master and worker servers

apt-get install docker.io -y

Verify that Docker is installed

docker --version
  • Restart Docker on both servers
service docker restart
  • Add Kubernetes Repository Key
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -

NOTE : After adding key you will get “OK” message

  • Now add the Kubernetes apt repository so the packages can be downloaded:
echo "deb http://apt.kubernetes.io/ kubernetes-xenial main" >/etc/apt/sources.list.d/kubernetes.list
  • Update the package index one more time:
apt-get update
  • The next step is to install the Kubernetes services: kubeadm, kubectl, and kubelet.
apt install kubeadm=1.20.0-00 kubectl=1.20.0-00 kubelet=1.20.0-00 -y
  • All three packages are now installed.
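An optional but common follow-up is to pin these packages so that a routine apt upgrade does not move the cluster to a different version; a small sketch:

# Prevent apt from upgrading the Kubernetes packages automatically
apt-mark hold kubelet kubeadm kubectl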
  • Run kubeadm init on the master server only:
kubeadm init
  • For the cluster configuration, create a .kube directory on the master server (done below).
  • You can then join any number of worker nodes by running the kubeadm join command that kubeadm init prints (see the example below).
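The join command printed by kubeadm init looks roughly like the one below; the IP address, token, and hash here are placeholders, so copy the exact command from your own kubeadm init output and run it as root on each worker node:

# Placeholder values for illustration only; use the command printed by your kubeadm init
kubeadm join 172.31.10.20:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:<hash-from-your-init-output>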
  • Before joining the workers to the cluster, we have to set up the configuration and a pod network:
mkdir -p $HOME/.kube

NOTE : For the cluster configuration, we create the .kube directory in HOME.

  • Copy the admin configuration into .kube:
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  • Now change the ownership of the config file:
sudo chown $(id -u):$(id -g) $HOME/.kube/config
  • Install a pod networking plugin; here we use Calico.
  • Apply the Calico network plugin by running the command
kubectl apply -f https://docs.projectcalico.org/v3.9/manifests/calico.yaml
  • Deploy the NGINX Ingress Controller by executing the command
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v0.49.0/deploy/static/provider/baremetal/deploy.yaml
  • To verify the installation and check the nodes in the cluster, use the command
kubectl get nodes

NOTE :
In conclusion, bootstrapping a Kubernetes cluster with separate master and worker nodes using Kubeadm ensures scalability and resilience for production environments. This method optimizes resource management and workload distribution, enabling efficient container orchestration. Kubeadm’s user-friendly interface simplifies deployment, empowering organizations to fully utilize Kubernetes for their applications.

Deploying Pods on Worker Nodes After Installation :

Kubernetes Deployments :

  • A Kubernetes Deployment is a resource object that provides declarative updates to applications.
  • A Deployment lets you describe an application's lifecycle, such as which image to use, how many pod replicas to run, and how they should be updated.
Control loop

Kubernetes — YAML Configuration File :

  • The desired state of a Kubernetes cluster is defined in the configuration file.
  • The file is typically in YAML or JSON format. YAML files are commonly used to define and configure various Kubernetes resources such as pods.

Simple YAML Configuration File for a Kubernetes Pod

apiVersion: v1
kind: Pod
metadata:
  name: mypod
  labels:
    app: myapp
spec:
  containers:
  - name: my-container
    image: nginx:latest
    ports:
    - containerPort: 80

Step 1: Define Kubernetes API Version:

  • Begin by specifying the API version of Kubernetes to use, indicated by ‘apiVersion: v1’.

Step 2: Specify Object Kind:

  • Set the kind of object to create as a ‘Pod’, indicated by ‘kind: Pod’.

Step 3: Provide Metadata:

  • Under ‘metadata’, assign a name to the pod using ‘name: mypod’, and add labels for identification, such as ‘app: myapp’.

Step 4: Define Pod Specification:

  • In the ‘spec’ section, outline the desired state and resources for the pod.

Step 5: Configure Containers:

  • Within the ‘containers’ section, define details for the container to run within the pod.
  • Assign a name to the container (‘name: my-container’); note that container names must be lowercase DNS-compliant labels.
  • Specify the Docker image to use for the container (‘image: nginx:latest’).
  • Optionally, configure ports for the container (‘ports’).

Step 6: Finalize Configuration:

  • Ensure all necessary configurations, such as port mappings, are accurately defined.

Step 7: Apply YAML File:

Apply the YAML file using ‘kubectl apply -f filename.yaml’ to create the pod in the Kubernetes cluster.
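Assuming the manifest above is saved as mypod.yaml (a filename chosen here purely for illustration), a quick way to create and inspect the pod:

kubectl apply -f mypod.yaml     # create the pod
kubectl get pods                # mypod should reach the Running state
kubectl describe pod mypod      # inspect events if the pod does not start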

Deployment : manifest file

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx-container
        image: nginx:latest
        ports:
        - containerPort: 80

In this manifest file:

  • apiVersion : apps/v1 specifies the Kubernetes API version for Deployments.
  • kind : Deployment defines the type of object being created as a Deployment.
  • metadata : section assigns a name to the Deployment.
  • spec : section outlines the desired state and resources for the Deployment.
  • replicas : specifies that the Deployment should manage 3 replicas of the Pod.
  • selector : defines the labels used to match the Pods controlled by the Deployment.
  • template : defines the Pod template used by the Deployment.
  • containers : section specifies details for the container to run within the Pod, including its name, image, and ports.
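Once this Deployment exists in the cluster, the replica count can be changed either declaratively, by editing replicas in the file and re-applying it, or imperatively with standard kubectl commands; a small sketch:

kubectl scale deployment nginx-deployment --replicas=5   # scale out to 5 pods
kubectl rollout status deployment/nginx-deployment       # wait for the rollout to finish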

“Deploying NGINX in Pods Using Kubernetes Deployment”

  • Creating a Deployment YAML File for NGINX
vi nginx-deployment.yaml
  • YAML FILE
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx-container
        image: nginx:latest
        ports:
        - containerPort: 80
  • Refer to the Kubernetes documentation for detailed instructions and best practices.
  • Create the Deployment:
kubectl apply -f nginx-deployment.yaml
  • Confirm that the Deployment was created:
kubectl get deployments
  • Check whether all pods are running:
kubectl get pods
  • Now pick one pod and delete it:
kubectl delete pod podname

Note : After the delete, the Deployment automatically creates one new pod to replace it; this self-healing behaviour is the control loop in action.
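To watch the control loop replace the deleted pod in real time, a small sketch using standard kubectl flags:

kubectl get pods -w                       # watch pod changes live; press Ctrl+C to stop
kubectl get deployment nginx-deployment   # READY should return to 3/3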

Setting Up Minikube: Easy Installation of a Single-Node Cluster

Minikube is a tool used for locally running Kubernetes clusters on a single machine. It provides a lightweight and easy-to-use solution for developers to experiment, develop, and test Kubernetes applications without needing access to a full-scale production environment. Minikube abstracts away the complexities of setting up a Kubernetes cluster, allowing developers to focus on building and iterating on their applications. It’s particularly useful for testing applications in a Kubernetes-like environment before deploying them to a production cluster, thus streamlining the development process and increasing productivity.

STEPS :

  1. Login to your AWS account and launch one EC2 instance with Ubuntu OS, using the t2.medium instance type, and open all ports for study purposes.
  2. After launching the instance, update it by executing the command
sudo apt-get update

3. After updating the instance, install Docker:

sudo apt-get install docker.io

4. Install Dependencies

sudo apt-get update && sudo apt-get install -y apt-transport-https gnupg2 curl

5. Obtain the Kubernetes Repository key

curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -

6. Now Add the repository to the system

echo "deb http://apt.kubernetes.io/ kubernetes-xenial main" >/etc/apt/sources.list.d/kubernetes.list

7. Update the package index one more time

sudo apt-get update

8. Download the Minikube binary

curl -Lo minikube https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64



9. Give Execute permission

chmod +x minikube
  • minikube is now executable

10. Now move the minikube binary into the PATH

sudo mv minikube /usr/local/bin/

11. Now exit from root and go to the bin directory

cd /usr/local/bin

12. Add user into docker user group

sudo usermod -aG docker $USER && newgrp docker

13. Start minikube

minikube start
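Minikube picks a driver automatically; since Docker was installed above and the user was added to the docker group, the Docker driver can also be selected explicitly (an optional variant of the command above):

minikube start --driver=docker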

14. Check the Version

minikube version

15. Now repeat the same NGINX deployment here

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx-container
        image: nginx:latest
        ports:
        - containerPort: 80

16. After creating the deployment file “nginx-deployment.yaml”, apply it using the command below

kubectl apply -f nginx-deployment.yaml
  • Note : The -f flag here means “file” (the manifest to apply), not “force”; it does not overwrite any permissions.

17. Check the Minikube version and status

minikube version
minikube status

18. Now run “minikube dashboard”; it launches the Kubernetes dashboard so you can visualize the whole cluster at once

minikube dashboard

19. Now enable addons

minikube addons enable metrics-server
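Once the metrics-server addon is running (it can take a minute before it starts reporting), resource usage can be checked with kubectl top:

kubectl top nodes   # CPU and memory usage per node
kubectl top pods    # CPU and memory usage per pod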

20. List the available addons

minikube addons list

21. Get the nodes

minikube kubectl -- get nodes

22. Run one test pod

kubectl run testpod --image=nginx --restart=Never
kubectl get pods
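When you are done experimenting, the test pod and the local cluster can be cleaned up with standard commands:

kubectl delete pod testpod   # remove the test pod
minikube stop                # stop the local cluster (minikube delete removes it entirely)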

In conclusion, Kubernetes stands out as a pivotal technology in empowering DevOps mastery. Through its robust features and flexibility, it streamlines container orchestration, enhancing scalability and efficiency in modern software development. By leveraging the bootstrap method and Minikube, developers gain hands-on experience in setting up Kubernetes clusters locally, enabling rapid prototyping and testing. This approach not only fosters a deeper understanding of Kubernetes but also accelerates the journey towards mastering DevOps practices.
