Kubernetes for Application Developers

Kubernetes is an open-source system for automating deployment, scaling, and management of containerized applications. The name Kubernetes originates from Greek, meaning helmsman or pilot. Google open-sourced the Kubernetes project in 2014.

Why we need Kubernetes:

  • Service discovery and load balancing: Kubernetes can expose a container using a DNS name or an IP address. If traffic to a container is high, Kubernetes load balances and distributes the traffic so that the deployment stays stable.
  • Storage orchestration: Kubernetes can mount different storage systems, such as local storage, cloud-provided storage, and others.
  • Automated rollouts and rollbacks: The desired state of the containers can be configured, and Kubernetes changes the actual state to the desired state in a controlled manner. Kubernetes can add new containers, move resources to new containers, remove old containers, and switch a deployment back to any previously deployed revision.
  • Resource optimization: We can tell Kubernetes how much CPU and memory each container requires, and Kubernetes fits containers onto nodes (virtual machines) to make the best use of resources.
  • Self-healing: Kubernetes automatically restarts failed containers, kills containers that do not respond to health checks, and publishes a container to the outside world only when it is ready.
  • Secret and configuration management: Kubernetes can store and manage sensitive information. We can deploy and update secrets without rebuilding container images.

Kubernetes Architecture:

Node: It is a physical or virtual machine where Kubernetes is installed. Nodes were known as minions in the past.

Cluster: A set of nodes grouped together. If a node goes down, the other nodes in the cluster handle its requests. Multiple nodes also help in sharing load.

Two types of nodes:

  • Master: Responsible for managing the cluster.
  • Worker: Machines where containers are launched by Kubernetes.

When we install Kubernetes, the following components are installed on the machines:

  • apiserver: It is the frontend for Kubernetes, exposing the Kubernetes API. It is designed to scale horizontally, that is, by deploying more instances; we can run several instances of the apiserver and balance traffic between them.
  • etcd: A distributed, reliable key-value store used by Kubernetes to store all the data used to manage the cluster.
  • scheduler: It is responsible for distributing work, or containers, across multiple nodes. It looks for newly created containers and assigns them to nodes.
  • controllers: The brain of Kubernetes. Controllers notice and respond when nodes, containers, or endpoints go down, and make the decision to bring up new containers in such cases.
  • container runtime: The underlying software used to run containers. Docker, containerd, CRI-O, and rktlet are container runtimes.
  • kubelet: The agent that runs on each worker node of the cluster. It makes sure that containers are running on the node as expected.
  • kube-proxy: A network proxy that runs on each node in the cluster, implementing part of the Kubernetes Service concept. kube-proxy maintains network rules on nodes and is used to access pods from within or outside the cluster.
  • cloud controller manager: It is based on a plugin mechanism that allows new cloud providers to integrate with Kubernetes easily. It allows cloud-vendor-specific code and the Kubernetes core to evolve independently of one another.

Add-ons:

  • Web UI: A web-based UI for Kubernetes clusters, used to manage and troubleshoot applications running in the cluster.
  • Monitoring: Records metrics about the containers in the cluster and provides a UI for browsing the data.
  • DNS: A DNS server for Kubernetes clusters, which serves DNS records for Kubernetes services.
  • Logging: A cluster-level mechanism responsible for saving container logs to a central log store with a search/browsing interface.

kubectl:

It is the Kubernetes command-line utility. This tool is used to deploy and manage applications on a Kubernetes cluster, e.g. get cluster information, get the status of nodes in the cluster, set environment variables, mount storage, and manage many other things.

Kubernetes Objects:

Kubernetes objects are persistent entities in the Kubernetes system. Kubernetes uses these entities to represent the state of the cluster. Specifically, Kubernetes objects describe:

  • What containers are running
  • The resources available to these containers
  • The policies around these applications

Kubernetes objects are configured in ‘.yaml’ files. The following fields are generally defined in a YAML file for a Kubernetes object:

  • apiVersion — Version of the Kubernetes API used to create object
  • kind — What kind of object it is, e.g. Pod, Deployment, Service, etc.
  • metadata — Data that helps uniquely identify the object, including a name string, UID, and optional namespace
  • spec — desired state of the object
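The four fields above fit together as follows; a minimal sketch using the nginx image as an illustrative example (the object name is hypothetical):

```yaml
apiVersion: v1            # version of the Kubernetes API used to create the object
kind: Pod                 # the kind of object
metadata:
  name: example-pod       # hypothetical name; uniquely identifies the object
spec:                     # desired state of the object
  containers:
  - name: web
    image: nginx
```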

Kubernetes Local Setup:

Minikube: It is a single-node Kubernetes cluster.

  • install brew: https://brew.sh/
  • install minikube: brew install minikube
  • install kubectl: brew install kubectl
  • start minikube server: minikube start
  • check kubectl version: kubectl version

If the ‘kubectl’ command works, your local Kubernetes setup is in place.

Imperative Commands:

A user directly operates on live objects in the cluster using imperative commands. For imperative commands, the user provides operations as arguments or flags to the kubectl command.

  • Create an object:
kubectl create -f <path of yaml file>
Examples: Create the object defined in nginx.yaml
kubectl create -f nginx.yaml
  • Delete an object:
kubectl delete -f <path of yaml file>
kubectl delete <object type> <object name>
Examples: Delete the object defined in nginx.yaml
kubectl delete -f nginx.yaml
Delete a pod having name foo
kubectl delete pod foo
Delete a deployment having name bar
kubectl delete deployment bar
  • Replace an object:
kubectl replace -f <path of yaml file>
Examples: Replace the pod with the new definition defined in nginx.yaml
kubectl replace -f nginx.yaml
  • Get objects:
kubectl get <object type> <object name>
Examples: Get all pods
kubectl get pods
Get detail of pod foo
kubectl get pod foo
Get all pods in the marketing namespace
kubectl get pods -n marketing
Get pods in all namespaces
kubectl get pods --all-namespaces
Get deployment called bar
kubectl get deployment bar
Get wide detail of pod foo
kubectl get pod foo -o wide
Get detail for a pod called foo in yaml format
kubectl get pod foo -o yaml
Get detail for a pod called foo in json format
kubectl get pod foo -o json
Get pods sorted by name
kubectl get pods --sort-by=.metadata.name
  • Describe objects:
kubectl describe <object type> <object name>
Examples: Describe pod nginx
kubectl describe pod nginx
Describe deployment foo
kubectl describe deployment foo

Kubernetes Objects in Detail:

Pods: A Pod is the smallest object that can be created in Kubernetes. Containers are not deployed directly in Kubernetes; rather, they are encapsulated in a pod. Containers inside a pod share storage, the network, and information about how to run the containers.

kubectl run --generator='run-pod/v1' <pod name> --image=<image name>
Example: Create a pod named foo using image nginx in the default namespace
kubectl run --generator='run-pod/v1' foo --image=nginx

Yaml file:

apiVersion: v1
kind: Pod
metadata:
  name: foo
spec:
  containers:
  - name: foo
    image: nginx
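Since the containers in a pod share storage and network, a pod may also hold more than one container; a hypothetical two-container sketch (the sidecar name and its command are illustrative, not from the original):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: foo-with-sidecar    # hypothetical name
spec:
  containers:
  - name: foo               # main container
    image: nginx
  - name: log-sidecar       # second container in the same pod, sharing its network
    image: busybox
    command: ["/bin/sh", "-c", "sleep 3600"]
```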

ReplicaSet: For high availability we want more than one instance of a pod running inside the cluster at the same time, so that if one of the pods goes down, other pods are available. A Replication Controller or ReplicaSet is used for this purpose.

Below is an example of a ReplicaSet which runs 5 instances of the image ‘gcr.io/google_samples/gb-frontend’:

apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  replicas: 5
  selector:
    matchLabels:
      tier: frontend
  template:
    metadata:
      labels:
        tier: frontend
    spec:
      containers:
      - name: php-redis
        image: gcr.io/google_samples/gb-frontend:v3

Deployment: It is used in production environments and provides declarative updates to pods and ReplicaSets. When we create a Deployment, a corresponding ReplicaSet and pods are also created; their names are prefixed by the deployment name.

kubectl create deployment <deployment name> --image=<image name>
Example: Create a deployment called nginx with image nginx:1.7.8
kubectl create deployment nginx --image=nginx:1.7.8
Just create the yaml in a file called deploy.yaml
kubectl create deployment nginx --image=nginx:1.7.8 --dry-run -o yaml > deploy.yaml
Scale the above deployment to 2 replicas
kubectl scale --replicas=2 deployment nginx

Yaml file:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.8

Namespace: Kubernetes supports multiple virtual clusters backed by the same physical cluster. These virtual clusters are called namespaces. The name of a resource within a namespace must be unique.

kubectl create namespace <namespace name>
Examples: Create a namespace called dev
kubectl create namespace dev

Yaml file:

apiVersion: v1
kind: Namespace
metadata:
  name: dev

Commands and args: In Kubernetes, command is equivalent to ‘ENTRYPOINT’ and args is equivalent to ‘CMD’ in Docker. The value of command cannot be overridden from the command line, but the value of args can be overridden.

Dockerfile:
FROM ubuntu
ENTRYPOINT ["sleep"]
CMD ["5"]
docker run ubuntu -> runs command 'sleep 5' on start up
docker run ubuntu 10 -> runs command 'sleep 10' on start up
ENTRYPOINT is equivalent to 'command' in Kubernetes
CMD is equivalent to 'args' in Kubernetes
Example: Create a busybox pod which prints 'Hello World' on startup and then sleeps for 60 seconds
kubectl run --generator='run-pod/v1' busybox --image=busybox --dry-run -o yaml -- /bin/sh -c 'echo Hello World' > busybox.yaml
vi busybox.yaml
set 'command: ["/bin/sh", "-c"]' and change args to 'echo Hello World; sleep 60' inside the busybox container
kubectl create -f busybox.yaml

Yaml file:

apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: busybox
  name: busybox
spec:
  containers:
  - command: ["/bin/sh", "-c"]
    args:
    - echo Hello World; sleep 60
    image: busybox
    name: busybox
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always

Env Variables: Environment variables can be added to a container in the following ways:

Name/value pair:

spec:
  containers:
  - name: busybox
    image: busybox
    env:
    - name: <env_name>
      value: <value>

From ConfigMap:

spec:
  containers:
  - name: busybox
    image: busybox
    env:
    - name: <env_name>
      valueFrom:
        configMapKeyRef:
          name: <configmap name>
          key: <a key of the configmap above>
    envFrom:
    - configMapRef:
        # env vars are created from all the keys of the configmap.
        name: <configmap name>

From Secret:

spec:
  containers:
  - name: busybox
    image: busybox
    env:
    - name: <env_name>
      valueFrom:
        secretKeyRef:
          name: <secret name>
          key: <a key of the secret above>
    envFrom:
    - secretRef:
        # env vars are created from all the keys of the secret.
        name: <secret name>

ConfigMap: Configuration parameters for containers are created using a ConfigMap. It stores configuration data in plain-text format.

Create a configmap from key/value pairs
kubectl create configmap <configmap name> --from-literal=key1=value1 --from-literal=key2=value2
Create a configmap from the configuration defined in a file
kubectl create configmap <configmap name> --from-file=<file name>
Create a configmap from an env file
kubectl create configmap <configmap name> --from-env-file=<env file name>
Example: Create a configmap called 'options' with the value var5=val5. Create a new nginx pod that loads the value of variable 'var5' into an env variable called 'option'
kubectl create configmap options --from-literal=var5=val5
kubectl run --generator='run-pod/v1' nginx --image=nginx --dry-run -o yaml > pod.yaml
vi pod.yaml

Yaml file:

apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: nginx
  name: nginx
spec:
  containers:
  - image: nginx
    name: nginx
    resources: {}
    env:
    - name: option
      valueFrom:
        configMapKeyRef:
          name: options
          key: var5
  dnsPolicy: ClusterFirst
  restartPolicy: Always

Secret: It is similar to a ConfigMap, but values are base64-encoded before being stored (encoded, not encrypted).

kubectl create secret generic <secret name> --from-literal=key1=value1 --from-literal=key2=value2

Examples:
Create a secret called mysecret with the values user=MJ and password=mypass

kubectl create secret generic mysecret --from-literal=user=MJ --from-literal=password=mypass

Yaml file for the secret (notice the values are encoded):

apiVersion: v1
data:
  password: bXlwYXNz
  user: TUo=
kind: Secret
metadata:
  creationTimestamp: "2020-01-01T10:59:31Z"
  name: mysecret
  namespace: default
  resourceVersion: "723"
  selfLink: /api/v1/namespaces/default/secrets/mysecret
  uid: 0e79dfe6-dda8-466b-800f-26fc81c298f8
type: Opaque
---------------------
Decode the values:
password:
echo bXlwYXNz | base64 -d
user:
echo TUo= | base64 -d
-----------------------
Create an nginx pod that mounts the secret mysecret in a volume on path /etc/foo
kubectl run --generator='run-pod/v1' nginx --image=nginx --dry-run -o yaml> pod.yaml
vi pod.yaml

Yaml file for nginx pod:

apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: nginx
  name: nginx
spec:
  volumes:
  - name: secret-vol
    secret:
      secretName: mysecret
  containers:
  - image: nginx
    name: nginx
    resources: {}
    volumeMounts:
    - mountPath: /etc/foo
      name: secret-vol
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}

Security Context: Containers run as the root user by default. Using a security context, runAsUser and runAsGroup can be defined for pods or containers. Security context configuration on a container takes precedence over configuration on the pod.

Create the YAML for an nginx pod that runs with the user ID 101, Group Id 1 and capabilities "NET_ADMIN", "SYS_TIME" added to its container
kubectl run --generator='run-pod/v1' nginx --image=nginx --dry-run -o yaml> pod.yaml
vi pod.yaml

Yaml file:

apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: nginx
  name: nginx
spec:
  securityContext:
    runAsUser: 101
    runAsGroup: 1
  containers:
  - image: nginx
    name: nginx
    resources: {}
    securityContext:
      capabilities:
        add: ["NET_ADMIN", "SYS_TIME"]
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}

Service Account: A service account is used by applications to interact with the Kubernetes cluster. Whenever a service account is created, a corresponding token (a Secret object) is created and linked to the service account.

kubectl create serviceaccount <service account name>
Example:
Create a serviceaccount myuser and create a busybox pod which uses this service account
kubectl create serviceaccount myuser
kubectl run --generator='run-pod/v1' busybox --image=busybox --serviceaccount=myuser

Yaml file:

apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: busybox
  name: busybox
spec:
  containers:
  - image: busybox
    name: busybox
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
  serviceAccountName: myuser
status: {}

Requests and Limits: Containers can declare resource requirements, based on which they are placed on appropriate nodes. We can put limits on resources so pods cannot use more CPU and memory than specified. If a pod tries to use more CPU than its limit, it is simply throttled; if it tries to use more memory than its limit, it is terminated.

Create an nginx pod with requests cpu=100m,memory=256Mi and limits cpu=200m,memory=512Mi
kubectl run --generator='run-pod/v1' nginx --image=nginx --requests='cpu=100m,memory=256Mi' --limits='cpu=200m,memory=512Mi'

Yaml file:

apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: nginx
  name: nginx
spec:
  containers:
  - image: nginx
    name: nginx
    resources:
      requests:
        cpu: 100m
        memory: 256Mi
      limits:
        cpu: 200m
        memory: 512Mi
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}

Taints and Tolerations: Taints and tolerations maintain the relationship between pods and nodes, defining which nodes can accept which pods. If a node is tainted, only pods with a toleration for that taint can be placed on it.

taint a node:
kubectl taint node <node name> key=value:taint-effect
The taint effect defines what happens to a pod that cannot tolerate the taint.
Values of taint-effect are:
NoSchedule: Pods will not be scheduled on the node
PreferNoSchedule: The system will try not to schedule an intolerant pod on the node, but it is not guaranteed.
NoExecute: Existing intolerant pods on the node are evicted when the taint is applied.
Examples:
Taint a node nginx with app=blue and taint-effect as 'NoSchedule'
kubectl taint node nginx app=blue:NoSchedule
Corresponding toleration for the pod:
Yaml file:
-----------------------------
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: nginx
  name: nginx
spec:
  containers:
  - name: nginx
    image: nginx
  tolerations:
  - key: "app"
    operator: "Equal"
    value: "blue"
    effect: "NoSchedule"

Node Affinity: Node affinity defines which pods can be placed on which nodes; it limits pod placement to specific nodes. It provides advanced operators like ‘In’, ‘NotIn’, ‘Exists’, etc.

Node affinity for a pod which can be placed only on nodes which have ‘size=Large’ as a label.
Yaml file:
------------------------------
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: nginx
  name: nginx
spec:
  containers:
  - name: nginx
    image: nginx
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: size
            operator: In
            values:
            - Large
-----------------------------
Node Affinity Types:
requiredDuringSchedulingIgnoredDuringExecution
preferredDuringSchedulingIgnoredDuringExecution

Readiness and Liveness probe:

Readiness probe: Some containers take time to start up, and during this startup time they are available to other services by default, which results in errors. Using a readiness probe, a pod that is not ready will not receive traffic through a Kubernetes Service.

Liveness Probe: A liveness probe checks the health of the application, and if the configured liveness check fails, the container is restarted.

Example: Create a busybox pod (that includes port 80) with an HTTP readinessProbe on path '/ready' on port 80. It also checks the health of the application using a livenessProbe at '/live' on port 80.

Yaml file:
---------------------------------
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: busybox
  name: busybox
spec:
  containers:
  - name: busybox
    image: busybox
    readinessProbe:
      httpGet:
        path: /ready
        port: 80
    livenessProbe:
      httpGet:
        path: /live
        port: 80
----------------------------------
Other important parameters of probes:
initialDelaySeconds: Number of seconds after the container has started before liveness or readiness probes are initiated. Defaults to 0 seconds; minimum value is 0.
periodSeconds: How often (in seconds) to perform the probe. Defaults to 10 seconds; minimum value is 1.

Logging: It is used to view the logs of pods/containers.

View the logs of a pod
kubectl logs <pod name>
For a multi-container pod
kubectl logs <pod name> -c <container name>
Examples: View the logs of the nginx pod
kubectl logs nginx
View the logs of container busybox1 of busybox pod
kubectl logs busybox -c busybox1

Monitoring: A metrics server must be installed to get monitoring information about pods and nodes.

Get top node
kubectl top node
Get top pod
kubectl top pod
Example: get the node which is consuming the most CPU
kubectl top node --sort-by=cpu

Labels: Labels are key-value pairs used to tag Kubernetes objects, and selectors are used to select Kubernetes objects using labels.

Annotations also tag Kubernetes objects with key/value pairs, but annotations are not used for selecting Kubernetes objects.

Label an object
kubectl label <object type> <object name> <key>=<value>
Update the label of an object
kubectl label <object type> <object name> <key>=<value> --overwrite
Remove a label from an object
kubectl label <object type> <object name> <key>-
Similarly for annotations, use 'kubectl annotate' instead of 'kubectl label'
Examples: Label the nginx pod with app=blue
kubectl label pod nginx app=blue
Update label app to green for the nginx pod
kubectl label pod nginx app=green --overwrite
Remove label app from the nginx pod
kubectl label pod nginx app-
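In a manifest, labels and annotations both live under metadata; a minimal sketch (the annotation key and value are hypothetical, added only for illustration):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:                    # selectable via label selectors
    app: blue
  annotations:               # informational only, not used for selection
    build-commit: "abc123"   # hypothetical annotation
spec:
  containers:
  - name: nginx
    image: nginx
```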

NodeSelector:

Create a pod that will be deployed to a node that has the label 'accelerator=nvidia-tesla-p100'.

Yaml file:
---------------------------------
apiVersion: v1
kind: Pod
metadata:
  name: cuda-test
spec:
  containers:
  - name: cuda-test
    image: "k8s.gcr.io/cuda-vector-add:v0.1"
  nodeSelector:
    accelerator: nvidia-tesla-p100

Rollout: It is used to view the history of a deployment, undo a deployment to a specific revision, and check the status of a deployment. Different rollout strategies are available: in the Recreate strategy, all the existing pods are deleted before the new ones are created; in the RollingUpdate strategy, pods are deleted and created in a phased manner.

View the status of deployment
kubectl rollout status deployment <deployment name>
Undo deployment to previous version
kubectl rollout undo deployment <deployment name>
Undo deployment to a specific revision
kubectl rollout undo deployment <deployment name> --to-revision=<revision no>
View the history of deployment
kubectl rollout history deployment <deployment name>

Jobs: A Job creates one or more Pods and ensures that a specified number of them successfully terminate. As pods successfully complete, the Job tracks the successful completions. When a specified number of successful completions is reached, the task (ie, Job) is complete. Deleting a Job will clean up the Pods it created.

Create a job
kubectl create job <job name> -- <args>
Example: Create a job with the image busybox that executes the command 'echo hello;sleep 30;echo world'. Terminate the job if it takes more than 30 seconds to execute; the job should run 5 times, with 5 pods in parallel.
kubectl create job busybox --image=busybox --dry-run -o yaml -- /bin/sh -c 'echo hello;sleep 30;echo world' > job.yaml
vi job.yaml

Yaml file:

apiVersion: batch/v1
kind: Job
metadata:
  creationTimestamp: null
  name: busybox
spec:
  activeDeadlineSeconds: 30
  completions: 5
  parallelism: 5
  template:
    metadata:
      creationTimestamp: null
    spec:
      containers:
      - command:
        - /bin/sh
        - -c
        - echo hello;sleep 30;echo world
        image: busybox
        name: busybox
        resources: {}
      restartPolicy: Never
status: {}

CronJob: A scheduled Job is called a CronJob.

Create a cron job
kubectl create cronjob <cron job name> --schedule=<schedule time> -- <args>
Example: Create a cron job with image busybox that runs on a schedule of "*/1 * * * *" and writes 'date; echo Hello from Kubernetes' to standard output
kubectl create cronjob busybox --image=busybox --schedule="*/1 * * * *" -- /bin/sh -c 'date; echo Hello from Kubernetes'

Yaml file:

apiVersion: batch/v1beta1
kind: CronJob
metadata:
  creationTimestamp: null
  name: busybox
spec:
  jobTemplate:
    metadata:
      creationTimestamp: null
      name: busybox
    spec:
      template:
        metadata:
          creationTimestamp: null
        spec:
          containers:
          - command:
            - /bin/sh
            - -c
            - date; echo Hello from Kubernetes
            image: busybox
            name: busybox
            resources: {}
          restartPolicy: OnFailure
  schedule: '*/1 * * * *'
status: {}

Persistent Volumes and Claim:

Volumes: Volumes are storage mounted into a pod. If a pod is deleted, the corresponding volume is also deleted along with the pod. This type of volume is not recommended in a multi-node cluster.

Volume example:
Yaml file:
-------------
apiVersion: v1
kind: Pod
metadata:
  name: volume-demo
spec:
  containers:
  - name: volume-demo
    image: alpine
    command: ["/bin/sh", "-c"]
    args: ["shuf -i 0-100 -n 1 >> /opt/number.out;"]
    volumeMounts:
    - name: data-volume
      mountPath: /opt
  volumes:
  - name: data-volume
    hostPath:
      path: /data
      type: Directory

Persistent Volume (PV): It is a pool of storage in a cluster, configured by an administrator, to be used by users deploying applications on the cluster.

Create a persistent volume of storage 5Gi and access mode ReadWriteOnce:
Yaml file:
-----------------------------------
apiVersion: v1
kind: PersistentVolume
metadata:
  name: demo-pv
  labels:
    release: stable
spec:
  capacity:
    storage: 5Gi
  volumeMode: Filesystem
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Recycle
  storageClassName: slow
-----------------------------------
Values for persistentVolumeReclaimPolicy:
Recycle: It performs a basic scrub on the volume and makes it available again for a new claim.
Retain: When the PVC is deleted, the PV still exists and the volume is considered 'released'.
Delete: It deletes both the PV and the corresponding storage asset.
Values for access modes are:
ReadWriteOnce – the volume can be mounted as read-write by a single node
ReadOnlyMany – the volume can be mounted read-only by many nodes
ReadWriteMany – the volume can be mounted as read-write by many nodes

Persistent Volume Claim (PVC): It is a request for storage by a user. A PVC consumes PV resources. Claims can request a specific size and access modes, based on which a PV is allocated.

Create a PVC with a 5Gi storage request:
Yaml file:
-------------------------------
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myclaim
spec:
  accessModes:
  - ReadWriteOnce
  volumeMode: Filesystem
  resources:
    requests:
      storage: 5Gi
  storageClassName: slow
  selector:
    matchLabels:
      release: stable
---------------------------------
selector:
matchLabels - the volume must have a label with this value
matchExpressions - a list of requirements made by specifying a key, a list of values, and an operator that relates the key and values. Valid operators include In, NotIn, Exists, and DoesNotExist.

Mount the persistent volume claim in a pod at mount path /var/www/html:

apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  containers:
  - name: myfrontend
    image: nginx
    volumeMounts:
    - mountPath: "/var/www/html"
      name: mypd
  volumes:
  - name: mypd
    persistentVolumeClaim:
      claimName: myclaim

Services: Service is an abstraction which defines a logical set of Pods and a policy by which to access them (sometimes this pattern is called a micro-service). The set of Pods targeted by a Service is usually determined by a selector.

Service Types:

  • ClusterIP: Exposes the Service on a cluster-internal IP. Choosing this value makes the Service only reachable from within the cluster. This is the default ServiceType.
  • NodePort: Exposes the Service on each Node’s IP at a static port (the NodePort). A ClusterIP Service, to which the NodePort Service routes, is automatically created. From outside the cluster, the NodePort Service is accessed via <NodeIP>:<NodePort>.


apiVersion: v1
kind: Service
metadata:
  name: myapp-services
spec:
  type: NodePort
  ports:
  - port: 80
    targetPort: 80
    nodePort: 30008
  selector:
    app: myapp
    type: frontend
---------------------------
The selector defines the labels of the pods to which the Service is applied.
  • LoadBalancer: Exposes the Service externally using a cloud provider’s load balancer. NodePort and ClusterIP Services, to which the external load balancer routes, are automatically created.
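A LoadBalancer Service manifest looks like the NodePort one above with only the type changed; a minimal sketch (the service name is illustrative, not from the original):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: myapp-lb         # hypothetical name
spec:
  type: LoadBalancer     # the cloud provider provisions an external load balancer
  ports:
  - port: 80
    targetPort: 80
  selector:
    app: myapp
```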

Ingress Networking: Ingress exposes HTTP and HTTPS routes from outside the cluster to services within the cluster. Traffic routing is controlled by rules defined on the Ingress resource.

Internet -> Ingress -> Service

There must be an ingress controller to satisfy an Ingress; only creating an Ingress resource has no effect. An ingress controller such as ingress-nginx must be deployed for the Ingress to work.

If the request is http://foo.bar.com/foo, then the service named 'service1' on port 4200 will handle the request, and if it is http://foo.bar.com/bar, then the service named 'service2' on port 4200 will handle the request.

Yaml file:
------------------------------------
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: test-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: foo.bar.com
    http:
      paths:
      - path: /foo
        backend:
          serviceName: service1
          servicePort: 4200
      - path: /bar
        backend:
          serviceName: service2
          servicePort: 4200

Network Policy: A network policy is a specification of how groups of pods are allowed to communicate with each other and other network endpoints.

PolicyTypes:

Ingress: Defines the pods from which a pod can be accessed.

Egress: Defines the pods which can be accessed by a pod.

A pod having the label 'role=app' in the default namespace can be accessed by pods having the label 'role=frontend' in the 'myproject' namespace.
Similarly, a pod having the label 'role=app' in the default namespace can access pods having the label 'role=db' in the 'myproject' namespace.
Yaml file:
-----------------------------------------
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: test-network-policy
  namespace: default
spec:
  podSelector:
    matchLabels:
      role: app
  policyTypes:
  - Ingress
  - Egress
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          project: myproject
    - podSelector:
        matchLabels:
          role: frontend
    ports:
    - protocol: TCP
      port: 6379
  egress:
  - to:
    - namespaceSelector:
        matchLabels:
          project: myproject
    - podSelector:
        matchLabels:
          role: db
    ports:
    - protocol: TCP
      port: 5978
---------------------------------------
Example: Create an nginx deployment of 2 replicas and expose it via a ClusterIP service on port 80. Create a NetworkPolicy so that only pods with the label 'access: true' can access the deployment, and apply it.
kubectl create deployment nginx --image=nginx
kubectl scale --replicas=2 deployment nginx
kubectl expose deployment nginx --port=80
vi policy.yaml
Yaml for network policy:
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: access-nginx
spec:
  podSelector:
    matchLabels:
      run: nginx
  ingress:
  - from:
    - podSelector:
        matchLabels:
          access: 'true'

It is a very long document, but I hope I covered all the topics that an app developer using Kubernetes in their project should know.

If you need to know more about Kubernetes, please refer to the Kubernetes docs at: https://kubernetes.io/docs/

Mritunjay Kumar, Staff Software Engineer at Intuit