Kubernetes: How to Orchestrate the Cloud

Basma A
Published in The Startup
8 min read · May 11, 2020

In this story, we are going to see how Kubernetes helps us deploy our microservices and takes care of them for us.

1. In the Google Cloud Shell, first let's set the default zone:
BinaryMonster@cloudshell:~ (gcp-Project-ID)$ gcloud config set compute/zone us-central1-b

2. Create a Kubernetes cluster

BinaryMonster@cloudshell:~ (gcp-Project-ID)$ gcloud container clusters create binarymonster

3. Get the sample code

BinaryMonster@cloudshell:~ (gcp-Project-ID)$ git clone https://github.com/googlecodelabs/orchestrate-with-kubernetes.git

4. Our first deployment

BinaryMonster@cloudshell:~ (gcp-Project-ID)$ kubectl create deployment nginx --image=nginx:1.10.0

Kubernetes has created a deployment with a single instance of the nginx container.

In Kubernetes, all containers run in a pod. Use the kubectl get pods command to view the running nginx container:

BinaryMonster@cloudshell:~ (gcp-Project-ID)$ kubectl get pods

Once the nginx container is running you can expose it outside of Kubernetes using the kubectl expose command:

BinaryMonster@cloudshell:~ (gcp-Project-ID)$ kubectl expose deployment nginx --port 80 --type LoadBalancer

Kubernetes created an external Load Balancer with a public IP address attached to it. Any client who hits that public IP address will be routed to the pods behind the service. In this case that would be the nginx pod.
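Under the hood, kubectl expose is shorthand for creating a Service object. Here is a minimal sketch of the equivalent declarative manifest, assuming the pods carry the app=nginx label that kubectl create deployment sets by default (the file name is just illustrative):

```shell
# Write the declarative equivalent of the kubectl expose command above.
cat <<'EOF' > nginx-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  type: LoadBalancer          # ask the cloud provider for an external load balancer
  selector:
    app: nginx                # match pods created by the nginx deployment
  ports:
  - port: 80                  # port the Service listens on
    targetPort: 80            # port the nginx container serves on
EOF
# On a live cluster you would then run:
#   kubectl create -f nginx-service.yaml
```

Defining Services in files like this makes them repeatable, which will matter later when we create several of them.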

List our services now using the kubectl get services command:

BinaryMonster@cloudshell:~ (gcp-Project-ID)$ kubectl get services

You can hit the nginx container:

BinaryMonster@cloudshell:~ (gcp-Project-ID)$ curl http://<External IP>:80

Pods

At the core of Kubernetes is the Pod.

Pods represent and hold a collection of one or more containers. Generally, if you have multiple containers with a hard dependency on each other, you package the containers inside a single pod.

Pods also have Volumes. Volumes are data disks that live as long as the pods live, and can be used by the containers in that pod. Pods provide a shared namespace for their contents which means that the two containers inside of our example pod can communicate with each other, and they also share the attached volumes.

Pods also share a network namespace. This means that there is one IP address per pod.
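To make the shared namespace concrete, here is a hypothetical two-container pod (the names and the sidecar image are illustrative, not from the lab): the sidecar reaches nginx over localhost, and both containers mount the same emptyDir volume.

```shell
# Sketch of a pod whose containers share one IP address and one volume.
cat <<'EOF' > two-containers.yaml
apiVersion: v1
kind: Pod
metadata:
  name: two-containers
spec:
  volumes:
  - name: shared-data
    emptyDir: {}              # scratch volume that lives as long as the pod does
  containers:
  - name: web
    image: nginx:1.10.0
    volumeMounts:
    - name: shared-data
      mountPath: /usr/share/nginx/html
  - name: sidecar
    image: busybox
    # localhost works here because both containers share the pod's network namespace
    command: ["sh", "-c", "wget -qO- http://localhost:80 && sleep 3600"]
EOF
# On a live cluster: kubectl create -f two-containers.yaml
```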

Let's create a Pod

1. Pods can be created using pod configuration files. Let's take a moment to explore the monolith pod configuration file:

BinaryMonster@cloudshell:~ (gcp-Project-ID)$ cat pods/monolith.yaml

apiVersion: v1
kind: Pod
metadata:
  name: monolith
  labels:
    app: monolith
spec:
  containers:
    - name: monolith
      image: kelseyhightower/monolith:1.0.0
      args:
        - "-http=0.0.0.0:80"
        - "-health=0.0.0.0:81"
        - "-secret=secret"
      ports:
        - name: http
          containerPort: 80
        - name: health
          containerPort: 81
      resources:
        limits:
          cpu: 0.2
          memory: "10Mi"

There are a few things to notice here. You’ll see that:

  • your pod is made up of one container (the monolith).
  • you’re passing a few arguments to your container when it starts up.
  • you’re opening up port 80 for HTTP traffic.

2. Create the monolith pod using kubectl:

BinaryMonster@cloudshell:~ (gcp-Project-ID)$ kubectl create -f pods/monolith.yaml

3. Use the kubectl get pods command to list all pods running in the default namespace:

BinaryMonster@cloudshell:~ (gcp-Project-ID)$ kubectl get pods

4. Use the kubectl describe command to get more information about the monolith pod:

BinaryMonster@cloudshell:~ (gcp-Project-ID)$ kubectl describe pods monolith

Interacting with Pods

By default, pods are allocated a private IP address and cannot be reached outside of the cluster. Use the kubectl port-forward command to map a local port to a port inside the monolith pod.

Open two Cloud Shell terminals. One to run the kubectl port-forward command, and the other to issue curl commands.

In the second terminal, run the port-forward command:

BinaryMonster@cloudshell:~ (gcp-Project-ID)$ kubectl port-forward monolith 10080:80

In the first terminal, start talking to your pod using curl:

BinaryMonster@cloudshell:~ (gcp-Project-ID)$ curl http://127.0.0.1:10080 

Try out the authentication service:

BinaryMonster@cloudshell:~ (gcp-Project-ID)$ TOKEN=$(curl http://127.0.0.1:10080/login -u user|jq -r '.token')

Enter the super-secret password “password” when prompted for the host password.

Now hit the secure endpoint with curl:

BinaryMonster@cloudshell:~ (gcp-Project-ID)$ curl -H "Authorization: Bearer $TOKEN" http://127.0.0.1:10080/secure
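The jq -r '.token' step above just pulls the token string out of the login response's JSON body. You can see the same extraction against a canned response (the token value here is made up):

```shell
# Simulate the /login response locally and extract the token with jq.
RESPONSE='{"token":"made-up.jwt.value"}'
TOKEN=$(echo "$RESPONSE" | jq -r '.token')   # -r prints the raw string without quotes
echo "Authorization: Bearer $TOKEN"
```

The -r flag matters: without it jq would print the value with surrounding quotes, which would corrupt the Authorization header.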

Use the kubectl logs command to view the logs for the monolith Pod.

BinaryMonster@cloudshell:~ (gcp-Project-ID)$ kubectl logs monolith

Use the -f flag to get a stream of the logs in real time:

BinaryMonster@cloudshell:~ (gcp-Project-ID)$ kubectl logs -f monolith

Use the kubectl exec command to run an interactive shell inside the Monolith Pod. This can come in handy when you want to troubleshoot from within a container:

BinaryMonster@cloudshell:~ (gcp-Project-ID)$ kubectl exec monolith --stdin --tty -c monolith -- /bin/sh

Try the following inside the container:

ping -c 3 google.com
exit

Services
Pods aren’t meant to be persistent. They can be stopped or started for many reasons — like failed liveness or readiness checks — and this leads to a problem:

What happens if you want to communicate with a set of Pods? When they get restarted they might have a different IP address.

That’s where Services come in. Services provide stable endpoints for Pods.

Services use labels to determine what Pods they operate on. If Pods have the correct labels, they are automatically picked up and exposed by our services.

The level of access a service provides to a set of pods depends on the Service’s type. Currently there are three types:

  • ClusterIP (internal): the default type, meaning the Service is only visible inside of the cluster.
  • NodePort: exposes the Service on a port that is externally accessible on each node in the cluster.
  • LoadBalancer: adds a load balancer from the cloud provider which forwards traffic from the Service to the nodes behind it.

Creating a Service

Change the directory

BinaryMonster@cloudshell:~ (gcp-Project-ID)$ cd ~/orchestrate-with-kubernetes/kubernetes

Explore the secure-monolith pod configuration file:

BinaryMonster@cloudshell:~ (gcp-Project-ID)$ cat pods/secure-monolith.yaml

Create the secure-monolith pods and their configuration data:

BinaryMonster@cloudshell:~ (gcp-Project-ID)$ kubectl create secret generic tls-certs --from-file tls
BinaryMonster@cloudshell:~ (gcp-Project-ID)$ kubectl create configmap nginx-proxy-conf --from-file nginx/proxy.conf

BinaryMonster@cloudshell:~ (gcp-Project-ID)$ kubectl create -f pods/secure-monolith.yaml

Explore the monolith service configuration file:

BinaryMonster@cloudshell:~ (gcp-Project-ID)$ cat services/monolith.yaml

kind: Service
apiVersion: v1
metadata:
  name: "monolith"
spec:
  selector:
    app: "monolith"
    secure: "enabled"
  ports:
    - protocol: "TCP"
      port: 443
      targetPort: 443
      nodePort: 31000
  type: NodePort

Things to note:

  1. There’s a selector which is used to automatically find and expose any pods carrying both the “app=monolith” and “secure=enabled” labels.
  2. The nodePort has to be specified here because this is how external traffic from port 31000 will be forwarded to nginx (on port 443).

Use the kubectl create command to create the monolith service from the monolith service configuration file:

BinaryMonster@cloudshell:~ (gcp-Project-ID)$ kubectl create -f services/monolith.yaml

Adding Labels to Pods

Currently the monolith service does not have endpoints. One way to troubleshoot an issue like this is to use the kubectl get pods command with a label query.

BinaryMonster@cloudshell:~ (gcp-Project-ID)$ kubectl get pods -l "app=monolith"

We can see that we have quite a few pods running with the monolith label.

But what about “app=monolith” and “secure=enabled”?

BinaryMonster@cloudshell:~ (gcp-Project-ID)$ kubectl get pods -l "app=monolith,secure=enabled"

Notice this label query does not print any results. It seems like we need to add the “secure=enabled” label to them.

Use the kubectl label command to add the missing secure=enabled label to the secure-monolith Pod. Afterwards, you can check and see that your labels have been updated.

BinaryMonster@cloudshell:~ (gcp-Project-ID)$ kubectl label pods secure-monolith 'secure=enabled'
BinaryMonster@cloudshell:~ (gcp-Project-ID)$ kubectl get pods secure-monolith --show-labels

Now that our pods are correctly labeled, let’s view the list of endpoints on the monolith service:

BinaryMonster@cloudshell:~ (gcp-Project-ID)$ kubectl describe services monolith | grep Endpoints

And you have one!

Let’s test this out by hitting one of our nodes again.

BinaryMonster@cloudshell:~ (gcp-Project-ID)$ gcloud compute instances list
BinaryMonster@cloudshell:~ (gcp-Project-ID)$ curl -k https://<EXTERNAL_IP>:31000

Deploying Applications with Kubernetes

Deployments are a declarative way to ensure that the number of Pods running is equal to the desired number of Pods, specified by the user.

The main benefit of Deployments is in abstracting away the low-level details of managing Pods. Behind the scenes, Deployments use ReplicaSets to manage starting and stopping the Pods. If Pods need to be updated or scaled, the Deployment will handle that. Deployments also handle restarting Pods if they happen to go down for some reason.

Pods are tied to the lifetime of the Node they are created on. If a Node goes down (taking a Pod with it), instead of you manually creating a new Pod and finding a Node for it, the Deployment creates a new Pod and starts it on another healthy Node.

We’re going to break the monolith app into three separate pieces:

  • auth — Generates JWT tokens for authenticated users.
  • hello — Greets authenticated users.
  • frontend — Routes traffic to the auth and hello services.

We are ready to create deployments, one for each service. Afterwards, we’ll define internal services for the auth and hello deployments and an external service for the frontend deployment. Once finished, you’ll be able to interact with the microservices just like with the monolith, only now each piece can be scaled and deployed independently!

Get started by examining the auth deployment configuration file.

BinaryMonster@cloudshell:~ (gcp-Project-ID)$ cat deployments/auth.yaml

(Output)

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: auth
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: auth
        track: stable
    spec:
      containers:
        - name: auth
          image: "kelseyhightower/auth:2.0.0"
          ports:
            - name: http
              containerPort: 80
            - name: health
              containerPort: 81
...

The deployment is creating 1 replica, and we’re using version 2.0.0 of the auth container.

When you run the kubectl create command to create the auth deployment, it will make one pod that conforms to the data in the Deployment manifest. This means you can scale the number of Pods by changing the number specified in the replicas field.
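To make the scaling mechanics concrete, here is a local sketch: write a minimal Deployment manifest, then bump its replica count with sed and re-apply. The manifest below is a hypothetical cut-down version of the lab's auth deployment, written against the newer apps/v1 API (which requires an explicit selector), not the file from the repo.

```shell
# Write a minimal Deployment manifest, then bump its replica count.
cat <<'EOF' > auth-deploy.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: auth
spec:
  replicas: 1
  selector:
    matchLabels:
      app: auth
  template:
    metadata:
      labels:
        app: auth
    spec:
      containers:
      - name: auth
        image: "kelseyhightower/auth:2.0.0"
EOF
# Scaling is just editing the desired state; the ReplicaSet converges to it.
sed 's/replicas: 1/replicas: 3/' auth-deploy.yaml > auth-scaled.yaml
grep replicas auth-scaled.yaml
# On a live cluster you would either apply the edited file or run:
#   kubectl scale deployment auth --replicas=3
```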

Anyway, go ahead and create your deployment object:

BinaryMonster@cloudshell:~ (gcp-Project-ID)$ kubectl create -f deployments/auth.yaml

It’s time to create a service for your auth deployment. Use the kubectl create command to create the auth service:

BinaryMonster@cloudshell:~ (gcp-Project-ID)$ kubectl create -f services/auth.yaml

Now do the same thing to create and expose the hello deployment:

BinaryMonster@cloudshell:~ (gcp-Project-ID)$ kubectl create -f deployments/hello.yaml
BinaryMonster@cloudshell:~ (gcp-Project-ID)$ kubectl create -f services/hello.yaml

And one more time to create and expose the frontend Deployment. There is one extra step here: the frontend needs some configuration data stored alongside the container, so create a ConfigMap first.

BinaryMonster@cloudshell:~ (gcp-Project-ID)$ kubectl create configmap nginx-frontend-conf --from-file=nginx/frontend.conf
BinaryMonster@cloudshell:~ (gcp-Project-ID)$ kubectl create -f deployments/frontend.yaml
BinaryMonster@cloudshell:~ (gcp-Project-ID)$ kubectl create -f services/frontend.yaml

Interact with the frontend by grabbing its external IP and then curling it:

BinaryMonster@cloudshell:~ (gcp-Project-ID)$ kubectl get services frontend
BinaryMonster@cloudshell:~ (gcp-Project-ID)$ curl -k https://<EXTERNAL-IP>

And you get a hello response back!
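Rather than copying the external IP by hand, you can also pull it straight out of the Service object with jsonpath. Since that needs a live cluster, the jq lines below demonstrate the same field lookup against a canned Service object (the IP is made up):

```shell
# On a live cluster:
#   IP=$(kubectl get service frontend -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
#   curl -k "https://$IP"
# The same field, extracted locally from a canned Service JSON with jq:
SVC='{"status":{"loadBalancer":{"ingress":[{"ip":"203.0.113.10"}]}}}'
IP=$(echo "$SVC" | jq -r '.status.loadBalancer.ingress[0].ip')
echo "$IP"
```

This is handy in scripts, where you want the external IP without a human reading kubectl output.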
