Kubernetes Tutorial: Your Complete Guide to Deploying an App on AWS with Postman

Use Postman to learn Kubernetes and deploy an app on an AWS cluster

  • A brief overview of containers
  • Enter Kubernetes
  • Creating a Kubernetes cluster
  • Accessing the Kubernetes API from Postman
  • Authorizing the Postman Collection
  • Deploy an app
  • Creating a Kubernetes pod
  • Deployment
  • Expose your app
  • Service
  • Clean up
  • What next?

Prerequisites

Familiarity with REST APIs is a prerequisite. In addition, you need to do the following:

  • Install the AWS CLI and configure your credentials
  • Install eksctl, a command-line utility for creating and interacting with clusters on Amazon EKS
  • Install kubectl, a command-line utility for working with Kubernetes clusters
  • Clone the GitHub repository containing the .yaml files required in the upcoming sections
  • Create your Amazon EKS cluster IAM role to access the cluster
  • Set up the Amazon EKS cluster VPC

Deploy with Amazon EKS and Kubernetes API Template

A brief overview of containers

Are you familiar with the “But it works on my machine” problem? Often your application doesn’t behave in production the way it does in your local environment, because the production environment has different library versions, a different operating system, different system dependencies, and so on. Containers solve this by giving your application a consistent environment: the app runs in a box (the container) that includes all the dependencies it needs and is isolated from applications running in other containers. Containers are preferred over virtual machines (VMs) because they use operating-system-level virtualization and are therefore much lighter than VMs. Docker can be used as the container runtime.

Benefits of containerization

Enter Kubernetes

Each app or service now runs in its own container, which gives you a clean separation of concerns: services no longer need to be intertwined with each other, which is why a microservices architecture works so well with containerization. We have established why the world is moving towards containers, but who is going to manage all these containers? How do you roll out a release? How do you run health checks against your services and bring them back up when they fail? Kubernetes automates all of this for you, and it lets you scale your services up and down easily.

Container orchestration in Kubernetes

Creating a Kubernetes cluster

Assuming you have followed the steps in the prerequisites section, you should have eksctl installed. Follow the steps below to create a Kubernetes cluster.

First, verify that eksctl is configured correctly by listing the clusters in your region:

eksctl get cluster --region us-east-1

Next, create a cluster.yaml file describing the cluster and its node group:

apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: playground
  region: us-east-1
nodeGroups:
  - name: ng-1
    instanceType: t2.small
    desiredCapacity: 2

Create the cluster from this configuration:

eksctl create cluster -f cluster.yaml

Once the cluster is up, eksctl writes the credentials for it to ~/.kube/config. Verify that the worker nodes have joined the cluster:

kubectl get nodes
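
If you ever need to regenerate the kubeconfig entry for this cluster (for example, on a different machine), the AWS CLI can write it for you. This is an optional extra, not part of the original steps, and it assumes the cluster name and region from cluster.yaml above:

aws eks update-kubeconfig --name playground --region us-east-1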

Accessing the Kubernetes API from Postman

Next, we need a service account that Postman can use to communicate with the Kubernetes API. By attaching the service account to a cluster role, we authorize it to perform certain actions, such as creating deployments and listing services.

Create a service account named postman:

kubectl create serviceaccount postman

Then define a cluster role in a role.yaml file that grants the permissions the collection needs:

kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: postman-role
rules:
  - apiGroups: ["apps", ""] # "" indicates the core API group
    resources: ["pods", "services", "deployments"]
    verbs: ["get", "watch", "list", "create", "update", "delete"]

Apply the role and bind it to the service account:

kubectl apply -f role.yaml
kubectl create rolebinding postman:postman-role --clusterrole postman-role --serviceaccount default:postman
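
As a quick sanity check (an optional extra, not part of the original steps), you can ask the API server whether the postman service account is now allowed to perform one of the actions we granted:

kubectl auth can-i list pods --namespace default --as system:serviceaccount:default:postman

If the role binding worked, this should print yes.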
To call the Kubernetes API from Postman, the collection needs two pieces of information from this service account:

  • Bearer Token
  • CA Certificate

Run kubectl describe serviceaccount postman to find the name of the secret that holds the service account’s token, then extract the token and the CA certificate from that secret:

kubectl describe serviceaccount postman
kubectl get secret <secret-token> -o json
# Extract the token from the Secret and decode
TOKEN=$(kubectl get secret <secret-token> -o json | jq -Mr '.data.token' | base64 -d)
# Extract, decode and write the ca.crt to a file
kubectl get secret <secret-token> -o json | jq -Mr '.data["ca.crt"]' | base64 -d > ca.crt
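
Before switching to Postman, you can optionally verify that the token and certificate work by calling the Kubernetes API directly with curl; replace the placeholder below with your cluster’s API server endpoint (shown in the screenshot that follows):

curl --cacert ca.crt -H "Authorization: Bearer $TOKEN" https://<api-server-endpoint>/api/v1/namespaces/default/pods

A JSON list of pods (possibly empty) means the service account credentials are good.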
API server endpoint of Kubernetes cluster on Amazon EKS

Authorizing the Postman Collection

1. Assuming the collection is already imported as part of the prerequisites, select the Manage Environments button on the top right, and edit the following:

Environment variables for AWS environment
Adding authorization to a collection
Adding CA Certificates to access Kubernetes APIs

Deploy an app

We are all set to deploy. First, we need something very important. Not surprisingly, it is an app. Let’s take a look at the app.

Creating a Kubernetes pod

In Kubernetes, a pod is a group of one or more containers and is the smallest deployable unit. The pod defines the configuration required to create the app container.
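
For reference, creating a pod through the Kubernetes API is a POST to the pods endpoint of a namespace. The body below is only a minimal sketch of such a request, not necessarily the exact body used in the collection; the image name example/dobby:latest and port 8080 are placeholders for illustration:

POST https://<api-server-endpoint>/api/v1/namespaces/default/pods

{
  "apiVersion": "v1",
  "kind": "Pod",
  "metadata": {
    "name": "dobby",
    "labels": { "app": "dobby" }
  },
  "spec": {
    "containers": [
      {
        "name": "dobby",
        "image": "example/dobby:latest",
        "ports": [{ "containerPort": 8080 }]
      }
    ]
  }
}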

Containers in a Kubernetes pod
Creating a Pod POST Request
Visualization of successful creation of a pod
Visualization to list pods created
Deleting a pod request
Visualization of successful deletion of a pod

Deployment

In the previous section, we created a pod and then deleted it, which means our app is no longer running. In a real-world scenario, we would want multiple instances of our app running so that requests can be load-balanced across them. These instances (pods) could be running on different nodes/machines, and we want to ensure that a minimum number of them are always running. Deployments help us manage all of this.

Kubernetes deployment
Creating a Deployment POST Request
  • Labels: Pods are matched to their respective deployments by these labels.
  • Liveness Probe: Containers are restarted based on the liveness probe. To check whether the Dobby app is up, the /health endpoint is hit; if it returns a 500, the container is restarted. (A sketch of a deployment request follows below.)
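
To make the two bullet points above concrete, here is a minimal sketch of what a deployment request might look like; as with the pod example, the endpoint, image name, port, and replica count are placeholders rather than the collection’s exact values:

POST https://<api-server-endpoint>/apis/apps/v1/namespaces/default/deployments

{
  "apiVersion": "apps/v1",
  "kind": "Deployment",
  "metadata": { "name": "dobby-deployment" },
  "spec": {
    "replicas": 2,
    "selector": { "matchLabels": { "app": "dobby" } },
    "template": {
      "metadata": { "labels": { "app": "dobby" } },
      "spec": {
        "containers": [
          {
            "name": "dobby",
            "image": "example/dobby:latest",
            "ports": [{ "containerPort": 8080 }],
            "livenessProbe": { "httpGet": { "path": "/health", "port": 8080 } }
          }
        ]
      }
    }
  }
}

Note how spec.selector.matchLabels and spec.template.metadata.labels match: that is how the deployment knows which pods it owns, and how the liveness probe points at the /health endpoint mentioned above.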
Visualization of successful deployment creation
Visualization listing pods created via deployment

Expose your app

In the previous section, we successfully created a deployment. So how do we access it now? The pods that were created have their own IP addresses, but what if we want to access one app from another?

Service

We have two apps, a frontend and a backend, with separate pods for each. We need to expose the backend pods so that the frontend app can reach them and use their APIs. We would usually configure the backend’s IP address or URL in the frontend; however, if the backend’s IP address changes, that change would also have to be reflected in the frontend app. With services, we can avoid this problem.

Kubernetes Service
Creating a Service POST Request
  • Exposing: Services can be exposed within the cluster so that other services (like the frontend) can reach the pods, or outside the cluster so that the public can access them. By default, a service is of type ClusterIP, which exposes it only inside the cluster. We want to expose this one externally, so we choose the LoadBalancer type (a sketch of such a request follows below).
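
Here is a minimal sketch of what such a service request might look like; again, the endpoint, name, and ports are illustrative placeholders, and the selector reuses the app label from the deployment sketch earlier:

POST https://<api-server-endpoint>/api/v1/namespaces/default/services

{
  "apiVersion": "v1",
  "kind": "Service",
  "metadata": { "name": "dobby-service" },
  "spec": {
    "type": "LoadBalancer",
    "selector": { "app": "dobby" },
    "ports": [{ "port": 80, "targetPort": 8080 }]
  }
}

On EKS, a LoadBalancer service is provisioned with a public hostname, which is exactly what the test script below extracts.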
Visualization of successful creation of service
Visualization listing services
const service = pm.response.json().items.filter((item) => item.metadata.name === pm.variables.get("project-name") + "-service")[0]
pm.collectionVariables.set("service-ip", service.status.loadBalancer.ingress[0].hostname)
pm.collectionVariables.set("service-port", service.spec.ports[0].port)
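
The collection variables set by this script can then be used to build the health check URL in the next request. Assuming the standard Postman variable syntax and the /health endpoint described in the Deployment section, the request URL would look something like:

http://{{service-ip}}:{{service-port}}/health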
Dobby health check request
Visualization message when Dobby is healthy

Clean up

You will find a Clean Up folder in the collection. It contains all the requests needed to delete the Kubernetes resources (services, deployments, and pods) that we created throughout this tutorial.
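
If you prefer the command line over the collection, the same cleanup can be done with kubectl. The resource names below are placeholders; the actual names depend on the collection’s project-name variable and the requests you ran:

kubectl delete service <project-name>-service
kubectl delete deployment <project-name>-deployment
kubectl delete pod <project-name>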

Finally, delete the EKS cluster itself so that you don’t keep paying for resources you no longer need:

eksctl delete cluster -f cluster.yaml

What next?

Hopefully this tutorial helped you get started with Kubernetes. And there’s so much more to explore. Here are some additional things you can try with this Kubernetes collection:

  • Automate workflows in Postman: You can use the Collection Runner in Postman to execute common workflows. A workflow in Postman is an ordering of the requests in a collection to accomplish a task, e.g., running health checks for all services. Take a look at our blog posts about looping through a data file in the Postman Collection Runner and building workflows in Postman.
  • Automate deployments in CI/CD: If you want to automate deployments, consider using Newman, the command-line runner for Postman collections. If you prefer kubectl, you could use that instead.
  • Experiment with Dobby APIs to learn more: The Dobby app has APIs such as /control/health/sick, which will make the app return an internal server error. You can run these and observe their effect on the liveness and readiness probes to learn more.
  • Fork collection and raise PR: Additionally, you could also fork the collection and raise a pull request with your changes.
