Deploying a Web app on Minikube

Xavier Coulon
Oct 24, 2017 · 7 min read


In this article, we will learn how to deploy a basic web application on Kubernetes.

A few words about the Web app

The supporting web application is a simplistic URL shortener with two endpoints: one to create a short URL for a given target location, and one to redirect from a given short URL to the corresponding target location. The data is stored in a PostgreSQL database, also deployed on Kubernetes.
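The app's actual short-code generation is not covered in this article, but the core idea of a URL shortener can be sketched in a few lines of Go. The alphabet, code length and function name below are illustrative assumptions, not the app's real implementation:

```go
package main

import (
	"crypto/rand"
	"fmt"
	"math/big"
)

// alphabet and codeLength are illustrative choices; the real app's
// encoding scheme may differ.
const alphabet = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789"
const codeLength = 7

// newShortCode returns a random base62 identifier. A URL shortener would
// store the (code, target location) pair in its database, here PostgreSQL.
func newShortCode() (string, error) {
	code := make([]byte, codeLength)
	for i := range code {
		n, err := rand.Int(rand.Reader, big.NewInt(int64(len(alphabet))))
		if err != nil {
			return "", err
		}
		code[i] = alphabet[n.Int64()]
	}
	return string(code), nil
}

func main() {
	code, _ := newShortCode()
	fmt.Println(code) // e.g. "EZpNfRi"
}
```

A production shortener would also have to handle collisions, for example by retrying when inserting a code that already exists in the database.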

The web app is written in Golang and already packaged as a Docker image on Docker Hub. The code for the app is open source and available on GitHub, but it is not the subject of this article (also, it will certainly evolve along with this series of articles). One important point to keep in mind is that the database configuration is read from environment variables, as recommended in the Twelve-Factor App manifesto. We’ll see how these variables can be set during the deployment, so that the web app can connect to the backend database.

A very quick introduction to Kubernetes

Kubernetes is an open-source platform for deploying, scaling and operating application containers.

A Kubernetes cluster is composed of nodes, where the master node is responsible for managing the state of the cluster. It communicates with the kubelet agents running on all nodes and ensures that the cluster configuration is always up-to-date.

A Pod is the basic building block of Kubernetes. It consists of one (or sometimes more) containers that run an element of your application: the web app, the database, etc. Pods run on the nodes.

A Deployment configuration declares the desired state of a Pod. It works in conjunction with Replica Sets to ensure that the expected number of instances of a given pod is available at any time in the cluster, including when scaling up or down and during updates.

A Service exposes a set of pods to other pods within the cluster, or to the outside world. For example, the webapp pods can connect to the database pod using a db Service, without having to care about the actual IP address of the database pod within the cluster.

A Label is a marker on a Kubernetes object (pod, etc.). Labels are used to identify objects. For example, a service will be bound to all pods with a specific label within the namespace.

In order to deploy the application locally, we’ll use Minikube.
Minikube is a tool for developers to run a local, single-node Kubernetes cluster on their machine. See the project page on GitHub for information on how to download and install it on macOS, Linux and Windows. You’ll also need the kubectl command-line tool.

While kubectl provides a command to run Docker images directly, we’ll use YAML templates to create the Kubernetes objects: although templates tend to be verbose, they provide a single, portable and complete definition of the application to deploy.

Deploying the PostgreSQL database

Let’s start with the backend database. In this article, we will deploy a single instance of PostgreSQL without any Persistent Volume, meaning that the Pod running the database holds all the data in its own filesystem. Beware that with such settings, any Pod restart will cause the loss of all data. Running the database with a Persistent Volume will be discussed in a later article of this series.

The template below declares a Deployment object for a container using the postgres:9.6.5 Docker image and exposing its internal port 5432. Note the app:postgres label, which will be used later to bind a Service to the database Pod. Also, note how the database name, user and password are specified using environment variables: the PostgreSQL Docker image uses these variables to configure the database during the container startup, as explained here. While this works fine, we have to admit that keeping a password in plain text in the template is not a secure approach. Kubernetes provides ConfigMaps and Secrets, which will be discussed in a later article of this series.

apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: postgres
spec:
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
      - name: postgres
        image: postgres:9.6.5
        ports:
        - containerPort: 5432
        env:
        - name: POSTGRES_DB
          value: url_shortener_db
        - name: POSTGRES_USER
          value: user
        - name: POSTGRES_PASSWORD
          value: mysecretpassword

After applying the template with the kubectl create -f command, we can verify the objects that were created:

> kubectl get all
NAME                           DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
deploy/postgres                1         1         1            1           3s

NAME                           DESIRED   CURRENT   READY     AGE
rs/postgres-1831265952         1         1         1         3s

NAME                           READY     STATUS    RESTARTS   AGE
po/postgres-1831265952-458wl   1/1       Running   0          3s

We have a single Pod running the database, along with a Deployment object and its companion Replica Set which together handle the database Pod lifecycle. We can also verify that the Pod has the expected app:postgres label (among others):

> kubectl get po/postgres-1831265952-458wl  -o go-template={{.metadata.labels}}
map[app:postgres pod-template-hash:1831265952]

Lastly, we can verify that the database is effectively available by opening a shell in the Pod and running the psql command-line tool:

> kubectl exec -it postgres-1831265952-458wl bash
root@postgres-1831265952-458wl:/# psql -U user -d url_shortener_db
psql (9.6.5)
Type "help" for help.
url_shortener_db=# \d
No relations found.

So far, so good! The database is running in a single Pod and, as one would expect, no application table has been created in the url_shortener_db database yet.

Now that the Deployment has been created, we can expose the database as a Service for the Web application:

apiVersion: v1
kind: Service
metadata:
  name: postgres
spec:
  ports:
  - port: 5432
  selector:
    app: postgres

The Service is an abstraction to access the pod (or any pod if we had multiple replicas) without having to care about the actual IP address of each Pod.

The selector element in the template binds the Service to any pod labeled with app:postgres in the namespace: in this case, the database Pod created by the Deployment above. As a result of the kubectl create command, a Service object named postgres has been created:

> kubectl get services
NAME       TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)    AGE
postgres   ClusterIP   10.0.0.140   <none>        5432/TCP   10s

Note that this service has a ClusterIP type, which means that it is only accessible from within the Kubernetes cluster.

Deploying the Web app

Now that the database is up and running, we can deploy the web application in one or more Pods, connect it to the database, and expose its service to the host so we can play with it.

Once again, we’ll use a YAML template to describe the Deployment. This template has the same structure as the one above, and once again, you can see how environment variables are passed to the container so it can connect to the database Pod via the Service created earlier.

apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: webapp
spec:
  template:
    metadata:
      labels:
        app: webapp
    spec:
      containers:
      - image: xcoulon/go-url-shortener:0.1.0
        name: go-url-shortener
        env:
        - name: POSTGRES_HOST
          value: postgres
        - name: POSTGRES_PORT
          value: "5432"
        - name: POSTGRES_DATABASE
          value: url_shortener_db
        - name: POSTGRES_USER
          value: user
        - name: POSTGRES_PASSWORD
          value: mysecretpassword
        ports:
        - containerPort: 8080

Let’s verify the objects that were created (using the app=webapp label to filter the result):

> kubectl get all -l app=webapp
NAME                         DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
deploy/webapp                1         1         1            1           4s

NAME                         DESIRED   CURRENT   READY     AGE
rs/webapp-3277194567         1         1         1         4s

NAME                         READY     STATUS    RESTARTS   AGE
po/webapp-3277194567-1sjqb   1/1       Running   0          4s

The Pod is in the Running status, and we can inspect its logs to verify that the application started correctly:

> kubectl logs po/webapp-3277194567-1sjqb
level=info msg="Connecting to Postgres database using: host=`postgres:5432` dbname=`url_shortener_db` username=`user`"
level=info msg="Adding the 'uuid-ossp' extension..."
   ____    __
  / __/___/ /  ___
 / _// __/ _ \/ _ \
/___/\__/_//_/\___/ v3.2.1
High performance, minimalist Go web framework
https://echo.labstack.com
____________________________________O/_______
                                    O\
⇨ http server started on [::]:8080

Exposing the Web app to the Host

Rather than exposing the Pod within the cluster only (as we did with the database), we are going to use a NodePort type of Service, which allows inbound connections on the Minikube node to reach the web app Pod. This way, we can reach the app from a terminal and from a browser on the host:

apiVersion: v1
kind: Service
metadata:
  name: webapp
spec:
  type: NodePort
  ports:
  - nodePort: 31317
    port: 8080
    protocol: TCP
    targetPort: 8080
  selector:
    app: webapp

The nodePort value is typically in the 30000-32767 range, and each node of the cluster (actually, a single one with Minikube) will map this port to the web app service.

Let’s see the result:

> kubectl get services -o wide
NAME       TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)          AGE   SELECTOR
postgres   ClusterIP   10.0.0.140   <none>        5432/TCP         32m   app=postgres
webapp     NodePort    10.0.0.235   <none>        8080:31317/TCP   2s    app=webapp

Cool! Now, the webapp service is available within the cluster at 10.0.0.235:8080, but it is also available from the host, at the IP address of the Minikube VM and on port 31317, as we specified in the nodePort element of the template.

It’s now time to play with the application:

> curl -v http://$(minikube ip):31317/ping
pong!
> curl -X POST http://$(minikube ip):31317/ -d "full_url=https://redhat.com"
EZpNfRi
> curl -X GET http://$(minikube ip):31317/EZpNfRi -v
* Trying 192.168.99.100...
* Connected to 192.168.99.100 (192.168.99.100) port 31317 (#0)
> GET /EZpNfRi HTTP/1.1
> Host: 192.168.99.100:31317
> User-Agent: curl/7.43.0
> Accept: */*
>
< HTTP/1.1 307 Temporary Redirect
< Location: https://redhat.com
< Content-Length: 0
< Content-Type: text/plain; charset=utf-8
<
* Connection #0 to host 192.168.99.100 left intact

Minikube can also list the Kubernetes services along with the VM’s IP address, so we can easily see which services are reachable from the host (or not) and which URL to use to connect to them:

> minikube service list --namespace sandbox
|-----------|----------|-----------------------------|
| NAMESPACE | NAME | URL |
|-----------|----------|-----------------------------|
| sandbox | postgres | No node port |
| sandbox | webapp | http://192.168.99.100:31317 |
|-----------|----------|-----------------------------|

That’s all for this article. We’ve seen how to deploy a database and expose it as a service within the cluster, then how to deploy a web app, connect it to the database and expose it to the host.

In the next articles, we’ll learn more about Persistent Volumes, ConfigMaps and Secrets to improve the resilience and the security of the application.
