Setting up Google Cloud with Kubernetes, Nginx Ingress and Let’s Encrypt (Certmanager)

In this post I will share the steps I took to set up a Kubernetes cluster. Note that my knowledge of Google Cloud & Kubernetes is limited to a week of research, so I'm open to suggestions for improving it.

This is aimed at people who are just beginning to explore Google Cloud / Kubernetes, and it is an attempt to consolidate a week of (sometimes confusing) research.

Assumed knowledge (Acquirable in a week! ;))

  • Basic concepts of how kubernetes works (pods, deployments, services)
  • Basic concepts of google cloud

What will be deployed at the end:

  • A single node kubernetes cluster, with no scaling enabled.
  • A docker image containing the app (Nginx/Angular2)
  • A docker image containing the api (with connection details for a mongodb) (NodeJS)
  • A docker image with nginx to reverse proxy google storage
  • Nginx Ingress that routes to the docker backends
  • Certificate that gets validated by Let’s Encrypt & used by ingress

Not included:

  • The actual docker images; this guide is more of a guideline on how to set up this 'architecture'.

Setting up GCloud CLI (Skip if available)

See the appendix for how to set up the Google Cloud CLI. From this point on, it's assumed that you have a working gcloud, logged in & pointing at your project/compute zone.

Creating the cluster

gcloud container clusters create <cluster_name> --num-nodes=1

This step will create a cluster for you with one node (one compute engine machine), on which all the pods/services will run.

gcloud container clusters get-credentials <cluster_name>

This command configures kubectl, the CLI tool used to interact with Kubernetes, to work against the cluster we've just created.
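As a quick sanity check, you can confirm that kubectl is talking to the new cluster and that the single node is ready:

kubectl get nodes
kubectl cluster-info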

Creating a namespace (optional)

For our purposes, namespaces are intended to separate environments from each other, so that we could, for example, run a beta/QA environment on the same cluster/node. This isolates pods and services from each other.

kubectl create namespace <name>

A freshly created cluster normally has two namespaces, default and kube-system. If you don't specify a namespace in your commands, your resources will end up in the default namespace; kube-system is reserved for Kubernetes services.
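For example, assuming a namespace called beta (a hypothetical name), listing the namespaces and targeting one in a command looks like this:

kubectl get namespaces
kubectl get pods --namespace beta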

Setting up Helm, installing Tiller

Helm can best be described as a package manager for Kubernetes; you will install nginx-ingress and cert-manager with this tool. It uses a client/server architecture, with helm being the CLI tool you use and tiller the server (which we will install on the cluster below).

kubectl create serviceaccount --namespace kube-system tiller
kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
helm init --service-account tiller
kubectl patch deploy --namespace kube-system tiller-deploy -p '{"spec":{"template":{"spec":{"serviceAccount":"tiller"}}}}'
helm init --service-account tiller --upgrade

We're not sure whether the patch step is strictly required. Together, these commands create a service account in the kube-system namespace, create the necessary role bindings, and initialize helm with that account.

Please refer to https://cloud.google.com/solutions/continuous-integration-helm-concourse for more details about these steps.
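To verify that tiller came up correctly (names as created by the commands above), you can check the deployment and compare the client & server versions:

kubectl get deployment tiller-deploy --namespace kube-system
helm version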

Deploying the app docker

Prior to this step, we already have the images available in the registry. For our app this is an nginx that exposes port 80 & serves the Angular 2 build.

We have a script that runs before nginx starts and replaces the api & static host variables present in the built source code:

echo "Configuring environment variable"
sed -i -e 's#SET_API_HOST_VARIABLE#'"$API_HOST"'#g' /www/main.*.js
sed -i -e 's#SET_STATIC_HOST#'"$STATIC_HOST"'#g' /www/main.*.js

This ensures that we have a generic docker image build, but we can still tweak the hosts with environment variables after deployment.

This is also the first step we take in Kubernetes: creating a ConfigMap for our app.

kubectl create configmap AppNameConfig --from-literal API_HOST=http://api-host --from-literal STATIC_HOST=$STATIC_HOST
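To double-check what ended up in the map (using the placeholder name from above), you can inspect it afterwards:

kubectl get configmap AppNameConfig -o yaml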

This configmap is then referenced in our app's Kubernetes manifest file, to specify which environment variables to inject and where their values come from (in this case, the map).

apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: AppName
spec:
  replicas: 1
  selector:
    matchLabels:
      app: AppName
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: AppName
    spec:
      containers:
      - env:
        - name: API_HOST
          valueFrom:
            configMapKeyRef:
              key: API_HOST
              name: AppNameConfig
        - name: STATIC_HOST
          valueFrom:
            configMapKeyRef:
              key: STATIC_HOST
              name: AppNameConfig
        name: AppName
        image: AppDockerImageName
        ports:
        - containerPort: 80

This manifest is applied with:

kubectl apply -f AppManifest.yml

This will create the deployment on kubernetes, which will spin up the pod holding the container running your image.
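To see whether the rollout succeeded (placeholder label from the manifest above), the usual commands are:

kubectl get deployments
kubectl get pods -l app=AppName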

At this point, this deployment is not accessible from the outside. For our load balancer to work, we need to expose it as a ClusterIP service, which simply ensures that the deployment is reachable on a cluster-internal IP.

kubectl expose deployment AppName
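This creates a Service of type ClusterIP; a quick way to confirm (placeholder name again):

kubectl get service AppName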

Deploying the API docker

The process for this is exactly the same as for the app docker, except that we also need a mongo connection. The connection details are best stored in a Kubernetes secret, so that they are not as visible as they would be in a configmap.

Create a secret with:

kubectl create secret generic ApiMongoSecret --from-literal MONGO_HOST="<host>" --from-literal MONGO_REPLICA_SET="<replica>" --from-literal MONGO_GET_VARIABLES="<variables>"
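The values are stored base64-encoded, so if you want to verify what ended up in the secret (placeholder name from above), you can inspect it and decode a value:

kubectl get secret ApiMongoSecret -o yaml
echo '<base64-value>' | base64 --decode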

Create the configmap as before:

kubectl create configmap ApiConfigMap --from-literal API_HOST=http://apihost --from-literal STATIC_HOST=http://statichost --from-literal APP_HOST=http://apphost

The manifest file is slightly different, as it now also needs to inject environment variables filled with the contents of the secret (I've only included one example of each).

apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: ApiName
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ApiName
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: ApiName
    spec:
      containers:
      - env:
        - name: API_HOST
          valueFrom:
            configMapKeyRef:
              key: API_HOST
              name: ApiConfigMap
        - name: MONGO_HOST
          valueFrom:
            secretKeyRef:
              key: MONGO_HOST
              name: ApiMongoSecret
        name: ApiName
        image: ApiDockerImageName
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: /customendpoint
            port: 6000
            scheme: HTTP
          initialDelaySeconds: 60
          periodSeconds: 60
          successThreshold: 1
          timeoutSeconds: 2
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /customendpoint
            port: 6000
            scheme: HTTP
          initialDelaySeconds: 60
          periodSeconds: 60
          successThreshold: 1
          timeoutSeconds: 2
        ports:
        - containerPort: 6000

Note that this manifest also has a livenessProbe and readinessProbe configuration. By default, ingress will query ‘/’ of a backend and expect a 200 result to know whether the instance is “OK”. Since in our case we don’t have anything serving from / for the api, we override this check here.

As far as I understood, these are also the endpoints Kubernetes uses to detect whether a pod is in an error state: the liveness probe decides whether the container gets restarted, while the readiness probe decides whether the pod receives traffic.
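If a probe keeps failing, the pod's events will show it. Two standard commands for inspecting this (the pod name is whatever kubectl get pods reports):

kubectl get pods
kubectl describe pod <api-pod-name>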

And as before, apply the manifest file & expose it as a cluster ip service:

kubectl apply -f ApiManifest.yml
kubectl expose deployment ApiName

Creating the Google Storage reverse proxy

Again, the steps to deploy this proxy are the same. This could probably also be configured with an ingress, but we had no time to investigate that. Our docker is an nginx with the following config:

user  nginx;
worker_processes 2;

error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;

load_module /usr/lib/nginx/modules/ngx_http_perl_module.so;

env GS_BUCKET;
env INDEX;

events {
    worker_connections 10240;
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /var/log/nginx/access.log main;

    sendfile on;

    keepalive_timeout 65;

    resolver 8.8.8.8 valid=300s ipv6=off;
    resolver_timeout 10s;

    upstream gs {
        server storage.googleapis.com:443;
        keepalive 128;
    }

    perl_set $bucket_name 'sub { return $ENV{"GS_BUCKET"}; }';
    perl_set $index_name 'sub { return $ENV{"INDEX"} || "index.html"; }';

    gzip on;
    gzip_static on;
    gzip_disable "msie6";
    gzip_vary on;
    gzip_proxied any;
    gzip_comp_level 6;
    gzip_buffers 16 8k;
    gzip_http_version 1.0;
    gzip_min_length 256;
    gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript application/vnd.ms-fontobject application/x-font-ttf font/opentype image/svg+xml image/x-icon application/octet-stream;

    server_tokens off;

    server {
        if ( $request_method !~ "GET|HEAD" ) {
            return 405;
        }

        location ~ /(.*) {
            set $query $1;
            proxy_set_header Host storage.googleapis.com;
            proxy_pass https://gs/$bucket_name/$query;
            proxy_http_version 1.1;
            proxy_set_header Connection "";

            proxy_intercept_errors on;
            proxy_hide_header alt-svc;
            proxy_hide_header X-GUploader-UploadID;
            proxy_hide_header alternate-protocol;
            proxy_hide_header x-goog-hash;
            proxy_hide_header x-goog-generation;
            proxy_hide_header x-goog-metageneration;
            proxy_hide_header x-goog-stored-content-encoding;
            proxy_hide_header x-goog-stored-content-length;
            proxy_hide_header x-goog-storage-class;
            proxy_hide_header x-xss-protection;
            proxy_hide_header accept-ranges;
            proxy_hide_header Set-Cookie;
            proxy_ignore_headers Set-Cookie;
        }
    }
}

The environment variables here are needed to specify which bucket we proxy to. Again, this keeps the image generic.

We create the configmap:

kubectl create configmap StorageProxyConfigMap --from-literal GS_BUCKET=<NameOfTheStorageBucket>

The manifest file that will kick off the deployment:

apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: StorageProxyName
spec:
  replicas: 1
  selector:
    matchLabels:
      app: StorageProxyName
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: StorageProxyName
    spec:
      containers:
      - env:
        - name: GS_BUCKET
          valueFrom:
            configMapKeyRef:
              key: GS_BUCKET
              name: StorageProxyConfigMap
        name: StorageProxyName
        image: StorageProxyImageName
        ports:
        - containerPort: 80

And tell kubernetes what to do:

kubectl apply -f StorageProxyManifest.yml
kubectl expose deployment StorageProxyName

Installing Nginx Ingress

At this point we have the three images running in their containers/pods/deployments, and each is exposed as a service, but none of them are accessible from the outside yet. For this we will install an ingress load balancer.

Ingress also comes in two parts: installing it sets up the 'nginx-ingress-controller' deployment, which runs the actual nginx.

Afterwards, you can create Ingress resources, which are picked up by the controller and used to configure that nginx.

(By default, Google Cloud provides its own controller for this, but we opted for the nginx one, since it lets us match our existing nginx config more closely.)

helm install --name nginx-ingress stable/nginx-ingress --set rbac.create=true
kubectl apply -f /tmp/manifests-generated/ingress-resource.yml

(The RBAC option is for role-based access control, something we have not explored further.)

This will create two deployments: the nginx-ingress-controller, and the nginx-ingress-default-backend, to which all non-matched URLs are routed.
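To check that the controller is running and to find the external IP it was given (service name as created by the helm release above):

kubectl get pods | grep nginx-ingress
kubectl get service nginx-ingress-controller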

To configure ingress, we create the following resource:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    ingress.kubernetes.io/ssl-redirect: "true"
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
    nginx.ingress.kubernetes.io/rewrite-target: /
  name: IngressName
spec:
  rules:
  - host: sub.host.com
    http:
      paths:
      - backend:
          serviceName: AppServiceName
          servicePort: 80
        path: /
      - backend:
          serviceName: ApiServiceName
          servicePort: 6000
        path: /api/
      - backend:
          serviceName: StorageProxyServiceName
          servicePort: 80
        path: /static/
  tls:
  - hosts:
    - sub.host.com
    secretName: NameOfCertificateSecret

This configuration will direct / to the app, requests to /api/ to the api, and /static/ to the storage proxy. The rewrite-target annotation ensures that the /api/ & /static/ prefixes are not sent along to their backends, so a call to /api/todos/list reaches the api backend as /todos/list.

Note that we also configure SSL here, by specifying the secret from which the ingress retrieves the certificate details. That secret is populated when we set up cert-manager. Basically, the ingress expects a validated certificate in this secret; until it exists, you will be served a fake, self-signed one.
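Once DNS points at the load balancer (covered below), curl is a quick way to sanity-check the routing and the HTTPS redirect; the host and path here are the placeholders from the resource above, and -k is needed for as long as only the self-signed fallback certificate is served:

curl -I http://sub.host.com/api/todos/list
curl -kI https://sub.host.com/api/todos/list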

Also note that while this ingress now has an external IP, that IP is not static; to make it so, it has to be promoted to a static IP. This can be done through the Google Cloud console (VPC Network -> External IP addresses -> change Ephemeral to Static) or via the CLI:

IP_ADDRESS=$(kubectl describe service nginx-ingress-controller --namespace=$NAMESPACE | grep 'LoadBalancer Ingress' | rev | cut -d: -f1 | rev | xargs)

gcloud compute addresses create NameOfStaticIp --addresses $IP_ADDRESS --region europe-west1

The first command will look up the IP address of the ingress load balancer (note: this works for us since we only have one, mileage may vary :-)).

Then we promote it to a static IP.
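You can confirm that the promotion worked (and whether the address is in use) with:

gcloud compute addresses list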

Installing cert-manager and the certificates

So far we have the load balancer, which has an external IP & routes requests to the different backends. At this point, you can point your DNS at that IP.

Install cert-manager with helm:

helm install stable/cert-manager
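A quick check that the release and its pod are running (the release name is auto-generated here since we didn't pass --name):

helm ls
kubectl get pods | grep cert-manager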

Create an issuer (the instance that provides certificates; in our case, Let's Encrypt):

apiVersion: certmanager.k8s.io/v1alpha1
kind: Issuer
metadata:
  name: NameForIssuer
spec:
  acme:
    # The ACME server URL
    server: https://acme-v02.api.letsencrypt.org/directory
    # Email address used for ACME registration
    email: "yourmail@domain.com"
    # Name of a secret used to store the ACME account private key
    privateKeySecretRef:
      name: IssuerPrivateKeyName
    # Enable the HTTP-01 challenge provider
    http01: {}

Create a certificate:

apiVersion: certmanager.k8s.io/v1alpha1
kind: Certificate
metadata:
  name: CertificateName
spec:
  secretName: NameOfCertificateSecret
  commonName: sub.host.com
  dnsNames:
  - sub.host.com
  issuerRef:
    name: NameForIssuer
    kind: Issuer
  acme:
    config:
    - http01:
        ingressClass: nginx
      domains:
      - sub.host.com

Finally, apply them:

kubectl apply -f issuer.yml
kubectl apply -f certificate.yml

Once you apply these resources, cert-manager will pick up the fact that the certificate needs validation. It will create a new ingress that hosts the ACME challenge, and it will also start a pod that executes the requests to Let's Encrypt.

Once the validation is complete, it will store the details in the secret (which we also used earlier for the ingress).

There might be some delay in the validation, but normally, after a few minutes, you should be able to visit your host and see that it is secured!
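If it takes longer than expected, the certificate resource and the secret it fills are the places to look (placeholder names from the manifests above):

kubectl describe certificate CertificateName
kubectl get secret NameOfCertificateSecret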

Note: I found some confusing information regarding setting up cert-manager. I think it can be set up in two ways. The one described above, where you manually create the issuer/certificate and let cert-manager do its thing.

The other is where you configure the ingress with annotations, so that cert-manager picks them up. With our config, no specific annotations are needed for this.

Appendix

Setting up GCloud CLI

There are two options here: the Google Cloud Shell, or your favorite local terminal. Alternatively, you can wrap the gcloud command with docker:

FROM google/cloud-sdk:latest
RUN curl -o get_helm.sh https://raw.githubusercontent.com/kubernetes/helm/master/scripts/get
RUN chmod +x get_helm.sh
RUN ./get_helm.sh

If you create a container from this image and keep reusing that container, login sessions should be persisted. The volume in this case is a scripts folder that is shared between host & container, so we can add files/scripts to run against the cloud.

docker create -v $(pwd)/scripts:/scripts -w="/scripts" \
--name=<container-name> -it gcloud-platform:latest /bin/bash
docker start -ia <container-name>

After installing gcloud in the flavour you prefer, you can log in with:

gcloud auth login --brief

It will show a link to a token that you have to copy into the terminal. After login, set your project ID & compute zone with:

gcloud config set project <PROJECT_ID>

gcloud config set compute/zone <ZONE>
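You can verify the active account, project and zone at any time with:

gcloud config list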