Practices for Organizing Staging-Level Kubernetes Applications with Namespaces

Lexa · Published in ASL19 Developers · 7 min read · May 16, 2019

At any given point in time, ASL19 has at least a handful of active web projects. Take a look at our website to see a few of our ongoing projects.

A few of ASL19’s active projects, May 2019

Almost all of our projects are web-based and need hosting at both the staging and production levels. Regardless of the stack used for development, we currently host them on remote servers (e.g. AWS EC2) and create the domain records on Route53.

Recently, we have begun to migrate a few staging-level projects onto Kubernetes for easier management and efficiency of resource use.

1. Rouhani Meter:

Rouhani Meter aims to monitor the accomplishment of promises made by the Iranian President Hassan Rouhani. It is inspired by Morsi Meter which was created by a group of young Egyptians to observe the actions taken by the Egyptian president at the time, Mohamed Morsi, with regards to his promises.

Rouhani Meter is a WordPress application with a MySQL database. The application's media content comes from AWS S3 through CloudFront, and the WordPress application itself is packaged as a Docker image. An Nginx container is also attached for media links and redirection purposes.

2. Fact Nameh:

At ASL19, we began our fact-checking project Fact Nameh as part of a broader initiative to promote greater accountability and transparency in the Iranian political arena. Unlike most fact-checking organizations, we face the unique challenge of being based outside our country of focus, which affects how we collect and verify information and reach our target audience, among other factors.

Fact Nameh, much like Rouhani Meter, is a WordPress application with a MySQL database and an Nginx container. The media content for this project is mainly uploaded manually onto the attached volumes.

3. Boogh:

This is one of our new projects, still in staging. The Boogh backend is a Java application with a PostgreSQL database and a Sonar container for monitoring.

4. IPA:

Iran Prison Atlas, an ongoing project of United for Iran, is the most comprehensive database of current Iranian political prisoners, the judges that convict them, and prisons that hold them.

Iran Prison Atlas’ interactive tools allow users to identify patterns of abuse by Iranian judges, mistreatment at various detention centers, and conviction trends based on race, religion, gender, activity, and other classifications. Over 900 current political prisoners, 200 judges, and 150 prisons are profiled.

Structure

All of the staging domains above share the same base domain (e.g. <project name>.staging.com). To save resources and processing time, we define a centralized namespace structure that avoids duplicating resources. This structure has the following characteristics:

  • One TLS certificate is created which secures all subdomains of the base domain (wildcard)
  • All non-assigned subdomains redirect the user to the Nginx Default Backend page
  • There will only be one instance of each database server, in the shared namespace (default)
  • There will only be one instance of the add-on tools (cert-manager, external-dns, nginx-ingress, nfs-provisioner)
  • There will only be one instance of the resources required for the TLS certificate (ingress with tls-acme annotation, clusterIssuer, Certificate)
  • Shared secrets such as TLS, registry credential, and AWS credentials are created in the shared (default) namespace and copied to the other namespaces using CLI or CI/CD.

Putting the above constraints together, the following diagram shows the finalized structure we designed for our staging projects on Kubernetes.

ASL19 Staging on Kubernetes

How to apply this structure to your projects

This structure is not specific to our projects and could be employed in any cluster that holds multiple projects with a lot of shared resources. A few reasons why we decided to migrate to this structure (and why you should too):

Cert Manager and External DNS:

These two tools continuously check for updated ingress resources. Based on experience, we know it is very possible for a small mistake to trigger these services into creating a DNS record or resource repeatedly for hours on end before you catch the error. Having multiple instances of these resources might result in maxing out your Let’s Encrypt quotas, wasted computation time, and tons of irrelevant lines filling up your pod logs. Reducing the triggers that create new records (Ingress) and new resources (Certificate and ClusterIssuer) to as few as possible helps reduce this risk.
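
A quick way to spot such a runaway loop is to tail the controllers’ logs and watch for the same record or resource being created over and over. The deployment names below assume the Helm release names used later in this post:

$ kubectl logs deployment/external-dns -n default --tail=100 -f
$ kubectl logs deployment/cert-manager -n default --tail=100 -f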

TLS:

Using one wildcard certificate suffices for multiple projects under the same base domain. For instance, it is quite wasteful to create a separate certificate for the subdomain x.example.com and another for y.example.com while both can be secured using a wildcard *.example.com. Generalize when you can.
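
If you want to verify which hosts an issued certificate actually covers, you can inspect the Subject Alternative Names stored in its TLS secret (the secret name here is the staging-wildcard secret created later in this post):

$ kubectl get secret staging-wildcard -n default \
-o jsonpath='{.data.tls\.crt}' | base64 --decode | \
openssl x509 -noout -text | grep -A1 'Subject Alternative Name'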

Databases:

Having your database server pods in a shared namespace makes your structure more manageable and cleaner. It will also make it easier to find your data in your CI/CD pipeline scripts later. You will always know where to look for your data.
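
As a rough sketch, an application in a project namespace reaches the shared database through the service’s cluster DNS name, <service>.<shared namespace>.svc.cluster.local. The service and secret names below are hypothetical:

# Excerpt from a hypothetical WordPress Deployment in a project namespace
env:
- name: WORDPRESS_DB_HOST
  value: mysql.default.svc.cluster.local  # <service>.<namespace>.svc.cluster.local
- name: WORDPRESS_DB_PASSWORD
  valueFrom:
    secretKeyRef:
      name: mysql-secret                  # the DB password secret copied into this namespace
      key: mysql-password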

Development

Deploying this structure is very simple — simpler than having scattered resources in your project namespaces. There are a few steps you should take:

1. Install your add-on tools inside your shared namespace (e.g. default)

We will be using the default namespace as our shared namespace. We can install our tools like so:

$ helm install --name external-dns  \
--namespace ${SHARED_NAMESPACE} \
--set aws.accessKey=${AWS_ACCESS_KEY_ID} \
--set aws.secretKey=${AWS_SECRET_ACCESS_KEY} \
--set aws.region=${AWS_REGION} \
--set policy=upsert-only \
--set domainFilters={${DOMAIN}} \
stable/external-dns
$ helm install --name nginx-ingress \
--namespace ${SHARED_NAMESPACE} \
--set controller.publishService.enabled=true \
stable/nginx-ingress
$ helm install --name efs-provisioner \
--namespace ${SHARED_NAMESPACE} \
--set efsProvisioner.efsFileSystemId=${FILESYSTEM_ID} \
--set efsProvisioner.awsRegion=${AWS_REGION} \
stable/efs-provisioner

$ kubectl apply -f https://raw.githubusercontent.com/jetstack/cert-manager/release-0.7/deploy/manifests/00-crds.yaml

$ kubectl label namespace ${SHARED_NAMESPACE} \
certmanager.k8s.io/disable-validation=true
$ helm repo add jetstack https://charts.jetstack.io
$ helm repo update
$ helm install --name cert-manager \
--namespace ${SHARED_NAMESPACE} --version v0.7.1 \
-f .generated/values.yaml \
jetstack/cert-manager
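
The .generated/values.yaml file comes from our own tooling, so its exact contents are out of scope here. As one hypothetical example, pointing the chart’s ingress shim at a default issuer lets the kubernetes.io/tls-acme annotation resolve to our ClusterIssuer without extra per-Ingress configuration:

# Hypothetical excerpt of .generated/values.yaml
ingressShim:
  defaultIssuerName: ${ISSUER_NAME}
  defaultIssuerKind: ClusterIssuer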

2. Make sure all namespaces have the correct roles and secrets

Sometimes you deploy a project onto a new cluster, forgetting that your old cluster already had all the namespaces, permissions, and secrets set up. To make sure your deploy scripts don’t fail due to wrong permissions or missing secrets, run the following to get all of your namespaces ready. Here, we are using the namespace “example”:

# Making sure all namespaces exist
$ kubectl create namespace ${EXAMPLE_NAMESPACE} \
--dry-run -o yaml | kubectl apply -f -
# Creating and copying secrets to all namespaces
$ kubectl create secret generic ${AWS_ACME_SECRET} \
-n ${SHARED_NAMESPACE} \
--from-literal=access-key-id=${AWS_ACCESS_KEY_ID} \
--from-literal=secret-access-key=${AWS_SECRET_ACCESS_KEY} \
--dry-run -o yaml | kubectl apply -f -
$ kubectl get secret ${AWS_ACME_SECRET} \
-n ${SHARED_NAMESPACE} --export -o yaml |\
kubectl apply --namespace=${EXAMPLE_NAMESPACE} -f -
$ kubectl create secret docker-registry ${IMAGE_PULL_SECRET} \
-n ${SHARED_NAMESPACE} \
--docker-server=${REGISTRY} \
--docker-username=${DOCKER_USER} \
--docker-password=${DOCKER_PASSWORD} \
--dry-run -o yaml | kubectl apply -f -
$ kubectl get secret ${IMAGE_PULL_SECRET} \
-n ${SHARED_NAMESPACE} --export -o yaml |\
kubectl apply --namespace=${EXAMPLE_NAMESPACE} -f -
# Creating all clusterrolebindings
$ kubectl create clusterrolebinding ${EXAMPLE_NAMESPACE}-admin \
--clusterrole cluster-admin \
--serviceaccount=${EXAMPLE_NAMESPACE}:default \
--dry-run -o yaml | kubectl apply -f -
# Creating database secrets in all namespaces
$ kubectl create secret generic ${MYSQL_DEPLOY} \
--namespace=${EXAMPLE_NAMESPACE} \
--from-literal=mysql-password=${EXAMPLE_DB_PASSWORD} \
--dry-run -o yaml | kubectl apply -f -
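
If you have several project namespaces, the same copy pattern is easy to wrap in a loop, either locally or in CI/CD (the namespace names below are hypothetical):

# Copy a shared secret into every project namespace
$ for ns in rouhanimeter factnameh boogh ipa; do
kubectl get secret ${AWS_ACME_SECRET} -n ${SHARED_NAMESPACE} \
--export -o yaml | kubectl apply --namespace="${ns}" -f -
done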

3. Create all other shared resources

By other shared resources we mean all database deployments and the resources needed to create the wildcard TLS certificate. In our case, we will be creating the following:

  • Cluster Issuer (using the prod Let’s Encrypt URL)
  • Wildcard Ingress (forwards all unassigned subdomains to the Nginx Ingress default backend)
  • Certificate object (use base domains for DNS names)

First, we can go ahead and create our ClusterIssuer in the shared (default) namespace:

apiVersion: certmanager.k8s.io/v1alpha1
kind: ClusterIssuer
metadata:
  name: ${ISSUER_NAME}
  namespace: ${SHARED_NAMESPACE}
spec:
  acme:
    server: ${LETSENCRYPT_SERVER_URL}
    email: ${EMAIL}
    privateKeySecretRef:
      name: ${ISSUER_NAME}
    dns01:
      providers:
      - name: route53
        route53:
          region: ${AWS_REGION}
          accessKeyID: ${AWS_ACCESS_KEY_ID}
          secretAccessKeySecretRef:
            name: ${AWS_ACME_SECRET}
            key: secret-access-key

The Ingress is created first even though it depends on the Certificate resource. This is because, with the tls-acme annotation, the Ingress by default triggers the creation of a Certificate resource that matches the specs stated in the Ingress manifest, and we do not want the Certificate we create ourselves to be replaced by that auto-generated one. Therefore, we apply the Ingress first and then the Certificate:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ${INGRESS_NAME}
  namespace: ${SHARED_NAMESPACE}
  annotations:
    kubernetes.io/ingress.class: nginx
    kubernetes.io/tls-acme: "true"
    certmanager.k8s.io/cluster-issuer: ${ISSUER_NAME}
spec:
  tls:
  - hosts:
    - ${WILDCARD_URL}
    - ${WILDCARD_SUB_URL}
    secretName: ${CERT_AND_SECRET}
  rules:
  - host: ${WILDCARD_URL}
    http:
      paths:
      - backend:
          serviceName: ${DEFAULT_BACKEND_SVC}
          servicePort: 80
        path: /
  - host: ${WILDCARD_SUB_URL}
    http:
      paths:
      - backend:
          serviceName: ${DEFAULT_BACKEND_SVC}
          servicePort: 80
        path: /

Here we use our wildcard URL (base domain wildcard) and wildcard “sub URL” (subdomain wildcard, e.g. *.api.example.com). This ensures that the TLS secret that is created secures all of our URLs with the same base.

The rules above point all subdomains to the default backend, which is a service automatically created when we installed Nginx Ingress. When forwarded to this service, the user will simply see the message:

default backend - 404

Now we can create the Certificate object. Note that this is not our TLS certificate, but simply a template for it.

apiVersion: certmanager.k8s.io/v1alpha1
kind: Certificate
metadata:
  name: ${CERT_AND_SECRET}
  namespace: ${SHARED_NAMESPACE}
spec:
  secretName: ${CERT_AND_SECRET}
  issuerRef:
    name: ${ISSUER_NAME}
    kind: ClusterIssuer
  dnsNames:
  - ${WILDCARD_URL}
  - ${WILDCARD_SUB_URL}
  acme:
    config:
    - dns01:
        provider: route53
      domains:
      - ${WILDCARD_URL}
      - ${WILDCARD_SUB_URL}
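
While the DNS-01 challenge is being solved, you can follow the issuance progress directly on the Certificate object:

$ kubectl describe certificate ${CERT_AND_SECRET} -n ${SHARED_NAMESPACE}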

Now to make sure the TLS certificate is automatically generated, run kubectl get secrets --namespace ${SHARED_NAMESPACE}:

$ kubectl get secrets --namespace default
NAME               TYPE                DATA   AGE
staging-wildcard   kubernetes.io/tls   3      1m

Once created, you will need to copy this secret to your other namespaces. This could be a step in your CI/CD pipelines to make the process less manual.

$ kubectl get secret staging-wildcard -n default \
--export -o yaml | \
kubectl apply --namespace=${EXAMPLE_NAMESPACE} -f -

Now create your RDBMS deployments on your cluster:

$ helm install --name ${MYSQL_DEPLOY} \
--namespace ${SHARED_NAMESPACE} \
--set mysqlRootPassword=${MYSQL_ROOT_PASSWORD} \
--set persistence.size=${MYSQL_PVC_STORAGE} \
stable/mysql
$ kubectl apply -f postgres-pvc.yaml
$ kubectl apply -f postgres-deploy.yaml
$ kubectl apply -f postgres-service.yaml
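
The postgres-*.yaml manifests above are our own and not shown in full here. As a rough sketch, the PVC could look something like the following, assuming the storage class provided by the efs-provisioner chart installed earlier (the name and size are hypothetical):

# Hypothetical postgres-pvc.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: postgres-pvc
  namespace: ${SHARED_NAMESPACE}
spec:
  storageClassName: aws-efs
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 5Gi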

For obvious reasons, you will have to create each database your applications will use, along with their users or roles. Once done, make sure to create the required DB password secrets in the associated namespaces accordingly.
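
As a sketch, creating a project database and user inside the shared MySQL pod could look like the following (the pod, database, and user names are hypothetical):

# Create a database and user for one project in the shared MySQL server
$ kubectl exec -n ${SHARED_NAMESPACE} ${MYSQL_POD} -- \
mysql -uroot -p${MYSQL_ROOT_PASSWORD} -e \
"CREATE DATABASE example_db; \
CREATE USER 'example_user'@'%' IDENTIFIED BY '${EXAMPLE_DB_PASSWORD}'; \
GRANT ALL PRIVILEGES ON example_db.* TO 'example_user'@'%'; \
FLUSH PRIVILEGES;"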

Overall, think of any resource that is duplicated across your namespaces as a candidate to be shifted into the shared namespace.

Enjoy!
