Using Nginx-Ingress as a Static Cache for Assets Inside Kubernetes

Optimizing Nginx on Kubernetes Without Adding a Cloud CDN.

This avoids repeatedly fetching static HTTP resources from the backend pods. Instead, Nginx serves them directly from its cache, as if it were hosting a static website!

For GKE Users: What About Google’s Cloud CDN?

Preparation

Configuring the Web Application

  • Cache-Control: public, max-age=… tells the proxy that it may cache this URL, and for how long (see the check below).
  • Expires also works instead of max-age, but it is the legacy header.
  • Content-Length is typically required by various caches.
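
As a quick sanity check that the application really sends these headers, you can port-forward to the backend Service and inspect a static asset directly. This is only a sketch; the Service name, namespace, port and asset path are assumptions:

# Port-forward to the backend Service (name/namespace/port are assumptions)
kubectl port-forward -n default svc/mysite 8080:80 &
# Inspect the headers the application returns for a static asset
curl --head http://localhost:8080/static/logo.png
# Expect something like:
#   Cache-Control: public, max-age=86400
#   Content-Length: 5123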

Understanding the Nginx Configuration

http {
    ...
    # Declare a cache named static-cache, stored on local disk
    proxy_cache_path /tmp/nginx-cache levels=1:2 keys_zone=static-cache:2m max_size=100m inactive=7d use_temp_path=off;
    proxy_cache_key $scheme$proxy_host$request_uri;
    proxy_cache_lock on;
    proxy_cache_use_stale updating;

    server {
        listen 80;
        server_name example.com;
        ...

        location / {
            # Buffering is required, otherwise responses can't be cached
            proxy_buffering on;
            # Use the cache declared above
            proxy_cache static-cache;
            # Also cache 404 responses, but only briefly
            proxy_cache_valid 404 1m;
            # Serve stale content when the backend errors out or is being refreshed
            proxy_cache_use_stale error timeout updating http_404 http_500 http_502 http_503 http_504;
            # Skip the cache when the request carries an X-Purge header
            proxy_cache_bypass $http_x_purge;
            # Expose whether the response was a cache HIT or MISS
            add_header X-Cache-Status $upstream_cache_status;
            ...
            proxy_pass http://example-backend.internal;
        }
    }
}
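
Before moving this into Kubernetes, the standalone configuration can be syntax-checked with a throwaway Nginx container. This is only a sketch: the ... placeholders above must be filled in first, and the file is assumed to be saved as nginx.conf in the current directory:

# Syntax-check the config with the stock nginx image (file name is an assumption)
docker run --rm -v "$PWD/nginx.conf:/etc/nginx/nginx.conf:ro" nginx:stable nginx -t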

Configuring Nginx-Ingress Inside Kubernetes

controller:
  config:
    http-snippet: |
      proxy_cache_path /tmp/nginx-cache levels=1:2 keys_zone=static-cache:2m max_size=100m inactive=7d use_temp_path=off;
      proxy_cache_key $scheme$proxy_host$request_uri;
      proxy_cache_lock on;
      proxy_cache_use_stale updating;
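
If the ingress controller is installed with Helm, these controller values can be rolled out with an upgrade. This is only a sketch: the release name, chart and namespace are assumptions, and the block above is assumed to be saved as values.yaml. The per-site cache annotations then go on the Ingress itself, as shown next:

# Apply the controller config above (release name, chart and namespace are assumptions)
helm upgrade --install nginx-ingress ingress-nginx/ingress-nginx \
  --namespace infra \
  --values values.yaml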
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: mywebsite
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/proxy-buffering: "on"  # Important!
    nginx.ingress.kubernetes.io/service-upstream: "true"
    nginx.ingress.kubernetes.io/configuration-snippet: |
      proxy_cache static-cache;
      proxy_cache_valid 404 1m;
      proxy_cache_use_stale error timeout updating http_404 http_500 http_502 http_503 http_504;
      proxy_cache_bypass $http_x_purge;
      add_header X-Cache-Status $upstream_cache_status;

Configuring a Sub Path for Caching

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: mysite
  annotations:
    kubernetes.io/ingress.class: nginx
    ingress.kubernetes.io/force-ssl-redirect: "true"
    nginx.ingress.kubernetes.io/proxy-body-size: 10m
spec:
  tls:
    - secretName: mysite-ssl
      hosts:
        - mysite.example.com
  rules:
    - host: mysite.example.com
      http:
        paths:
          - path: /
            backend:
              serviceName: mysite
              servicePort: http
---
# Leverage nginx-ingress cache for /static/
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: mysite-static
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/proxy-buffering: "on"
    nginx.ingress.kubernetes.io/service-upstream: "true"
    nginx.ingress.kubernetes.io/configuration-snippet: |
      proxy_cache static-cache;
      proxy_cache_valid 404 10m;
      proxy_cache_use_stale error timeout updating http_404 http_500 http_502 http_503 http_504;
      proxy_cache_bypass $http_x_purge;
      add_header X-Cache-Status $upstream_cache_status;
spec:
  rules:
    - host: mysite.example.com
      http:
        paths:
          - path: /static/
            backend:
              serviceName: mysite
              servicePort: http
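
Assuming both manifests above are saved in a single file (the file name here is just an assumption), they can be applied and verified in one go:

# Apply both Ingress objects and confirm they were created
kubectl apply -f mysite-ingress.yaml
kubectl get ingress mysite mysite-static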

Testing That It Works

curl --head https://mysite.example.com/static/logo.png
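
The interesting header is X-Cache-Status, which the configuration-snippet above adds to every response. Roughly, the first request should report MISS and a repeated request HIT (output abbreviated; other values such as EXPIRED or BYPASS can appear too):

curl --head https://mysite.example.com/static/logo.png
#   X-Cache-Status: MISS   (first request, fetched from the backend and stored)
curl --head https://mysite.example.com/static/logo.png
#   X-Cache-Status: HIT    (repeat request, served from the Nginx cache)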

Debugging nginx.conf

KUBE_NAMESPACE=infra
NGINX_POD_NAME=$(kubectl get pods -n $KUBE_NAMESPACE --selector=app=nginx-ingress,component=controller -o name | cut -f1 -d' ')
kubectl exec -n $KUBE_NAMESPACE $NGINX_POD_NAME -- cat /etc/nginx/nginx.conf | less
server {
    server_name mysite.example.com;
    ...

    location /static/ {
        # Contents of our mysite-static ingress
        # All our proxy settings:
        proxy_cache static-cache;
        proxy_cache_valid 404 10m;
        proxy_cache_use_stale ...
        # ...
        proxy_pass http://upstream_balancer;
    }

    location / {
        # Contents of the top-level ingress
        # ...
        proxy_pass http://upstream_balancer;
    }
}
kubectl logs -f --since=5m -n $KUBE_NAMESPACE $NGINX_POD_NAME
kubectl describe ingress mysite-static
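
Instead of paging through the whole rendered config, it can be quicker to grep it for the cache directives. A small sketch, reusing the shell variables defined above:

# Confirm the http-snippet and per-location cache settings made it into nginx.conf
kubectl exec -n $KUBE_NAMESPACE $NGINX_POD_NAME -- grep -n 'proxy_cache' /etc/nginx/nginx.conf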

Clearing Cached Resources

  • Generate unique URL names per release (e.g. an md5 hash in the file name). This avoids the whole problem, as each new release changes the URL of the resource.
  • Move the Nginx cache to memcached or Redis, which are easier to purge.
  • Restart the Nginx-Ingress container to flush the cache (see the sketch after this list).
  • The commercial Nginx Plus edition offers better ways to purge the cache, e.g. by requesting the URL with an HTTP PURGE method. The community edition only supports the proxy_cache_bypass setting.
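
Since the cache lives on the controller's local disk (/tmp/nginx-cache above), restarting the controller pods is the bluntest way to drop it. A minimal sketch, assuming the controller runs as a Deployment named nginx-ingress-controller in the infra namespace:

# Restart the controller pods to flush the on-disk cache (names are assumptions)
kubectl rollout restart deployment/nginx-ingress-controller -n infra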

Final Words

  • ✅ Cheap performance wins.
  • ✅ Reduces load on backend pods.
  • ✅ No reliance on external services (like memcache).
  • ✅ Ideal for small clusters and low-traffic sites.
  • ❌ Running many Nginx replicas spreads the cache.
  • ❌ Restarting Nginx pods clears the cache.
  • ❌️ High traffic sites are better served with a full CDN.
