Caddy on Kubernetes

One way to expose your apps in Kubernetes to the outside world is an Ingress resource. Today, though, I find it lacking in features: I couldn't find a straightforward way to run multiple websites, each with its own TLS certificate, behind the Nginx Ingress.

A knowledgeable contact, Alistair A. Israel, suggested trying out Caddy. Implementing it was a breath of fresh air. The configuration is easy to understand, with terms and documentation written for humans.

How It Looks

To get this to work, you create a LoadBalancer service that points to your Caddy deployment. The Caddy deployment reads a config file (a Caddyfile) listing your websites and the in-cluster services they should proxy to.
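Here's a rough sketch of the two pieces. First, the LoadBalancer service; the name, labels, and ports are placeholders for whatever your Caddy deployment actually uses:

    apiVersion: v1
    kind: Service
    metadata:
      name: caddy
    spec:
      type: LoadBalancer
      selector:
        app: caddy          # must match the labels on your Caddy pods
      ports:
        - name: http
          port: 80
          targetPort: 80
        - name: https
          port: 443
          targetPort: 443

And a minimal Caddyfile (Caddy 2 syntax here; older versions use proxy instead of reverse_proxy). The hostnames and service names are made up, pointing at the in-cluster DNS names of your app services:

    blog.example.com {
        reverse_proxy blog.default.svc.cluster.local:80
    }

    shop.example.com {
        reverse_proxy shop.default.svc.cluster.local:8080
    }

Each site Caddy serves gets its own certificate automatically, and they all share the single load balancer.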

Tips

  1. Use NFS for the certificates.
    If you let Caddy provision the certificates, it has to write them to disk. Normally a persistent volume would do, but on a high-availability cluster with nodes spread across multiple availability zones, you need a volume that is reachable from every zone, so a Caddy pod scheduled into any zone can still read the certificates. If you're on AWS, use EFS (see the sketch after this list). Get this wrong and Caddy will keep requesting fresh certificates, which will quickly run you into your rate limits.
  2. Practice. Try Caddy locally to get a feel for it, and use Let's Encrypt's staging environment (shown below) or you risk having to wait a week before you can try again.
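Here's a rough sketch covering both tips, assuming the AWS EFS CSI driver is installed with a StorageClass named efs-sc (names and sizes are placeholders). The claim asks for ReadWriteMany so a pod in any zone can mount it, and the comments show where it hooks into the Caddy pod spec:

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: caddy-certs
    spec:
      accessModes:
        - ReadWriteMany            # shareable across nodes and zones (EFS/NFS)
      storageClassName: efs-sc     # assumes an EFS-backed StorageClass exists
      resources:
        requests:
          storage: 1Gi

    # In the Caddy Deployment's pod template (fragment):
    #   volumes:
    #     - name: certs
    #       persistentVolumeClaim:
    #         claimName: caddy-certs
    #   containers:
    #     - name: caddy
    #       volumeMounts:
    #         - name: certs
    #           mountPath: /data   # Caddy 2's default storage; Caddy 1 used /root/.caddy

While testing, point Caddy at the staging CA so failed attempts don't count against the production rate limits. In a Caddy 2 Caddyfile that's a global option:

    {
        acme_ca https://acme-staging-v02.api.letsencrypt.org/directory
    }

Remove it (or switch back to the production directory) once everything works.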

Pros

  • easy to configure
  • one load balancer for all of your websites
  • automatic provisioning and renewal of Let's Encrypt certificates
  • helpful community

Cons

All in all, I think Caddy is a fantastic server, and I'd like to see features that allow tighter integration with Kubernetes in the future.
