The world’s longest WordPress deployment


WordPress is famous for its 5-minute install. It turns out it takes a little longer when deploying with Kubernetes. So why did we put ourselves through this?


Since it’s 2018 and we all hate reading (and this isn’t an audio blog), the choose-your-own-adventure outline is below so you can jump to the part you want to read.

  1. The Story Behind the Problem
  2. The Solution to the Problem
  3. The Kubernetes Configuration

The Story Behind the Problem

We’re all about efficiency and empowering everyone on any team to complete their work with minimal bottlenecks. We put our marketing site together using WordPress so that our marketing team could make changes to copy and pages without having to request dev support.

Our core application is a completely agnostic front-end built with React and Redux, so we can hijack WordPress’ routing engine to forward all app-specific URLs to a single page that loads the application (woo, no subdomains). It’s also supported by a fair chunk of microservices. Seriously, we have something like 20 repos running the logo maker alone (post about that coming soon):


As a result, we ran our WordPress build on its own instance, all by its lonesome. This setup was fine since it mostly worked as a hand-off to the application, where the real server loads existed.

However, a unique experience forced us to rethink our lax approach to WordPress. One day we awoke at 5 AM to messages about our site being down. But the site wasn’t down, and traffic wasn’t higher than usual, so why weren’t pages loading?

Well, something was different: we had an unusually high number of visitors from China, and these users displaced our usual North America traffic distribution. In a nutshell, here was the problem.

The Solution to the Problem

We got super popular on Weibo, and everyone and their best friend behind the great firewall was trying to get onto our site to make a logo.

We use caching, but we deliberately don’t cache the page that loads the React app, so that every continuous deployment can bump the version without waiting on a cache to clear. The trade-off is that each request to that page requires the server to do some work to fetch it.
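As a rough sketch, a bypass rule like the following keeps the app’s entry page uncached while everything else stays cacheable. (This is illustrative only; the `/app` path and the exact header values are assumptions, not our actual config.)

```apache
# Illustrative: mark the SPA entry page as uncacheable so each deploy's
# new bundle version is picked up immediately. Everything else on the
# site can still be cached normally.
<LocationMatch "^/app">
  Header set Cache-Control "no-cache, no-store, must-revalidate"
</LocationMatch>
```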

Each request for that page needs an Apache worker to serve it (even though the application’s JavaScript comes from our CDN). As we upped the number of Apache workers, the website still didn’t load any faster, and the number of users on the site kept climbing.

We eventually reached over 20,000 workers on a 16 GB, 4-core instance, and still more and more users arrived…but the website was still loading inconsistently. The problem here was obvious: on one side of the internet, an unknown number of users from China were trying to get onto the site. They were requesting all the available resources we kept adding, and consuming them before users from our core markets could.

We could up the workers all day long and watch the user count continue to climb, with no way of knowing where the end was. So we solved it by blocking all IPs coming from China to stop them from even reaching the server. What a horrible solution. But we had to do it: we had no load balancer in place, a poor caching strategy for WordPress, and no autoscaling.

This led to some new thoughts on a solution:

1) Improve our caching and leverage CloudFront better. Admittedly, we were pretty lazy setting it up for the WordPress page because our app resources were the bigger request problem. CloudFront is also expensive compared to some of the really great WordPress caching tools, which are free.

2) Put an autoscaling strategy in place for our marketing site. Why? Because WordPress caching plus autoscaling is cheaper than CloudFront, and we don’t want to have to ask CloudFront to invalidate its cache for every single continuous deployment we do.

3) Manage all of our microservices and servers from one location.

The Kubernetes Configuration

Here’s how we planned on putting a solution in place:

  1. Create Docker images for the WordPress deployment and the MySQL database.
  2. Create the Kubernetes Deployment and Service for WordPress, as well as a persistent volume for the database.
  3. Make sure we have SSL, and set up autoscaling on a dedicated node using affinity and anti-affinity rules.

Our WordPress deployment is the easiest thing ever. Since day one, we’ve been using Docker’s official WordPress image, so all we had to do was build a custom theme for our site.
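A minimal sketch of what that Dockerfile can look like (the image tag here is a placeholder; pin whichever version you actually run):

```dockerfile
# Sketch: extend the official WordPress image with a custom theme.
# The base image ships Apache + PHP + WordPress core already configured.
FROM wordpress:php7.2-apache

# Bake the custom theme (the LJ folder) into the image so every
# replica ships with it.
COPY ./LJ /var/www/html/wp-content/themes/LJ
```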

Below are our Dockerfiles, plus some screenshots to back up that claim:


You’ll notice we have two Dockerfiles: one for the WordPress service and one for the database. Before this, we only had the first, and we would just load up an empty MySQL image for the database.

To prepare for the persistent volume, we made our own image based on the mysql:5.7 image. Then we updated the service to use this image instead. The LJ folder contains our theme. How easy is this?
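A sketch of the database image described above (what you bake in is illustrative; the config and init paths are assumptions):

```dockerfile
# Sketch: a custom image based on mysql:5.7, as described above.
FROM mysql:5.7

# Illustrative additions: server tuning config, plus init scripts that
# the official image runs on first boot of an empty data directory.
COPY ./conf/my.cnf /etc/mysql/conf.d/custom.cnf
COPY ./init/ /docker-entrypoint-initdb.d/
```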

Step two was to get these images into AWS’s Elastic Container Registry (ECR). A little Makefile helps nicely with this flow.
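The flow is roughly build, authenticate, push. A hedged sketch of such a Makefile follows; the account ID, region, and repository names are all placeholders, not our real ones:

```makefile
# Illustrative targets for pushing images to ECR.
REGISTRY := 123456789012.dkr.ecr.us-east-1.amazonaws.com
TAG      := latest

login:
	aws ecr get-login-password --region us-east-1 | \
		docker login --username AWS --password-stdin $(REGISTRY)

push-wordpress: login
	docker build -t $(REGISTRY)/wordpress-site:$(TAG) -f Dockerfile .
	docker push $(REGISTRY)/wordpress-site:$(TAG)

push-mysql: login
	docker build -t $(REGISTRY)/wordpress-db:$(TAG) -f Dockerfile.mysql .
	docker push $(REGISTRY)/wordpress-db:$(TAG)
```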


Step three involved configuring Kubernetes. I’ll spare you the long-winded explanations and just show you the configuration files. Overall there are 7 of them:

- 1 namespace config
- 1 persistent volume config
- 2 deployment configs (MySQL + WordPress)
- 2 service configs (MySQL + WordPress)
- 1 ingress config

So there you go: Service and Deployment configs for the database and the WordPress build. Sprinkle in an Ingress with SSL from Let’s Encrypt, and you’re done (if you want to learn about nginx-ingress + cert-manager, check out this post).
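For a flavor of what those files contain, here’s a trimmed, illustrative WordPress Deployment. The names, labels, node-affinity values, and image URI are placeholders; the real files carry more detail. The affinity block is what pins these pods to the dedicated node mentioned earlier:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: wordpress
  namespace: marketing          # matches the namespace config
spec:
  replicas: 2
  selector:
    matchLabels:
      app: wordpress
  template:
    metadata:
      labels:
        app: wordpress
    spec:
      # Pin WordPress pods to the dedicated node pool (label is illustrative)
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: role
                    operator: In
                    values: ["marketing"]
      containers:
        - name: wordpress
          image: 123456789012.dkr.ecr.us-east-1.amazonaws.com/wordpress-site:latest
          ports:
            - containerPort: 80
          env:
            - name: WORDPRESS_DB_HOST
              value: mysql:3306   # the MySQL Service name
          resources:
            requests:
              cpu: 250m           # a CPU request is needed for CPU-based autoscaling
```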

You can add horizontal autoscaling through a YAML file or with the kubectl CLI. Either way, you can read more about it here.
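Both routes end up in the same place. As a sketch, the manifest version looks roughly like this (the deployment name, namespace, and scaling targets are placeholders):

```yaml
# Illustrative: CPU-based horizontal autoscaling for the WordPress deployment.
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: wordpress
  namespace: marketing
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: wordpress
  minReplicas: 2
  maxReplicas: 10
  targetCPUUtilizationPercentage: 70
```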

Looka Engineering

Creative engineers, data scientists, and designers making great design accessible and delightful for everyone.

Looka (formerly Logojoy)

Written by

Looka’s AI-powered design platform lets you create a logo, make a website, and build a brand you love.
