How to Host WordPress Like a Boss

Scale WordPress on AWS with Docker

Tom Maiaroto
Serif & Semaphore
Aug 24, 2016


If your primary business is not to build a CMS, then WordPress becomes a very attractive option. I also have to thank it because it was one of the first open-source projects that helped me learn PHP.

Yes, I’ve moved on to PHP frameworks and understand why WordPress doesn’t scale well… but there’s also a reason why companies like Pantheon exist, and why WordPress is still popular to this day.

In all fairness, I’ve long maintained that WordPress doesn’t scale. Trust me: having built sites like UsMagazine.com (which did use WordPress at first), I’ve seen WordPress and Drupal fail hard. When you work on sites with that kind of traffic, you realize it quickly.

So it’s been 8 years? 10? I’ve lost count. Let’s just say it’s been many years since I used WordPress, but I recently came back to it.

Though this time around I wanted to see if I could find a scalable solution using the latest web hosting technology. And through my previous research and experimentation with AWS and microservices, I have.

Want to host WordPress like a boss? OK, the easy answer is Pantheon. However, if that’s not an option, or you want more control over the situation, I’ve come up with a pretty awesome solution. Ready for the winning formula?

Docker + Nginx + PHP-FPM + Amazon ECS + EC2 (Auto Scaling) + ELB + EFS + RDS (MariaDB) + CloudWatch = Winning

If that makes sense, you can stop reading. It’s ok if you keep reading though because I just threw a metric shit ton of acronyms at you. No one should be expected to understand that, so I’ll gladly break it down.

The scalable and resilient architecture in a nutshell.

Let’s start with the easy one — Docker. Docker is taking the hosting world by storm. It’s rapidly transforming the landscape at a pace I’ve never seen before.

Docker lets us containerize our server and, optionally, our application together (some people used Amazon AMIs for this in the past; this is cooler). Containers can then be configured to work with other containers and scale very effectively. There are many more benefits, but I won’t go on about them here.

Amazon ECS is a service that makes Docker management and deployment a breeze. It lets you configure the containers, keep everything under version control, and it schedules where those containers run.

In this case I’m using Docker purely as a service (Amazon ECS Tasks/Services) that acts as the “web server.” It runs Nginx and PHP-FPM. That’s it. In this setup, it does not host the codebase. That’s atypical of many Docker architectures you’ll see, where the codebase is baked into the image so that image tags can be used to version application deployments.

Put another way, only an Nginx or PHP config change would prompt a new Docker image and a deploy of a new version. This keeps your Docker image very stable. You’ll almost forget about it.
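As a rough sketch, the web-server image can stay that small. Something like this (the base image, config paths, and `start.sh` wrapper here are illustrative assumptions, not my exact files):

```dockerfile
# Hypothetical web-server image: Nginx + PHP-FPM only, no codebase.
# The code itself lives on EFS and is mounted in at runtime.
FROM php:7.0-fpm-alpine

RUN apk add --no-cache nginx

# Only our server config is baked in, so the image rarely changes.
COPY nginx.conf /etc/nginx/nginx.conf

# A tiny wrapper that starts php-fpm and keeps nginx in the foreground.
COPY start.sh /start.sh
CMD ["/bin/sh", "/start.sh"]
```

Because nothing application-specific is in here, rebuilding and redeploying this image is a rare event.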

Why didn’t I put WordPress in the container? Because of user uploads. Assets in a WordPress application don’t go off to a shared storage solution like S3 or MongoDB GridFS. They go straight to where the codebase lives, and that’s a problem if you want to think of your Docker containers as immutable so they can be easily thrown away and restarted.

The user upload solution? Amazon EFS

Amazon EFS is a relatively new service; it’s basically a modern NAS. It uses the NFS protocol, but it’s performant (at least for reads).

I put the entire codebase on EFS and not just the uploads directory. This also makes cloning environments, backups, and deployments much easier.

Docker containers in ECS can mount EFS. So it was trivial to have Nginx configure its webroot to use a directory on EFS. Now each of your Docker web servers can read from the same version of the codebase and access the same file assets.
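In an ECS Task definition, that wiring is just a host volume plus a mount point. A sketch (every name, path, and number below is a placeholder — and note that in this setup the EFS filesystem is NFS-mounted on the EC2 hosts at `/mnt/efs`, e.g. via `/etc/fstab`, and handed to Docker as a host path):

```json
{
  "family": "wordpress-web",
  "volumes": [
    { "name": "efs-code", "host": { "sourcePath": "/mnt/efs/wordpress" } }
  ],
  "containerDefinitions": [
    {
      "name": "web",
      "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/wp-web:latest",
      "cpu": 256,
      "memory": 512,
      "portMappings": [{ "containerPort": 80, "hostPort": 0 }],
      "mountPoints": [
        { "sourceVolume": "efs-code", "containerPath": "/var/www" }
      ]
    }
  ]
}
```

The `cpu` and `memory` values are also what the ECS scheduler uses to decide where the container fits, which matters for the scaling discussion below.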

Scaling the web server

In this case, our web server runs Nginx with PHP-FPM (with PHP7). However, there are other solutions for WordPress you could use here. You could use Apache for example, but I prefer Nginx.

For those unfamiliar, Amazon ECS runs Docker containers on EC2 instances. Those EC2 instances can be part of an auto-scaling group. The ECS Tasks can also run in their own auto-scaling group defined as an ECS Service.

Confused? Here’s what happens: Amazon ECS uses EC2 instances like a resource pool. Your ECS Tasks are defined to require a certain amount of CPU and RAM. ECS’ scheduler runs those containers on the pool of EC2 instances where they fit. You must have enough EC2 instance resources to run a Task/Service.

When using ECS, just think of EC2 as total RAM and CPU available across all subscribed instances and never think about individual servers again. It’s all about resource units — not how many servers you have.

You can assign how much CPU and RAM each container uses as well as a host of other things.

The ECS Service also has rules for scaling: if a certain number of containers are unhealthy, start another one. There are various rules you can set. That’s great, but you also need to ensure you have enough EC2 instances to place those containers on. So those need to auto-scale too.

With a bit of setup, you could end up with a situation where you’ll always have Amazon turn on more servers and run more containers to keep up with demand. Set it and forget it. Save money.
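Since I mention Terraform later anyway, here’s roughly what the Service-level half of that looks like in Terraform (the cluster and service names are made up, and the EC2 auto-scaling group needs its own, separate policy alongside this):

```hcl
# Hypothetical scaling bounds for the ECS Service; the EC2 auto-scaling
# group feeding the cluster needs its own policy as well.
resource "aws_appautoscaling_target" "web" {
  service_namespace  = "ecs"
  resource_id        = "service/wordpress-cluster/wordpress-web"
  scalable_dimension = "ecs:service:DesiredCount"
  min_capacity       = 2
  max_capacity       = 10
}
```

Attach scaling policies to that target (CPU-based, for example) and ECS will move `DesiredCount` up and down within those bounds for you.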

Load balancing

Of course you need to use Amazon Elastic Load Balancer with ECS and EC2. Otherwise you’d be running your site on just one EC2 server despite having a bunch up and running with Docker containers.

This is relatively straightforward to set up and comes with a whole other set of useful options.

The Database

The last piece here is the database. I chose MariaDB instead of MySQL, but in either case you’ll want to leverage Amazon RDS.

Amazon RDS makes it easy to scale the database and back things up. There are also WordPress plugins that let you take advantage of database replication.

Bonus! CloudWatch

What about centralized logging? Conveniently enough, ECS Tasks can log out to Amazon CloudWatch. You create a log group and configure it in your ECS Task definition. One important note: the log group must exist before you run your Task, and the error message won’t explain that if you point a Task at a non-existent log group.
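In the container definition, that hookup is just a `logConfiguration` block (the group name and region are placeholders — and again, create the group first):

```json
"logConfiguration": {
  "logDriver": "awslogs",
  "options": {
    "awslogs-group": "wordpress-production",
    "awslogs-region": "us-east-1"
  }
}
```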

Log groups are also convenient when it comes to multiple environments. While I have to admit that looking through logs in the AWS web console isn’t the prettiest experience, it does work.

Protip: You can “tail” AWS logs using this tool https://github.com/jorgebastida/awslogs

Just ensure your Nginx (and anything else you want) logs out to stdout and stderr. Amazon will pick that up and throw it into CloudWatch. Pretty cool, huh?

The error_log setting that will allow Nginx logs to be picked up by CloudWatch.
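The relevant Nginx lines are short (a sketch — adjust the log level and format to taste; the point is simply writing to the container’s stdout/stderr so the awslogs driver can forward everything):

```nginx
# Send logs to the container's stdout/stderr so Docker's awslogs
# driver picks them up and ships them to CloudWatch.
error_log /dev/stderr warn;
access_log /dev/stdout;
```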

Wiring it all up

I’ve covered the major components of the architecture, but there’s actually a lot that goes on in terms of configuration and wiring it all up.

I’d like to adjust the Dockerfile I’m using to be a bit more generic and then also set up some Terraform files, or maybe CloudFormation. Until then, I’ll skip diving too deep into the Dockerfile or the configuration scripts. If you’re familiar with Docker and deploying code, I’m sure you’ll be able to come up with something similar, and very likely better suited to your needs.

One of the most important things to note is to leverage environment variables, because you can reference them in your ECS Task definition as well as override them when you launch each Task: things like your webroot path or your database connection information.

By using environment variables and a setup script, you can really ensure that you’re only building one Docker image for your staging and production environments.
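As a tiny illustration of that idea (the `WEBROOT` variable and the vhost shape are hypothetical, not my exact config), a setup script can render the webroot into the Nginx vhost from an environment variable, so the same image works in any environment:

```shell
#!/bin/sh
# Render an Nginx vhost from an environment variable so one Docker
# image can serve staging and production. WEBROOT is a made-up name;
# it defaults to a path on the mounted EFS volume.
render_vhost() {
    webroot="${WEBROOT:-/mnt/efs/current}"
    cat <<EOF
server {
    listen 80;
    root ${webroot};
    index index.php index.html;
}
EOF
}

# An entrypoint would write this to /etc/nginx/conf.d/ before starting
# Nginx; here we just print it.
render_vhost
```

Override `WEBROOT` per environment in the ECS Task launch and the container configures itself.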

Deployment strategy & other concerns

Once again I turn to ECS and Docker for deployment. I created a separate Docker image that is responsible for grabbing the latest codebase and getting it onto EFS.

This deployment Task can be as simple or as complex as you need. It could create full backups simply by copying directories, so you can roll back if you need to. Again, keep in mind that WordPress puts uploads within your codebase.

I found deployments pretty manageable with a simple shell script. Literally all you need to do is get the latest files on EFS.
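Here’s a hedged sketch of that deploy step (the directory layout and names are my own invention; your script will differ): copy the new code onto EFS as a timestamped release, skip the uploads directory, and flip a symlink so rollback is just re-linking.

```shell
#!/bin/sh
# Hypothetical deploy step: push a codebase checkout onto EFS as a
# timestamped release and flip a "current" symlink. All paths are
# illustrative. WordPress keeps uploads inside the codebase, so the
# uploads directory is excluded and lives permanently on EFS.
set -eu

deploy() {
    src="$1"        # e.g. a fresh git checkout
    efs_root="$2"   # e.g. /mnt/efs/wordpress
    release="$efs_root/releases/$(date +%Y%m%d%H%M%S)"

    mkdir -p "$release"
    # Copy the code but not user uploads.
    (cd "$src" && tar cf - --exclude='./wp-content/uploads' .) \
        | (cd "$release" && tar xf -)
    # Atomic cutover; rollback is re-pointing this symlink.
    ln -sfn "$release" "$efs_root/current"
}
```

Point the Nginx webroot at `current` and every web container sees the new release the moment the symlink moves.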

The great thing about separating the codebase from the Docker image is that Nginx and PHP config updates can be deployed separately from the codebase. Even if you like to keep everything in the same GitHub repo.

Aside from keeping your site up to date, you’ll occasionally need to update your web server. This is as simple as pushing a new image to the registry (ECR) and updating your ECS Task (unless you like to live on the edge and use the latest tag, in which case you just kill your ECS Tasks and the next ones to come online are automatically updated). You could also use something like Terraform.io to manage this from your console or from an automated process somewhere.

You could create additional ECS Tasks (and Docker images) for other needs. For example, you could have an ECS Task that clones the current production filesystem and database and sets up a whole new environment for development. Keep in mind you can also use the AWS SDK to run ECS Tasks from other ECS Tasks. Yes, you could basically replicate the features you see in a service like Pantheon.

That’s all for now, hope this has given you some ideas to go on for your next WordPress project.
