Our Headless WordPress Journey part I: speeding up the REST API

Lodewijk van der Meer
6 min read · Feb 15, 2019

At my company, our team, which focuses on designing and building websites for our clients, set itself a challenge last year: to make the most of WordPress as a headless CMS, serving its content over a REST API.

Ever since that day we have had an interesting journey with ups and downs. We have learned a lot and feel that some of those lessons are worth sharing. And we are not only sharing the story. Along the way we also developed a set of tools to improve the WordPress REST API. We have open sourced the first of those tools through the public WordPress Plugin Directory to share our code with the community as well, and plan to release more in the near future.

Photo by Scott Webb from Pexels

(A short disclaimer: I’m not going into whether you should want to use WordPress as a headless CMS or general pros and cons of WordPress. The web is full of articles on that. If you’re into that, google away and knock yourself out. We have our reasons for using WordPress as one of our tools as do some of our clients. Let’s take that for a fact for now.)

Our setup

To begin with, the setup that we use when developing our projects is as follows:

  • We host a WordPress CMS on its own domain and infrastructure, using it only for the CMS Admin interface and the REST API (and sometimes serving uploads and assets when there is no separate CDN).
  • We deploy the website frontend as a Vue/Nuxt application to Netlify with pre-rendering support.

One of the first things we ran into with this setup is the (lack of) speed of the WordPress REST API. Calls to the API take some time to finish, and in a headless setup all data needs to go through it. As we all know, getting WordPress to perform fast can be an art in itself. We regularly deal with WordPress installations that need high performance, but our usual recipe for that does not work when it comes to speeding up the REST API.

Generating a site as part of the deployment to Netlify is an interesting use case to test the performance of the API. All routes available in the website or application are crawled so that these routes can be pre-rendered on Netlify. This means that the API has to serve all the generic calls for the site, plus all the calls needed for all routes, basically ‘at once’. Our first attempts led to the REST API crashing our test server: it looked like a kind of (non-distributed) DoS attack. Something needed to change and, although ‘allocating more RAM and CPU’ would probably also have worked, we prefer more elegant strategies.

Need for speed

The first thing we looked into was how to reduce the number of requests. We tried bundling responses and building smarter single responses that combine multiple types of data. This was fine for some situations, but we soon found ourselves developing all kinds of custom endpoints in the API for every specific combination of data a frontend might need, and to us that is not what an API should be doing.
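To give an idea of what that looked like, here is a sketch of the kind of combined endpoint we experimented with (the namespace, route name and data mix here are hypothetical examples, not the actual endpoints we built):

```php
<?php
/**
 * A hypothetical combined endpoint bundling several types of data
 * into one response, so the frontend needs fewer requests.
 */
add_action( 'rest_api_init', function () {
	register_rest_route( 'myagency/v1', '/home-bundle', array(
		'methods'             => 'GET',
		'permission_callback' => '__return_true',
		'callback'            => function () {
			return array(
				// The latest posts, trimmed down to what the homepage needs.
				'latest_posts' => array_map(
					function ( $post ) {
						return array(
							'id'    => $post->ID,
							'title' => $post->post_title,
						);
					},
					get_posts( array( 'numberposts' => 5 ) )
				),
				// All categories, for the navigation.
				'categories'   => get_terms( array(
					'taxonomy'   => 'category',
					'hide_empty' => false,
				) ),
			);
		},
	) );
} );
```

The problem is visible right away: every new page layout asks for a slightly different bundle, and each bundle becomes another endpoint to maintain.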

Throttling the requests during site generation helped to keep the server alive, but it caused deployments to take far too long and, again, didn’t feel like an effective strategy. It became clear that we needed to speed up the REST API itself, with proper caching as a first step.

To our surprise, we found the existing plugins for this to be outdated and/or inefficient. This is when the team started to lay the foundation of what is now the WP REST Cache plugin: a plugin that caches responses from the REST API and speeds it up drastically.

The hard part

Of course, caching the REST API starts with creating a framework for caching responses for all default WordPress endpoints for post types and taxonomies. We chose to use the WordPress Transients API for this. This is the part that the existing plugins covered as well. But because all of our WordPress projects have a high level of customization, just sticking to the basics isn’t enough for us. That is why we ended up including some trickier features as well.
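To illustrate the underlying technique, here is a minimal sketch of transient-based caching of REST responses, built on WordPress’ rest_pre_dispatch and rest_post_dispatch filters. This is not the plugin’s actual implementation (it does far more, such as invalidation and per-endpoint rules), and the wprc_demo_ naming is made up:

```php
<?php
/**
 * A minimal, naive sketch of caching REST responses in transients.
 * Illustration only: it caches everything, including error responses.
 */
function wprc_demo_cache_key( WP_REST_Request $request ) {
	// Build a cache key from the route and its query parameters.
	return 'wprc_demo_' . md5( $request->get_route() . serialize( $request->get_query_params() ) );
}

// Serve a cached response if we have one; returning a non-null value
// from rest_pre_dispatch short-circuits the normal dispatch.
add_filter( 'rest_pre_dispatch', function ( $result, $server, $request ) {
	$cached = get_transient( wprc_demo_cache_key( $request ) );
	return ( false !== $cached ) ? $cached : $result;
}, 10, 3 );

// Store each response in a transient for an hour.
add_filter( 'rest_post_dispatch', function ( $response, $server, $request ) {
	set_transient( wprc_demo_cache_key( $request ), $response, HOUR_IN_SECONDS );
	return $response;
}, 10, 3 );
```

Even a sketch like this shows why caching alone isn’t enough: the hard part is knowing when a cached response has become stale, which is where the features below come in.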

Registering custom endpoints

We often work with custom endpoints. The plugin offers a method of easily registering (and unregistering) your custom endpoints for caching.

/**
 * Register the /wp-json/acf/v3/posts endpoint so it will be cached.
 */
function wprc_add_acf_posts_endpoint( $allowed_endpoints ) {
    if ( ! isset( $allowed_endpoints['acf/v3'] ) || ! in_array( 'posts', $allowed_endpoints['acf/v3'] ) ) {
        $allowed_endpoints['acf/v3'][] = 'posts';
    }
    return $allowed_endpoints;
}
add_filter( 'wp_rest_cache/allowed_endpoints', 'wprc_add_acf_posts_endpoint', 10, 1 );
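Unregistering works through the same filter. As a sketch (assuming the endpoint was registered as above), a later-priority filter could remove an endpoint from the allowed list again:

```php
<?php
/**
 * Unregister the /wp-json/acf/v3/posts endpoint so it is no longer
 * cached. A sketch: it simply removes the entry a registration filter
 * added earlier, by running at a later priority.
 */
function wprc_remove_acf_posts_endpoint( $allowed_endpoints ) {
	if ( isset( $allowed_endpoints['acf/v3'] ) ) {
		$allowed_endpoints['acf/v3'] = array_diff( $allowed_endpoints['acf/v3'], array( 'posts' ) );
	}
	return $allowed_endpoints;
}
add_filter( 'wp_rest_cache/allowed_endpoints', 'wprc_remove_acf_posts_endpoint', 20, 1 );
```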

Targeted flushing of caches on updating related or nested items

Our projects almost always deal with custom post types that have multiple relations with other items and taxonomies. For the caching to be efficient, the plugin needs to do more than flush the cache for an item when that item is updated: it should also flush the cache for all other items that the updated item is part of or related to. And that is exactly what it does now, auto-magically.
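Doing this by hand quickly gets unwieldy. Here is a simplified sketch of what such targeted flushing involves, using the hypothetical wprc_demo_ transient names from earlier (the plugin tracks these relations for you, so you don’t have to write this):

```php
<?php
/**
 * A hand-rolled sketch of targeted cache flushing: when a post is
 * saved, flush the cache for the post itself AND for the taxonomy
 * terms it belongs to, since their endpoints embed this post.
 * The transient names are hypothetical.
 */
function wprc_demo_flush_related( $post_id, $post ) {
	// Ignore revisions; only react to real content updates.
	if ( wp_is_post_revision( $post_id ) ) {
		return;
	}

	// Flush the cache for the item itself.
	delete_transient( 'wprc_demo_post_' . $post_id );

	// Flush the caches of all terms this item is related to.
	foreach ( get_object_taxonomies( $post ) as $taxonomy ) {
		$term_ids = wp_get_post_terms( $post_id, $taxonomy, array( 'fields' => 'ids' ) );
		foreach ( $term_ids as $term_id ) {
			delete_transient( 'wprc_demo_term_' . $term_id );
		}
	}
}
add_action( 'save_post', 'wprc_demo_flush_related', 10, 2 );
```

And this sketch only covers taxonomy terms; real projects also have relations between posts themselves, which is why we wanted the plugin to handle all of this automatically.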

The devil is in the detail

Of course we couldn’t leave out some smaller but important features for the plugin to be ready to ship/share:

  • Cache timeout settings
  • Cached items overview, including a hit counter and flushing options (all or a selection)
  • Cached item details and contents

Benchmarking

With all those things covered in the plugin we have released now, we see significant improvements in performance.

When doing 1,000 requests with no concurrent requests, we see 2–3 times faster response times and all requests being finished 4 times faster than without caching.

With 10 concurrent requests, we see 99% of calls finishing in under 300ms when caching is enabled compared to under 1% with no caching.

And with an even higher load (100 concurrent requests, the same amount of requests in total), the response time for an individual request is even 4–5 times faster.

These tests were done with ApacheBench (ab) on a clean WordPress installation on a local machine. All requests were made to the posts endpoint (/wp-json/wp/v2/posts). In the accompanying charts, the Y-axis shows the time needed per request and the X-axis the time needed to finish all requests. You can imagine that the performance improvement would be even more significant in cases where more complex custom endpoints are cached.

Stay tuned

All credits for this work and the plugin go to our dev team at Acato in Utrecht: Merel, Ramon, Remo and in particular Richard and Yoeri who directly contributed to the plugin.

While they continue to build high-performance Vue websites for our clients, we also keep improving our implementation of WordPress as a headless CMS API for our websites. If you’re interested in these topics, feel free to follow us on Instagram, Facebook or LinkedIn, or get in touch. In future articles we will share other lessons learned along the way as well, such as:

  • Using Yoast SEO in your Headless WordPress setup
  • Using Gravity Forms in your Headless WordPress setup
  • Setting high scores in Google Lighthouse with your Vue/Nuxt application

We look forward to hearing your thoughts on the plugin and strategies for using WordPress as a Headless CMS as well of course!


Lodewijk van der Meer

Aiming to make a change while having a good time at my companies Acato (digital agency), ON socks (5 socks in a box) and Techtical (digital transformation).