Caching 101: Make webpages blazing fast with Redis & Varnish

Our daily job is to build the internet of tomorrow, and we do that by working with clients who share that ambition. In this case we're working with one of the biggest food suppliers for health care, business and retail. They are a grocery shop, wholesaler and food service in one, and they provide all of these services via their web-based platform. Their physical locations are distribution centers only, so all sales go through the online platform.

Their current user base is around 6,400 users (people who actually order). Customers can order their products before 11:00 for same-day delivery, which means there is a peak of 150 concurrent users from 08:00 till 12:00 every day. On top of that there are edge cases like Christmas and Easter, when the peak is even higher.

The first thing the product owner told us when we were introduced to the project was:

"Our users like speed, they want fast loading pages. If they need to order something on our platform and they have to wait, they will shop less."

So what seems to be the issue?

While building the platform, more and more features were integrated into the same page. Our back-end is built on the Python framework Pyramid, with a PostgreSQL database, all hosted on Amazon Web Services (AWS). The front-end is built in ReactJS (developed by Facebook) and runs on its own server.

Every feature added to a page means more API calls and more loading time. From the start we've implemented progressive loading: the initial load is minimal, and we show placeholder blocks where the actual data will appear.

Example of loading the platform on the Slow 3G setting in Google Chrome

So, to give a better understanding of the whole problem, we'll use the product overview page as an example. This page shows the user the following information:
- Categories
- Products
- Orders
- Favorites
- Notifications
- Current cart
- All carts

We need to keep in mind that product prices depend on the current user and the date, so this isn't static information. On top of that we have variants of each product: you could order one bottle, a 4-pack or a pallet of soda (grocery and wholesale in one).

Types of data

A lot of the data is recurrent, like products and prices. Some data, such as product categories, hardly changes over time. Then there is volatile data like the cart, and there are orders, which can change but not that often. The result is an unnecessary amount of load. So how can we eliminate it?
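The categories above can be sketched as a hypothetical per-data-type expiration policy. The names and times below are illustrative, not the platform's actual configuration; the idea is simply that more volatile data gets a shorter time-to-live.

```python
# Hypothetical cache expiration policy per data type (TTL in seconds).
# The values are illustrative, not the platform's real configuration.
CACHE_TTL = {
    "categories": 24 * 3600,  # hardly ever changes: cache for a day
    "products": 3600,         # recurrent but user/date-dependent: one hour
    "orders": 300,            # changes occasionally: five minutes
    "cart": 0,                # volatile: never cache
}

def ttl_for(data_type: str) -> int:
    """Return the TTL for a data type, defaulting to no caching."""
    return CACHE_TTL.get(data_type, 0)
```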


Scale up: Because of our fancy setup and dedicated back-end server, we could simply scale up. The downside is that we still can't guarantee a fast-loading page, and it can get expensive in a production environment.

Scale down the design: We could tell our designers during the sprints that they can't design a fancy, user-experience-focused platform and should show the least amount of data per screen. The downside is that this is not the Label A way. All our designs are user-centered, so leaving out information because of technical limitations is not an option.

Caching: We could use different kinds of caching so we can serve the data quickly. This is expensive in development, because we need more time to build the same functionality, but it's a one-off cost instead of a monthly expense like scaling up.

Caching is the best option for now, so we started looking into what kind of caching we wanted to use.

Types of caching

Browser Caching
Pros:
- Simple
- Fast: no network latency
- No code modifications
Cons:
- Almost no control over the data

HTTP Caching
Pros:
- Simple
- Configured via a config file
- No code modifications
Cons:
- Does not scale
- Little control over the data

Service Caching
Pros:
- Absolute control over the data
- Scales well
Cons:
- Adds complexity to the code base
- Harder to maintain

Service Caching with Dogpile

Dogpile (dogpile.cache) is a powerful Python library that can cache data in memory, in backends like Redis, in a file cache or via a proxy. Within our project, views are served via a service, which uses a repository to retrieve data from the database based on our models.

When a request comes in via the API we can ask the service for the cached version of the data.

Example of an API class
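The screenshot isn't reproduced here, so the following is a hypothetical stand-in for such an API class. In Pyramid the method would be registered with a @view_config decorator and a JSON renderer; here the service is injected directly so the sketch stays self-contained, and all names are illustrative.

```python
# Hypothetical API layer. In Pyramid this method would typically be
# registered with @view_config(route_name="products", renderer="json");
# the service is passed in directly to keep the sketch self-contained.
class ProductAPI:
    def __init__(self, product_service):
        self.product_service = product_service

    def list_products(self, user_id, on_date):
        # Ask the service for the (possibly cached) product data.
        return self.product_service.get_products(user_id, on_date)
```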

The service will pass this information on to the repository.

Example of a service class
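As a hypothetical sketch of such a service class (the names and the `from_cache` flag are illustrative): the service does no data access itself, it only passes the request on to the repository and indicates whether the cached version of the data may be used.

```python
# Hypothetical service layer: delegates to the repository and flags
# whether the cached version of the data may be used.
class ProductService:
    def __init__(self, repository):
        self.repository = repository

    def get_products(self, user_id, on_date, from_cache=True):
        return self.repository.fetch_products(
            user_id, on_date, from_cache=from_cache
        )
```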

Finally, the repository adds a parameter to the session, so the session knows to get the data from the Dogpile cache instead of the database.

Example of a repository class

The query and its results are cached in Redis, so when the same query comes in again, Dogpile returns the cached version. If there is no cached version, it tells the session to get the data directly from the database, and it automatically stores the query and its result in Redis.

Another big advantage of service caching is that we control the cached data. So whenever we update a certain product, we can simply invalidate the query afterwards.

Added invalidation to the repo example

Next to that, we can create explicit cache policies that invalidate a specific part of the cached data, based on rules for a specific model or repository.

What does this mean for the platform?

Right now we can load the whole product overview page within 1.8 seconds, including all JavaScript and images. The Time To First Byte is 64 ms, the fastest API call takes 186 ms (528 bytes) and the slowest 592 ms (1.4 kB). With this caching implementation, the response times are roughly 3x faster.

Google Chrome network tab

But we can do more. Our next step is to load the images via a Content Delivery Network (CDN); we will write an in-depth article about the use of CDNs soon, so keep an eye on our Medium! And while Dogpile is great at caching, Pyramid's render service still has to convert the data to JSON. This is one of the biggest bottlenecks, so there is still more than enough work to do.

If you liked reading this article, if you have an idea on how to solve our JSON rendering bottleneck, or if you're interested in working on a complex platform: leave a comment or take a look at our website, and maybe our next article will be written by you!