A Guide to App Performance With React/Rails

Jeremy Gottfried
Jeremy Gottfried’s tech blog
May 22, 2018

As a full stack developer, it is difficult to wrap your head around how your backend and frontend interact to affect app performance. When building to scale, it is vital that you understand how to improve the performance of a working app. This article will guide you through some basic concepts and steps you can take to improve your app’s performance.

The Backend Bottleneck

If your app stores data in a backend database, then a user's first visit typically triggers a request from the client to your server, which in turn queries the database.

Database Querying

At its core, a Rails JSON API allows your app to retrieve data from a database. The database query is often the first step in data retrieval and also the furthest away from your client, so it presents a big performance bottleneck: a full table scan is very slow compared to retrieving cached data. Without indexing, every row in your table is checked until a match is found. When you have millions of users, that's an extremely slow process.

Database Indexing

Indexing improves the speed of a database query by creating a sorted copy of the data that can be matched quickly. The index points directly back to the original rows in the database, so once your database matches the query against the index, it can grab the data quickly. Indexing is generally done for frequently queried columns such as username. You SHOULD NOT index an entire database: indexed data takes up more memory and makes writes slower, so use it in moderation. When used correctly, it can greatly improve your app's performance.
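The speed difference can be sketched in plain Ruby: a linear scan touches every row, while a sorted index can be binary-searched. This is only a toy illustration of the principle (real databases usually use B-tree indexes, not sorted arrays):

```ruby
# Toy illustration: a linear scan vs. a lookup through a sorted index.
# Real databases use B-tree indexes, but the principle is the same.

users = 100_000.times.map { |i| { id: i, username: "user#{i}" } }

# Without an index: check every row until a match is found, O(n).
def linear_find(rows, username)
  rows.find { |row| row[:username] == username }
end

# Build an "index": usernames sorted once, each entry pointing back to its row.
index = users.map { |row| [row[:username], row] }.sort_by(&:first)

# With the index: binary search, O(log n) comparisons.
def indexed_find(index, username)
  entry = index.bsearch { |(name, _)| username <=> name }
  entry && entry.last
end

linear_find(users, "user99999")   # scans up to 100,000 rows
indexed_find(index, "user99999")  # roughly 17 comparisons
```

Note the trade-off mirrored here: the index costs extra memory and must be rebuilt (or updated) whenever the data changes, which is exactly why you shouldn't index every column.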

It’s easy to create an index in Rails Active Record

To create a new table with an index in Active Record, use this as a template for your migration:

def change
  create_table :products do |t|
    t.string :name
    t.index :name
  end
end

To change an existing table:

def change
  add_index :products, :name
end

JSON Caching

By default, Rails doesn't cache in development mode. In production, cached files are stored in the cache folder of the tmp directory.

Setting up caching in dev mode only requires a small change in our code. First, create a file called tmp/caching-dev.txt

touch tmp/caching-dev.txt 

Set up custom cache_store

By default, Rails caches data in your machine's RAM, which means the cache would not be shared if you scale to multiple machines. To plan for that eventuality, you can use :redis_store (provided by the redis-rails gem), which allows the cache to be shared between multiple servers. In config/environments/development.rb and config/environments/production.rb, change config.cache_store like so:

config.cache_store = :redis_store 

In your development.rb file, you will have to uncomment the config.cache_store line.

You should also add the line:

config.active_job.queue_adapter = :sidekiq

This will allow us to outsource the creation of our caches to a background process, so we don’t burden our server.

You should also have the redis-rails and sidekiq gems installed:

gem 'redis-rails'
gem 'sidekiq'

Reconfigure your API controllers and models

The first thing you should do is add the touch: true option to any belongs_to relationships in your models. This updates the parent record's updated_at timestamp whenever a child record is saved, so the parent knows it has changed. Then we can tell our model to create a cache whenever new data is saved to the database. This is our way of planning ahead: rather than wait for a request from the client, we recreate the cache as soon as data is updated, so users don't have to wait for a new cache when they reload a page after an update. Whenever we update data, we also create a new cache key that points to the new data. For example, posts.maximum(:updated_at) gives us the timestamp of the most recently saved post. Old cache entries are evicted automatically by Redis.
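The cache-key idea can be sketched in plain Ruby: derive the key from the newest updated_at timestamp, so any save (or touch) produces a new key and the stale entry is simply never read again. The User.cache_key class method used in the job below is assumed to do something along these lines; it is a custom method, not part of Rails itself:

```ruby
require "time"

# Sketch of a timestamp-based cache key: any save (or `touch: true`)
# bumps the newest updated_at, so the key changes and `fetch` writes a
# fresh cache entry instead of reading the stale one.
def users_cache_key(updated_at_times)
  "users/all-#{updated_at_times.max.to_i}"
end

t1 = Time.parse("2018-05-22 10:00:00 UTC")
t2 = Time.parse("2018-05-22 10:05:00 UTC")  # a later save

users_cache_key([t1])      # key before the new save
users_cache_key([t1, t2])  # a different key after it
```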

NOTE: I am not writing out all the code for your models. You would also need to set up caching for your User model, etc.

Below is the cache job. Active Job allows you to organize your server's jobs in a queue. This is helpful for intensive tasks like caching, where we want to use a queuing library like Sidekiq.

# app/jobs/create_users_json_cache_job.rb
class CreateUsersJsonCacheJob < ApplicationJob
  queue_as :default

  def perform(*_args)
    users = User.includes(:comments)
    Rails.cache.fetch(User.cache_key(users)) do
      users.to_json(include: :comments)
    end
  end
end

The next step is to reconfigure your controllers: in your index action, wrap the data lookup in Rails.cache.fetch(User.cache_key(users)). This asks the API to check for a User cache and load the data associated with our most recent cache key. fetch is a built-in Rails method that reads and writes caches: if the key exists, fetch reads from the existing cache; if it does not, fetch writes a new cache.
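The read-through behavior of Rails.cache.fetch can be sketched in plain Ruby with a toy in-memory store (this is an illustration of the semantics, not the real implementation):

```ruby
# Toy read-through cache mimicking the behavior of Rails.cache.fetch:
# return the stored value when the key exists; otherwise run the block,
# store its result under the key, and return it.
class ToyCache
  def initialize
    @store = {}
  end

  def fetch(key)
    return @store[key] if @store.key?(key)
    @store[key] = yield
  end
end

cache = ToyCache.new
calls = 0
cache.fetch("users/all") { calls += 1; "big JSON payload" }  # miss: block runs
cache.fetch("users/all") { calls += 1; "big JSON payload" }  # hit: block skipped
calls  # calls is 1: the expensive work happened only once
```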

If you indexed and cached correctly, now when you query your database, it will only be slow the first time you load the index.html page in a session. From then on, it will be very fast.

Paginate your JSON

As your user base grows and your requests start handling a lot of data, you will need to paginate so you don't, say, request a million comments on a post all at once. The Ruby will-paginate gem (together with api-pagination) is great for this. In your Gemfile:

gem 'api-pagination' 
gem 'will-paginate'

In your controller, replace render with paginate:

class Api::V1::PostsController < ApplicationController
  def index
    paginate json: Post.by_date, per_page: 25
  end
end

by_date is a scope method in your model like so:

class Post < ActiveRecord::Base
  scope :by_date, -> { order(posted_date: :desc) }
end

Incorporating this into your caching mechanism may take some extra work.
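Under the hood, pagination boils down to LIMIT/OFFSET arithmetic: page n of size per_page covers the slice starting at offset (n - 1) * per_page. A plain-Ruby sketch of that arithmetic:

```ruby
# Sketch of the arithmetic behind paginated queries:
# page n (1-indexed) with per_page items covers the slice
# starting at offset (n - 1) * per_page.
def paginate(records, page:, per_page: 25)
  offset = (page - 1) * per_page
  records[offset, per_page] || []
end

posts = (1..60).to_a
paginate(posts, page: 1)  # the first 25 posts
paginate(posts, page: 3)  # the remaining 10
paginate(posts, page: 4)  # past the end: empty
```

This is what the gem translates into a SQL `LIMIT ... OFFSET ...` clause, so the database only ever returns one page of rows.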

The FrontEnd Bottleneck

Once you get data from your backend, how do you ensure that you serve it to your client efficiently? Nobody wants to stare at a loading spinner forever!

The PRPL Pattern

PRPL stands for:

  • Push critical resources for the initial URL route.
  • Render initial route.
  • Pre-cache remaining routes.
  • Lazy-load and create remaining routes on demand.

PRPL means that you only need to make one initial fetch and you can pre-cache the data for your routes. This allows you to store data on the client side in a quickly accessible manner and navigate between routes without making additional fetches.

React Router PRPL

A React Router app lends itself naturally to the PRPL pattern. With React and Redux, you can make one initial fetch, store the data in state, and set up many routes that load without fetching more data from the backend.

Lazy-loading means you can defer loading data that users rarely interact with, such as their profile update page or old post history. You can also lazy-load heavy assets like images and videos.

Bundling and Code-Splitting

Bundling combines many script files into one big file, avoiding the latency of fetching many separate scripts. Webpack is a great tool for bundling and minification in React projects. Webpack also supports code-splitting to create multiple bundles, so you can load the important scripts first and defer the scripts that are rarely used. Package managers like yarn and npm manage the bigger libraries your app depends on.

The Virtual DOM

Updates to the regular DOM tend to be slow, mostly because of a lack of precision: a program might update an entire tree when it only needed to update the innerText of a single element. Frontend JS frameworks like React and Angular optimize the precision of DOM updates through a tool commonly referred to as the virtual DOM. The virtual DOM is an abstract representation of the real DOM as a hierarchy of objects. Since object lookup is fast, React can quickly compare a previous version of the virtual DOM to a new version in a process called ‘diffing’. With diffing, React can precisely locate all the DOM nodes that need to be updated and batch those updates for maximal performance. The diffing process is a great optimization for complex apps that share state across many components.
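The diffing idea can be sketched with nested hashes standing in for DOM nodes. This toy walks two trees and collects only the paths whose values changed; React's actual reconciliation algorithm is far more sophisticated, but the payoff is the same: only the changed nodes get touched.

```ruby
# Toy diff over nested hashes standing in for virtual DOM trees.
# Returns the list of paths whose values changed, so only those
# "nodes" would need a real DOM update.
def diff(old_tree, new_tree, path = [])
  changes = []
  (old_tree.keys | new_tree.keys).each do |key|
    old_val = old_tree[key]
    new_val = new_tree[key]
    if old_val.is_a?(Hash) && new_val.is_a?(Hash)
      changes.concat(diff(old_val, new_val, path + [key]))
    elsif old_val != new_val
      changes << (path + [key])
    end
  end
  changes
end

before = { header: { text: "Hello" }, body: { text: "World", count: 1 } }
after  = { header: { text: "Hello" }, body: { text: "World", count: 2 } }

diff(before, after)  # => [[:body, :count]], only one node needs updating
```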

HTTP Caching

HTTP caching dramatically improves the performance of apps. HTTP caching allows your browser to save fetched pages. This is done automatically by the browser, though there are ways to control how it's done. I won't go into detail here, but I am attaching a couple of resources.

Client-Side Caching in React

If a user visits your website, closes the browser, leaves for lunch, and then comes back to the page later, there is no good reason to refetch all the page data again. The browser’s built in localStorage and sessionStorage are great for saving data so you don’t have to refetch later on.

//set
localStorage.setItem('key', 'value')
//get
localStorage.getItem('key')

localStorage.setItem will store data even when a user closes the browser and comes back later. You can conditionally ask for localStorage data in your event listeners or lifecycle methods to tell React whether it should refetch from the backend.

Here’s an example in pseudocode:

const cache = localStorage.getItem('siteData')
if (cache) {
  this.setState({ siteData: JSON.parse(cache) })
} else {
  // fetch data from backend
}

If you want the stored data to expire when the current session ends, you can use sessionStorage, which comes with the same methods as localStorage.

Service Workers

I would be remiss if I didn’t mention service workers. These are scripts that act as a proxy between your app and the network, handling events that require a network connection and enabling offline experiences. They can do things like ‘background sync,’ which defers an action until the user has a stable network connection. They don’t necessarily speed up your application, but they do make your app smarter.

Conclusion

Performance is a HUGE topic with many different areas of expertise. This article barely scratches the surface, but if you incorporate each of the elements we have covered, you will dodge some of the most common performance issues with a growing app. I hope this helps!
