3 Essential Tools to Boost your React App’s SEO

Preston Wallace
14 min read · Jan 4, 2019


You’ve created an amazing Web App. You know it’s amazing, and everyone that sees it is quite impressed. But you have one problem: you aren’t getting any traffic to your app. Why?

The answer: Search Engines React differently to React (Get it…? React… Ok, moving on).

My React SEO Demo App (referenced in this post)

Demo App Code on GitHub:
https://github.com/wallacepreston/react-seo-demo
Demo App Deployed on Heroku:
https://react-seo-demo-app.herokuapp.com
YouTube Companion Video:
https://www.youtube.com/watch?v=rYfV2gs7VgQ

Why is SEO Important?

In today’s world, Search Engine Rankings are mind-bendingly important: Research shows that “95% of web traffic goes to sites on Page 1 of Google.” That means if you want your app to be “findable” by the people who will use it, you will have to optimize it for Google and other search engines.

What’s more, according to a study by Advanced Web Rankings, over 65% of all web traffic from a search engine results page (SERP) goes to the top 5 ranked listings. The stats tell the truth: It’s important to rank well on Google!

So how do I get on the first page of Google?

Everyone wants to know the answer to that million-dollar question. Without going into great detail, there are a few things to keep top-of-mind when trying to improve your web app’s ranking. Note: This is not an exhaustive list, and these are NOT in order of importance or “weight” for Search Engines!

  1. Content, Optimized for your Keywords
  2. Domain Name and Age
  3. Page Speed and Mobile Friendliness
  4. Inbound Links
  5. An Accessible URL
  6. Technical SEO (Title, Meta Tags, Image Alt Tags)
  7. Content Viewable By Search Engines (i.e. Google can crawl your pages)

While SEO in general is very important, there have been many authoritative articles written on the subject, and that is not the topic of discussion here. What I am going to focus on is the last three on my list: Accessible URL, Technical SEO, and making your site Googlebot Friendly.

Why is React Different?

There are a few problems for search engine rankings when it comes to a vanilla React app. Mainly, these problems are:

  1. Single URL for the entire app (boo! hiss!)
  2. Meta Tags not being set correctly (like, did you just forget?)
  3. JS not loading correctly/in time/at all. (where is your content hiding…?)

To understand why these are issues, we must first understand how React works at its basic level.

In essence, to build a Single-Page Application (SPA), we create an app that creates its own “virtual” DOM and inserts it into a single tag of a static HTML file. From there, our entire app is created and loaded. There may be different URIs that are used for API requests or callback URLs from OAuth or other applications, but the “meat” of our site is loaded as one page with multiple “views” called Components in React.

The index.html document that my app is loaded from has no body other than the div with the id of app.
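For reference, the whole file looks something like this (a minimal sketch; the exact markup and bundle name in your project will differ):

```html
<!DOCTYPE html>
<html>
  <head>
    <meta charset="utf-8">
    <title>React SEO Demo</title>
  </head>
  <body>
    <!-- The entire React app is rendered into this one div -->
    <div id="app"></div>
    <script src="/bundle.js"></script>
  </body>
</html>
```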

The Good

This is AMAZING in React because it allows a user to dynamically control the page/app being used. Views can change instantaneously since components re-use code wherever possible, with no page-refresh overhead.

But you already knew that about React— otherwise, you wouldn’t be here!

The Bad

There are a few issues with the way a vanilla React app is set up:

Problem 1: One URL to Rule them all…

The problem (or one problem) a Single-Page Application poses for SEO is that, without any modification, it only has one URI. If there is truly only one “page” (HTML file), there can only be one URI. Furthermore, all metadata is set between the <head> tags. Our React components are rendered in the <body> section, thus making it impossible to change meta tags for different views, even if we do manage to change the URI.

It has been said: “Your React Single Page Application should not be a Single URL Application”. When we create a React app, Google will inherently see it as what the file is: A single page. From this page on my SEO Demo App, you can see how the URL is NOT updated when components are switched via certain methods (in this case, a conditional depending on a variable in state).

When the ‘Toggle’ button is clicked, the view switches to show the next component, but…

Oops…

Notice how the URI does NOT change? From a user-friendly perspective, this is somewhat annoying, because you can never share a link to my Basketball Cat page (bummer, especially if you like cats and basketball).

But for SEO, this is absolutely terrible! Imagine if our entire app was built this way! The only page Google will index is the homepage! This is a problem with switching components in this way, and it’s one of the big reasons to use React Router (discussed later on).

Problem 2: Meta Tags

If we want our app to show well in Google’s listings, we need unique page titles, and (less important) unique descriptions for each page. Otherwise, all the pages in our site will show up with the same listing in Google.

React doesn’t do this for us, and there is no way to change these tags in React, out-of-the-box. So we need something to help with that. (Hint: Keep reading)

Problem 3: JS Loads Differently (or not at all)

In the past, Google did not index JavaScript at all, opting only to crawl static HTML pages. Bummer, because we want to be able to do Awesome stuff!

Our JS Needs to load for Awesome things to happen! (Image Credit: DerickBailey.com)

That has since changed, and Google now runs JavaScript. Still, there are some drawbacks to Single Page Applications that are not also rendered Server-Side:

  1. Some obscure search engines do not crawl JS at all.
  2. Depending on how your JS is transpiled or polyfilled, Google’s bot still may not be able to run it. (more on this later)
  3. Single Page Applications are crawled slower.
  4. Some content may not be seen by Google (more on this below)
  5. …and other issues.

To go Isomorphic or not to go Isomorphic? That is the Question.

So how do we fix these issues with React’s inherent problems with SEO? There is a lot of talk about SSR and Isomorphic apps. In truth, Google has stated that it is best to dynamically render JS content. Many times, you will want/need to render your app Server Side (making it an Isomorphic App) as well as on the client side (the way React was originally intended).

But who wants to do that? That’s what I thought…

Creating an Isomorphic React App is not always necessary for more application-based apps (versus blogs or e-commerce sites). If our app is something like a game, a functional program, or a social networking type site, we might not need to go to the trouble of setting up Server Side Rendering.

If this is the case, our site’s purpose is not to glean as many Search Engine clicks as possible. However, we do still want it to be found as well as possible, so let’s focus on improving our app’s SEO without using an Isomorphic app (no SSR).

So how do we improve our app’s Search Engine ranking already!?

This is the moment you’ve been waiting for. The reason you’re here: To learn about three solutions to React’s inherent SEO problems:

  1. React Router
  2. React Helmet
  3. F̶e̶t̶c̶h̶ ̶a̶s̶ ̶G̶o̶o̶g̶l̶e̶ ( EDIT: transitioned to URL inspection in the new Google Search Console)

SOLUTION: Tool #1, React Router

Used by most React developers, React Router is a library for handling routing of a React app. Along with it come two important ways of handling routing: HashRouter and BrowserRouter.

Without going into too much of the specifics, HashRouter is more backward compatible, but historically it has not given Google a new URI to index as a new page (or view) for each route. Google seems to be making some improvements with hash URLs, indexing some, but they have also said not to use hash fragments in URLs for pages you want indexed. So try not to!
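Concretely, here is the difference in the URLs each router produces (example.com is just a placeholder):

```
HashRouter:    https://example.com/#/cats   <- historically treated as part of one page
BrowserRouter: https://example.com/cats     <- a distinct, indexable URL
```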

In my demo app, the component I render using HashRouter is the following:

import React, { Component } from 'react';
import { HashRouter } from 'react-router-dom';
import HashLinks from './hash-links';

class UsingHashRouter extends Component {
  render() {
    return (
      <div>
        <h1>Hash Router</h1>
        <HashRouter>
          <HashLinks />
        </HashRouter>
      </div>
    );
  }
}

export default UsingHashRouter;

Notice, all we did was import {HashRouter} from React Router. Then, inside the HashLinks component, we use Switch and Route (also from React Router) to render components based on the URL:

import React, { Component } from 'react';
import { Switch, Route, Link } from 'react-router-dom';
import Dogs from './dogs';
import Cats from './cats';

class HashLinks extends Component {
  render() {
    return (
      <div>
        <Link to="/dogs/">View Dogs</Link> |
        <Link to="/cats/">View Cats</Link>
        <Switch>
          <Route path='/dogs' component={Dogs} />
          <Route path='/cats' component={Cats} />
        </Switch>
      </div>
    );
  }
}

export default HashLinks;

BrowserRouter, while less backward compatible (it doesn’t support IE 9 and lower), creates a new URI for each Route, without the hash. Furthermore, it still allows Google to index “rich snippets” using HTML element IDs. It’s very similar to HashRouter: anywhere that HashRouter is used, replace it with BrowserRouter:

import React, { Component } from 'react';
import { BrowserRouter } from 'react-router-dom';
import BrowserRouterLinks from './browser-router-links';

class UsingBrowserRouter extends Component {
  render() {
    return (
      <div>
        <h1>Browser Router</h1>
        <BrowserRouter>
          <BrowserRouterLinks />
        </BrowserRouter>
      </div>
    );
  }
}

export default UsingBrowserRouter;

And inside the BrowserRouterLinks component:

import React, { Component } from 'react';
import { Switch, Route, Link } from 'react-router-dom';
import Dogs from './dogs';
import Cats from './cats';

class BrowserRouterLinks extends Component {
  render() {
    return (
      <div>
        <Link to="/routing/browser-router/dogs/">View Dogs</Link> |
        <Link to="/routing/browser-router/cats/">View Cats</Link>
        <Switch>
          <Route path='/routing/browser-router/dogs/' component={Dogs} />
          <Route path='/routing/browser-router/cats/' component={Cats} />
        </Switch>
      </div>
    );
  }
}

export default BrowserRouterLinks;
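One caveat with BrowserRouter (not shown in the demo code): because each route is now a “real” path, your web server must be configured to serve index.html for any route the client handles, or a hard refresh on /routing/browser-router/cats/ will 404. Here is a minimal sketch of that fallback decision in plain JavaScript (the function name and rule are my own, not part of React Router; real servers usually handle this via static-server config):

```javascript
// Decide which file to serve for an incoming request path.
// Paths whose last segment contains a dot are treated as static assets
// and served as-is; every other path falls back to index.html so the
// client-side router can take over.
function resolvePath(requestPath) {
  const lastSegment = requestPath.split('/').pop();
  const isAsset = lastSegment.includes('.');
  return isAsset ? requestPath : '/index.html';
}

resolvePath('/bundle.js');                    // served as-is
resolvePath('/routing/browser-router/cats/'); // falls back to /index.html
```

The same idea appears as the `historyApiFallback` option in webpack-dev-server, or a catch-all route in whatever server you deploy behind.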

SOLUTION: Tool #2, React Helmet

Most React developers use React Router, but fewer know about or use React Helmet. This is a library that allows us to set the HTML metadata in the head of any given component. The head tags we are most interested in here look like this:

<head>
  <title>This is Where You Set The Page Title</title>
  <meta name="description" content="This is an example of a meta description. This will often show up in search results, though many search engines generate their own.">
</head>

The above code will produce the following SERP (Search Engine Results Page) listing:

An Example SERP (Search Engine Results Page) listing produced by the above code.

By default, React has no way to set metadata or title tags in the head for each component. Thus, this is how two of my Demo App’s SERP listings look without changing anything:

Notice that the two listings are virtually IDENTICAL (except for “cats” in the URL and slightly different content). Google does its best (which is pretty good!) to generate these from the content of your pages, so the snippet text may change. But the title (in blue in the SERP listing) will not change, and the description tag is still important for SEO purposes. To change the content inside the <head> tags, we will use React Helmet. With this npm package, we can set our own <title> and <meta> tags, creating something like this:

How to Install React Helmet

So let’s get this running: First things first, install React Helmet via npm:

npm install react-helmet

Then, it’s as simple as adding our desired tags inside a <Helmet> component in our exported JSX. Here’s an example:

import React from 'react';
import { Helmet } from 'react-helmet';
import ChildComponent from './child-component'; // wherever your child component lives

const App = () => (
  <div>
    <Helmet>
      <title>Here's the Title!</title>
      <meta name="description" content="This is what you want to show as the page content in the Google SERP Listing" />
    </Helmet>
    <h1>My Amazing React SEO Page</h1>
    <p>Hello World!</p>
    <ChildComponent />
  </div>
);

export default App;

Don’t forget to import {Helmet} at the top of the file!

Now, behind the scenes, Helmet sets our title and description tags. We just need to put them inside the JSX we’re exporting, and Helmet injects them into the <head> for us.
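One pattern that works well with Helmet is deriving the tags from route data instead of hard-coding them in every component, so each view is guaranteed a unique listing. A sketch (SITE_NAME and pageMeta are my own hypothetical helpers, not part of react-helmet):

```javascript
// Build per-page <title> and <meta name="description"> values from
// route data, so every view gets a unique SERP listing.
const SITE_NAME = 'React SEO Demo'; // hypothetical site name

function pageMeta(pageTitle, description) {
  return {
    title: `${pageTitle} | ${SITE_NAME}`,
    // Search engines typically truncate descriptions around 155-160 chars
    description:
      description.length > 155
        ? description.slice(0, 152) + '...'
        : description,
  };
}

// Then, in a component:
//   const meta = pageMeta('Cute Cats', 'Pictures of cats playing basketball.');
//   <Helmet>
//     <title>{meta.title}</title>
//     <meta name="description" content={meta.description} />
//   </Helmet>
```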

SOLUTION: Tool #3, F̶e̶t̶c̶h̶ ̶a̶s̶ ̶G̶o̶o̶g̶l̶e̶ ( EDIT: transitioned to URL inspection in the new Google Search Console)

EDIT: This Google feature has transitioned to URL Inspection in the new Google Search Console. Screenshot:

New Google Search Console, with URL inspection

Many times with React, there will be issues with the Googlebot fetching our pages. If our JavaScript takes too long to load (e.g., asynchronous calls with large data sets) or if we don’t have our polyfills set up correctly, the Googlebot will not be able to render the content, because it can’t even run the JS.

Bummer. Why is this?

The Googlebot uses an older version of the rendering engine than their very own Chrome browser (whaaaat?), meaning just because your JavaScript runs in your browser doesn’t mean the Googlebot will be able to run it.
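At the time of writing, Googlebot’s renderer was based on Chrome 41, which predates much of ES2015+. If you transpile with Babel’s preset-env, a browserslist target along these lines helps ensure the bot can run your bundle (a sketch; adjust the queries to your own build setup):

```json
{
  "browserslist": [
    "chrome >= 41",
    "> 0.5%",
    "not dead"
  ]
}
```

This goes in package.json (or a .browserslistrc file), where @babel/preset-env and Autoprefixer will pick it up.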

Disappointed? Me too, but never fear: There is a solution!

Fetch As Google

Enter the secret weapon of SEO experts

(chuckles devilishly)

OK, SEO experts need a lot more than this to be classified as true masterminds, but that is beside the point…

This handy little tool “[…] enables you to test how Google crawls or renders a URL on your site.” To get set up, we open Google Search Console, click “ADD A PROPERTY”, and enter our URL.

This will take us to a new page that gives us a file to add to our site. It needs to be at the root, so the /public/ directory is a great spot for it (depending on how your app hierarchy is set up). Once we’ve uploaded the file and verified ownership, we have access to a plethora of tools, one of which is (you guessed it) Fetch as Google:

Here, we simply add the URI that we want to test, and click “FETCH AND RENDER”. After clicking the checkbox, we are taken to a page that shows the difference between how the user sees the page and how Google sees it.

Cool, huh?

We can use this for any URL in our app, so once you’ve “connected” your site, you’re free to start testing!

Here, there are basically three scenarios:

  1. Neither the left box nor the right one has any content (uh-oh)
  2. The left box has less/different content from the right box (close, but no cigar)
  3. They both have the same content (great! …kind of).

If we get either of the first two, we should do our best to make sure Google can see what we see when we open a browser and go to that URI ourselves. However, even if both boxes show the same thing, it might not be what we see in our own browser window.

On my demo app, I’ve created an example of how data loaded via asynchronous calls may not be visible to the Googlebot. All I did was render nothing before the component mounts, then, once it does, load one message immediately and set ten more timeouts at different intervals. This shows what happens when our data takes longer to load (anywhere from 500ms to 10 sec). Here’s the code if you’re curious:

class IncrementalLoading extends Component {
  constructor() {
    super();
    this.state = {
      message1: '',
      message2: '',
      message3: '',
      message4: '',
      message5: '',
      message6: '',
      message7: '',
      message8: '',
      message9: '',
      message10: '',
      message11: '',
    };
  }

  componentDidMount() {
    this.setState({
      message1: 'Message 1 (immediately after component mounts): Googlebot will always crawl'
    });
    setTimeout(() => {
      this.setState({
        message2: 'Message 2 (500ms): Googlebot will almost certainly crawl'
      });
    }, 500);
    setTimeout(() => {
      this.setState({
        message3: 'Message 3 (2 sec): Googlebot will probably crawl'
      });
    }, 2000);
    setTimeout(() => {
      this.setState({
        message4: 'Message 4 (3 sec): Googlebot less likely to crawl'
      });
    }, 3000);
    setTimeout(() => {
      this.setState({
        message5: 'Message 5 (4 sec): Googlebot may or may not crawl'
      });
    }, 4000);
    setTimeout(() => {
      this.setState({
        message6: 'Message 6 (5 sec): Googlebot MIGHT crawl'
      });
    }, 5000);
    setTimeout(() => {
      this.setState({
        message7: 'Message 7 (6 sec): Googlebot probably will NOT crawl'
      });
    }, 6000);
    setTimeout(() => {
      this.setState({
        message8: 'Message 8 (7 sec): Googlebot almost certainly will NOT crawl'
      });
    }, 7000);
    setTimeout(() => {
      this.setState({
        message9: 'Message 9 (8 sec): Googlebot definitely will NOT crawl'
      });
    }, 8000);
    setTimeout(() => {
      this.setState({
        message10: 'Message 10 (9 sec): Googlebot is on to the next page by now.'
      });
    }, 9000);
    setTimeout(() => {
      this.setState({
        message11: 'Message 11 (10 sec): If you\'re seeing this, you\'re definitely not Google.'
      });
    }, 10000);
  }

  render() {
    return (
      <div>
        <h1>Data Loaded</h1>
        <h2>Simulates data that takes differing amounts of time to fetch.</h2>
        <h4>{this.state.message1}</h4>
        <h4>{this.state.message2}</h4>
        <h4>{this.state.message3}</h4>
        <h4>{this.state.message4}</h4>
        <h4>{this.state.message5}</h4>
        <h4>{this.state.message6}</h4>
        <h4>{this.state.message7}</h4>
        <h4>{this.state.message8}</h4>
        <h4>{this.state.message9}</h4>
        <h4>{this.state.message10}</h4>
        <h4>{this.state.message11}</h4>
      </div>
    );
  }
}

Below, you can see a rendering of my own page:

As you can see, there are 11 messages on my page, but only 6 of them were crawled by Google. If Google can’t see them, they will not be indexed, so they will not show up in our search results. (shoot, man)

There are other (slightly more complicated) ways around this, one of which is Server Side Rendering, but I will not get into that right now. For now, the important thing to remember is that, when doing anything asynchronous, we must be careful what is loaded when, since how long it takes will have a big effect on how the googlebot sees (or doesn’t see) our content.

Without SSR, we can still make this better. Wherever possible, we should load all important-to-SEO content first, before any asynchronous calls are made. For example, on my SEO Demo App’s cute animals page, I render the titles of the pictures before I load the pictures themselves, so when we look at how Google fetches this page…

We see that the titles are ALL loaded, even though the data of messages 7 and onward are NOT loaded. This way, Google will index the titles, hopefully gleaning more link-juice from the keywords in the titles. That way, people can find our cute animals!
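The pattern above can be sketched like this: keep everything that is known synchronously (the titles) in the initial state, and leave only the slow, fetched parts (the images) empty until the async call resolves. The names and data here are made up for illustration:

```javascript
// Titles are known up front, so they render (and get crawled)
// immediately; only the image URLs arrive later via an async call.
const KNOWN_TITLES = ['Basketball Cat', 'Cute Dogs']; // hypothetical data

function initialState() {
  return {
    titles: KNOWN_TITLES, // crawlable on the very first render
    imageUrls: [],        // filled in once componentDidMount's fetch resolves
  };
}

// In the component: this.state = initialState() in the constructor,
// then this.setState({ imageUrls }) when the data arrives.
```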

(awwwww….)

What Next?

If your app does depend heavily on SEO, and you’re ready to take it to the next level, try out SSR (Server Side Rendering) to create an Isomorphic app.

There is far more to discuss in the way of SEO, but for now, we’ll stop with the three tools we’ve used:

  1. React Router: Gives us a unique URL for each of the views in our app.
  2. React Helmet: Allows us to set title, description and other header tags.
  3. Fetch as Google (EDIT: now URL Inspection): Helps troubleshoot Google’s ability to view our content.

Just using these three essential tools, you will be able to boost your app’s ranking in Google and other search engines.

I find Search Engine Optimization challenging and fun. It’s akin to “buying” real estate: The more you work on it, the more traffic to your site you “own”. It’s challenging at times, but chip away at it, and you’ll eventually get the results you’re looking for!


Preston Wallace

I’m a Full-Stack Software Engineer at Liquidity Services and a Teaching Fellow at FullStack. When not developing, I enjoy California with my beautiful wife and children.