Social Sharing With React and Vue Without Pre-Rendering or SSR
Send crawlers simplified HTML files with relevant meta-tags
Can social media crawlers such as Facebook’s or Twitter’s render your single-page app the way a browser does? Unfortunately, they cannot. Googlebot is the only major crawler that doesn’t stop at your initial, nearly empty index.html: it actually executes your JavaScript. Social media crawlers never run your bundle, so all they see is a page without meaningful meta tags.
Pre-Rendering and SSR Are Not the Only Solutions to Solve This Problem
SSR is a technique in which all the URLs of your app are rendered on your server whenever a user or a crawler/bot hits your back end to ask for a specific URL.
Instead of sending the empty index.html shell of your single-page app, your back end responds with fully rendered HTML, including all meta tags, for the requested URL.
Sounds easy? Well, it can be, in particular for lightweight apps and websites that are built from scratch in Next.js or Nuxt.js. For more complex apps or apps originally built outside such frameworks, you can, however, run into various issues implementing SSR.
Pre-rendering or dynamic rendering
Hence, you might look into the other option typically employed to solve the problem: pre-rendering or dynamic rendering. For static pages that never change at runtime, pre-rendering them before deploying your app is a very simple solution.
As these pages don’t change unless you do so manually and redeploy, you can pre-render them every time you deploy and serve them just as if they were server-side rendered.
Unfortunately, users who share content on social media sites typically don’t share simple static pages such as your terms and conditions or your landing page.
Users want to share content they found: a blog post, an article, or anything else interesting enough to show others. If these pages contain user-generated content, their URLs are necessarily created dynamically, which means pre-rendering at deploy time is not a viable solution.
The answer for user-generated dynamic content is a variation on pre-rendering: dynamic rendering. Third-party services such as prerender.io let you pre-render tens of thousands of URLs and add newly created URLs on the fly.
These services also crawl all of the URLs regularly to pick up updates. Compared to SSR it is not a perfect solution, because pre-rendered content inevitably lags a few days behind the current state of a page, but for most SPAs it is easy enough to set up and makes social sharing work without problems.
A Third Solution That Does Not Involve SSR Frameworks or Pre-Rendering/Dynamic Rendering
When I searched this topic to implement our first iteration of Epiloge, I came across a reference on Stack Overflow that pretty much said: “Just give social media crawlers what they want — the meta tag information.”
I was intrigued that this might be an even easier solution to the problem. However, it has both drawbacks and benefits.
If you do not worry about search engines other than Google, and only want social sharing on large social media sites such as Facebook and Twitter, this solution might be for you.
It is pretty much a simplified server-side rendering solution that does not require Next.js or Nuxt.js. You create HTML pages with a <head> tag, a simplified body, and all the required meta tags that the social sharing crawlers can read.
Implementing This Third Solution in a Node Back End
We are using Node.js with Express as our back-end server at Epiloge. If you are not using Node.js, you should still be able to use the principles described below. They are straightforward to integrate into any back end.
1. Serve your index.html from your back end for this solution to work
If you have your Vue.js, React.js, or other SPA deployed and accessed by users directly from a cloud platform such as Amazon S3 without having the user first access your back end, the solution presented here won’t work.
You need all traffic accessing your website, in our case our domain www.epiloge.com, to hit your Node.js server first. This way, you can check the user agent of incoming traffic for crawlers, while you serve your index.html as a static file to all other users.
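A minimal sketch of that user-agent check, assuming an Express server (the bot list, route wiring, and file paths are illustrative, not a definitive setup):

```javascript
// User agents of the two crawlers handled in this example (extend as needed).
const BOT_UA = /facebookexternalhit|Facebot|Twitterbot/i;

function isSocialBot(userAgent) {
  return BOT_UA.test(userAgent || '');
}

// In the Express server, this could be wired up roughly like so
// (sketch only; nonSPArouter is the crawler-specific router from the next step):
//
//   app.use((req, res, next) => {
//     if (isSocialBot(req.headers['user-agent'])) return nonSPArouter(req, res, next);
//     next();
//   });
//   app.get('*', (req, res) => res.sendFile(path.join(__dirname, 'dist', 'index.html')));
```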
Insert a new Express route, calling it something like nonSPArouter, that only deals with crawlers hitting your server. You can check the user agent of each request against the most common crawlers.
In the example above, I only included Facebook and Twitter, but you could add a lot more, such as LinkedInBot, Quora, Pinterest, Slackbot, WhatsApp, Telegram, etc.
2. For static pages, hardcode the page and serve it with your preferred template engine
We are using Jade as a templating engine. In our Node.js server file, we install Jade with npm install jade and add app.set('view engine', 'jade') to the Express configuration.
This is all you need to integrate Jade and have it work as a template engine. Create a bot.jade file under /views in your back-end structure and you are good to go to serve static HTML files.
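For a hardcoded static page, such a bot.jade might look like this (all titles, texts, and URLs are illustrative placeholders; the og: properties are Facebook’s Open Graph tags and the twitter: names are Twitter Card tags):

```jade
doctype html
html
  head
    title About our site
    // Open Graph tags read by Facebook's crawler (and several others)
    meta(property='og:title', content='About our site')
    meta(property='og:description', content='A short description of the page.')
    meta(property='og:image', content='https://example.com/cover.jpg')
    // Twitter Card tags
    meta(name='twitter:card', content='summary_large_image')
    meta(name='twitter:title', content='About our site')
    meta(name='twitter:description', content='A short description of the page.')
  body
    h1 About our site
    p A short description of the page.
```

In the crawler route you would serve it with res.render('bot'). For the dynamic pages of the next step, the hardcoded strings become Jade interpolation, e.g. meta(property='og:title', content=title), with the variables passed to res.render.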
3. For dynamic pages, you need to get data from your database first and then include it in the HTML you render with Jade
For our app, we wanted social sharing to particularly work for URLs of projects, papers, and articles people share on Epiloge.
For this to work, we first need to get the title, the start of the text included in the body, and the relevant cover image, if any, to set the meta tags in our bot.jade template file.
The Jade template engine renders the template filled with the information we get from the database. It then sends the rendered static HTML to the crawler, which can parse any information required for social sharing from its content.
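Put together, a crawler route for a dynamic article URL might look roughly like this. The Article model, its field names, and the route path are assumptions about your schema, not Epiloge’s actual code; the essential part is building the template variables from the database record:

```javascript
// Build the variables for bot.jade from a database record. The description
// is whitespace-normalized and truncated so the meta tag stays a sensible length.
function buildMeta(article) {
  const text = (article.body || '').replace(/\s+/g, ' ').trim();
  return {
    title: article.title,
    description: text.length > 200 ? text.slice(0, 197) + '...' : text,
    image: article.coverImage || 'https://example.com/default-cover.jpg', // placeholder fallback
  };
}

// Hypothetical Express route using a Mongoose-style model:
//
//   nonSPArouter.get('/article/:id', (req, res) => {
//     Article.findById(req.params.id, (err, article) => {
//       if (err || !article) return res.status(404).end();
//       res.render('bot', buildMeta(article));
//     });
//   });
```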
And We Are Already Done
And that’s it: without a lot of code and without any third-party framework, plugin, or service, we can let social sharing sites access data from the URLs that are shared. The trick is simply to hand them a rudimentary HTML file with meta tags from which they can scrape the information they need.
Is this solution suitable and recommended for every app you build and deploy?
Probably not. If you have a more complex app in mind with social sharing and search engine indexing for all search engines as a critical part of success, you should probably look into frameworks such as Nuxt.js or Next.js.
If you have only a few hundred or at most a few tens of thousands of dynamic URLs and don’t want to use SSR, look into dynamic rendering.
Oh, and by the way, dynamic rendering is something Google specifically endorses, so don’t worry about whether you’re allowed to do it; you are.
If you already have a Vue.js or React.js app built, don’t want to invest a lot of time and effort integrating SSR into your back end, don’t want to bother with dynamic rendering, or need crawlers to access current data at all times, this third solution may be for you.