A deep dive into ShareChat Progressive Web App (PWA)

Saket Diwakar
13 min read · Apr 12, 2020


Introduction

The ShareChat PWA began with the goal of capturing traffic coming from search engines and converting it into Play Store installs. During the journey, however, we changed course and decided to build a full-fledged web app: a standalone platform where people can consume, share and engage with content the same way they do on our existing Android app. Ever since, we have been working on features, scalability, optimisations, SEO and an immersive user experience as close as possible to the native apps. We have already come a long way, and in this article I will walk you through that journey.

Tech Stack

Our frontend tech stack consists of one of the most popular combinations out there: React, Redux, Redux-Saga and Webpack, with NodeJS on the backend. For the most optimized and performant CSS, we use the Atomic CSS library. Apart from this, we use React Router v5, React Infinite, React Loadable etc. for various functionalities, some of which I will talk about in this article.

Why Atomic CSS and not Styled-components?

Like Atomic CSS, CSS-in-JS libraries such as Styled-components and Styletron solve the long-standing problems of CSS (lack of scoping, implicit dependencies, specificity ordering, bloat) and give developers dynamic styling, dead code elimination, server-side rendering, critical CSS and more out of the box. However, the big downside is that we need to write CSS properties inside the JS components all the time. Here is an example:

const Button = styled.a`
  display: inline-block;
  border-radius: 4px;
  padding: 10px;
  margin: 10px 5px;
  width: 100%;
  background-color: transparent;
  color: white;
  ${props => props.primary && `
    background-color: white;
  `}
`
...
<Button
  href="https://sharechat.com"
  primary
>ShareChat</Button>

The same thing can be achieved with Atomic CSS as:

const Button = ({ primary, children, ...props }) => {
  const classes = `D(ib) Bdrs(4px) P(10px) My(10px) Mx(5px) W(100%) C(white) ${ primary ? `Bgc(w)` : `Bgc(t)` }`
  return (
    <a
      className={ classes }
      { ...props }
    >{ children }</a>
  )
}
...
<Button
  href="https://sharechat.com"
  primary
>ShareChat</Button>

which looks clean, concise and readable, and makes the developer’s life easier in many ways. We haven’t found any downside with this library yet. However, it’s still up to you to do a POC beforehand and decide accordingly.

Performance Optimizations

According to a Google study, 40% of people abandon a website that takes more than 3 seconds to load. Moreover, a 1 second delay in page response can result in a 7% reduction in conversions.

Yeah, optimizations have been a big part of our journey so far. Be it entry bundle size, images or re-rendering, we have experimented a lot and achieved a lot. We have chosen Time to Interactive (TTI) as our primary performance metric because it plays a crucial role in the user experience and overall app performance. Following are the tips and tricks that we have implemented:

1. Bundle Size

The entry bundle size has always been at the top of our optimization to-do list, as this is the code that gets downloaded first, irrespective of the route. The most obvious thing to implement here is splitting the bundle into lazily loadable smaller chunks which can be downloaded later on demand. We have used the webpack-supported dynamic import syntax along with the React Loadable library, which comes with server-side support too! This made our life really easy once we split our bundle based on different components and routes.

Here is an example:

const LoadableButton = Loadable({
  loader: () => import('./Button'), // webpack dynamic import
  loading: Loading, // shown while the chunk is being fetched
})

const MyComponent = () => {
  const [showButton, setShowButton] = useState(false)

  const onClick = () => {
    setShowButton(true)
  }

  return (
    <div>
      <button onClick={ onClick }>
        SHOW LAZY LOADED BUTTON
      </button>
      { showButton && <LoadableButton/> }
    </div>
  )
}

Next in line for us was making use of Webpack’s Tree Shaking ability to eliminate unused library code. In a nutshell, this is how it works:

// Bad Practice (will add the entire library to our bundle)
import _ from "lodash"
const obj = { src: "https://picsum.photos/200/300" }
const src = _.get(obj, "src")

// Good Practice (tree-shaking friendly)
import get from "lodash/get"
// (a named import like `import { get } from "lodash"` only tree-shakes
// reliably with an ES-module build such as lodash-es)
const obj = { src: "https://picsum.photos/200/300" }
const src = get(obj, "src")

If you’re using babel-preset-env, it will by default convert ES6 modules into the more widely compatible CommonJS modules, which makes it hard for Webpack to shake the code tree. So do not forget to configure babel-preset-env to leave ES6 modules alone. Wherever you configure Babel (be it .babelrc or package.json), just make this one-line change:

{
  "presets": [
    ["env", {
      "modules": false
    }]
  ]
}

You can read more about it in detail in this blog by Google.

Furthermore, we have extensively used the bundle analysis tool for webpack called webpack-bundle-analyzer, which visualizes both the uncompressed and gzipped sizes of webpack output files in an interactive zoomable treemap. This makes it really easy to figure out which files and code are getting unnecessarily added to the bundles.

For this, all you need to do is add the following lines in your webpack config file:

const BundleAnalyzerPlugin = require('webpack-bundle-analyzer').BundleAnalyzerPlugin;

module.exports = {
  plugins: [ new BundleAnalyzerPlugin() ]
}

Now, just run your build as usual and it will automatically open a browser tab showing something like this:

Image credit: https://www.npmjs.com/package/webpack-bundle-analyzer

2. Defer Loading of Images

When there is a list of images in the above-the-fold content of the first DOM render, setting the image src only after the component has mounted makes a lot of difference.

For example, in our languages list page, just changing this:

const Parent = () => {
  return (
    <div>
      { ListofImages.map(src => (
        <img src={ src } />
      )) }
    </div>
  )
}

to this:

const Image = (props) => {
  const [src, setSrc] = useState(null)
  useEffect(() => {
    // set the real src only after mount, freeing up the first paint
    setSrc(props.src)
  }, [])

  return (
    <img src={ src } />
  )
}

const Parent = () => {
  return (
    <div>
      { ListofImages.map(src => (
        <Image src={ src } />
      )) }
    </div>
  )
}

made this difference in PageSpeed score and metrics.

(Before deferred loading)
(After deferred loading)

As you can see, the TTI metric dropped from 8.5s to 6s. That is still way more than the ideal time, but a great improvement nonetheless.

3. Lazy Loading of Images

When there is a list of images in the below-the-fold content of the first DOM render, loading each image only after it enters the viewport makes a huge difference (especially if your app contains scrollable lists of thousands of images). For this, we have used the Intersection Observer API, which has decent support across all major browsers.

This is how we can implement this,

const LazyImage = (props) => {
  const [visible, setVisible] = useState(false)
  const ref = useRef()
  useEffect(() => {
    // one observer per image instance
    const intersectionObserver = new IntersectionObserver(
      ([entry]) => {
        if (entry.isIntersecting && !visible) {
          setVisible(true)
        }
      }
    )
    // start observing
    intersectionObserver.observe(ref.current)
    return () => {
      intersectionObserver.disconnect()
    }
  }, [visible])
  return (
    <div ref={ ref }>
      <img src={ visible ? props.src : null } />
    </div>
  )
}
...
<LazyImage src="https://picsum.photos/200/300" />

4. SSR

Server-side rendering, aka SSR, helped us reduce the first load time of pages significantly. It also helped pages get indexed faster for SEO purposes, since not all search engine crawlers support JavaScript (see this blog post for more information).

On the server side, you will often need to fetch data before rendering your component and then pass that data to the client side so that the components are in sync. We have used the react-router-server library for this purpose and it’s been working fine for us.
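The hand-over itself is a well-known pattern: serialize the server-fetched state into the HTML so the client store can boot from it. A minimal sketch (not our exact react-router-server setup; `renderPage` and `__PRELOADED_STATE__` are illustrative names):

```javascript
// Serialize state for embedding in a <script> tag.
// Escaping "<" prevents a malicious string from closing the tag early.
const serialize = (state) =>
  JSON.stringify(state).replace(/</g, '\\u003c')

// Wrap the server-rendered markup and the preloaded state in a page.
const renderPage = (appHtml, preloadedState) => `<!doctype html>
<html>
  <body>
    <div id="root">${appHtml}</div>
    <script>window.__PRELOADED_STATE__ = ${serialize(preloadedState)}</script>
    <script src="/bundle.js"></script>
  </body>
</html>`

// On the client, the Redux store would then be created with:
//   const store = createStore(rootReducer, window.__PRELOADED_STATE__)
```

This keeps the client’s first render in sync with the server markup, so React hydrates instead of re-rendering from scratch.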

For handling the document head on both client and server side, we are using the thread-safe fork of react-helmet called react-helmet-async. According to their documentation,

react-helmet relies on react-side-effect, which is not thread-safe. If you are doing anything asynchronous on the server, you need Helmet to encapsulate data on a per-request basis, this package does just that.

5. Prevent Re-renders

React’s reconciliation algorithm for DOM updates is pretty complex, and can be costly if the application is monolithic. Therefore it’s our responsibility to ensure that application views are not re-rendered unless we want them to be. For this, React suggests using keys wisely, React.PureComponent and shouldComponentUpdate for class components, and React.memo (along with the useMemo and useCallback hooks) for functional components.
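Under the hood, React.memo and PureComponent bail out of a re-render when a shallow comparison of previous and next props finds no change. A rough sketch of that comparison (a simplification, not React’s actual source):

```javascript
// Shallow props comparison: values are compared by reference (Object.is),
// so a "new but deep-equal" object or array still fails the check.
const shallowEqual = (prev, next) => {
  const prevKeys = Object.keys(prev)
  const nextKeys = Object.keys(next)
  if (prevKeys.length !== nextKeys.length) return false
  return prevKeys.every(key => Object.is(prev[key], next[key]))
}

const list = [1, 2, 3]
shallowEqual({ list }, { list })            // true: same reference
shallowEqual({ list }, { list: [1, 2, 3] }) // false: new array reference
```

This is exactly why the selector examples below matter: a selector that manufactures a new reference on every call makes this check fail every time.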

Now, if you’re also using Redux and selectors to read data from the store like we do, you need to ensure that the selectors return the same reference when the state hasn’t changed, otherwise they will cause unnecessary re-renders. For example, the following selectors always return a new reference and are hence a bad practice:

// Example 1: always returns a new reference
const getList = (state) => state.list.filter(el => el > 100);

// Example 2: always returns a new reference
const getAnotherList = (state) => (
  state.list.map(el => state.isLoggedIn ? el : null)
);

// Example 3: always returns a new reference
const getObject = (state) => {
  return {
    locales: state.locales,
    theme: state.theme
  }
};

// Example 4: always returns a new reference if list is empty
const getYetAnotherList = (state) => state.list || []

...
const mapStateToProps = (state) => ({
  list: getList(state),
  list2: getAnotherList(state),
  list3: getYetAnotherList(state),
  obj: getObject(state)
});

This can be solved by memoizing the selectors with the help of libraries like reselect. Let me show you the same examples as above, rewritten with reselect:

import { createSelector } from "reselect";

// Example 1: returns a new reference only if state data changes
const getList = createSelector(
  (state) => state.list,
  (list) => list.filter(el => el > 100)
);

// Example 2: returns a new reference only if state data changes
const getAnotherList = createSelector(
  [
    (state) => state.list,
    (state) => state.isLoggedIn
  ],
  (list, isLoggedIn) => list.map(el => isLoggedIn ? el : null)
);

// Example 3: returns a new reference only if state data changes
const getObject = createSelector(
  [
    (state) => state.locales,
    (state) => state.theme
  ],
  (locales, theme) => {
    return {
      locales,
      theme
    }
  }
);

// Example 4: always returns the same reference if list is empty
// NOTE: no need to use reselect for this
const defaultArray = []
const getYetAnotherList = (state) => state.list || defaultArray

...
const mapStateToProps = (state) => ({
  list: getList(state),
  list2: getAnotherList(state),
  list3: getYetAnotherList(state),
  obj: getObject(state)
});
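The memoization reselect applies is tiny: it caches the last inputs and recomputes only when one of them changes by reference. A simplified sketch of createSelector (illustrative, not the library’s actual code):

```javascript
// Last-value memoization: the result function runs only when one of the
// input selector values changes by reference (Object.is).
const createSelector = (inputSelectors, resultFn) => {
  let lastInputs = null
  let lastResult = null
  return (state) => {
    const inputs = inputSelectors.map(sel => sel(state))
    const changed =
      lastInputs === null ||
      inputs.some((input, i) => !Object.is(input, lastInputs[i]))
    if (changed) {
      lastResult = resultFn(...inputs)
      lastInputs = inputs
    }
    return lastResult
  }
}

// Same state slice in => same result reference out, so connect()
// sees unchanged props and skips the re-render.
const getBigItems = createSelector(
  [(state) => state.list],
  (list) => list.filter(el => el > 100)
)
```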

There is one more frequent mistake people make when assigning functions to event listeners that need to pass data along. For example:

// BAD PRACTICE
const MyComponent = ({ list }) => {
  const onButtonClick = (e, item) => {
    // handle click
  }
  ...
  return (
    <div>
      { list.map((item, index) => (
        <button
          key={ index }
          onClick={ (e) => onButtonClick(e, item) }
        >
          { item.name }
        </button>
      )) }
    </div>
  )
}

As shown above, this creates a new function reference for onClick on every render, so a shallow comparison of props will never match and the button will re-render every time.
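The root cause is easy to demonstrate in plain JavaScript: evaluating an arrow function expression always produces a brand new function object, exactly like calling this (hypothetical) factory twice:

```javascript
// Each render evaluates the inline arrow again, just like calling
// this factory again: same code, a fresh function object every time.
const makeHandler = (item) => (e) => console.log(item, e)

const a = makeHandler('post-1')
const b = makeHandler('post-1')
// a === b is false: identical code, but different references,
// so a shallow props comparison can never treat them as equal
```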

Instead, we should extract the button into a separate component, and the problem goes away.

// GOOD PRACTICE
// Memoize Button and keep the handler reference stable with useCallback,
// so the list items don't re-render when the parent does.
const Button = React.memo(({ item, onButtonClick }) => {
  const onClick = (e) => {
    onButtonClick(e, item)
  }

  return (
    <button onClick={ onClick }>
      { item.name }
    </button>
  )
})

const MyComponent = ({ list }) => {
  const onButtonClick = useCallback((e, item) => {
    // handle click
  }, [])
  ...
  return (
    <div>
      { list.map((item, index) => (
        <Button
          key={ index }
          item={ item }
          onButtonClick={ onButtonClick }
        />
      )) }
    </div>
  )
}

You can find more examples of bad practices and the right way to do them here.

6. Perceived Performance

Once you have tried all sorts of optimization techniques, as your app grows, sooner or later you will have to work on the perceived performance of your app, which might just work in your favour even if the actual performance is not up to the mark. Here is a great presentation on this from Fluent Conf 2017.

The main concept is to make your users think your website is fast by showing them progress bars, loading images in the back, getting data asynchronously and so on. (source)

We have extensively used Shimmers, a concept originally coined by Facebook, all over our app to fill the gap between the API data fetch and the actual UI being rendered on screen, and it has given us a decent perceived performance. Here is an example gif:

Image credit: https://www.uplabs.com/posts/loaderviewlibrary

The Results

Seems like we have talked about a lot of stuff already. Let’s talk about what we have achieved so far.

  • We handle 400,000+ daily active users on the PWA.
  • The entry bundle size is around 170 KB gzipped.
  • The TTI metric is around 4.7s for the landing page on a fast 3G network.
  • The PageSpeed score of our landing page is 82+ on mobile and 95+ on desktop.

The Future

1. New Features

The upcoming features we are planning are content creation, personal/group chat and a follow feed, which will complete the circle of content creation and consumption. We will also keep replicating the existing features on our KaiOS web app as well as on the AMP pages.

2. renderToNodeStream()

(Lighthouse Suggestion)

The ReactDOMServer object enables us to render components to static markup. Currently we are using its most widely used method, renderToString, to send static HTML from our node server. However, there is a well-known issue with it:

In NodeJS, if a function takes a long time to execute, nothing else can happen, including the triggering of I/O callbacks. This is also known as “blocking the event loop”. renderToString is a synchronous operation that blocks the event loop for a considerable amount of time because server-side rendering a React application isn’t cheap.

This blocks the event loop and increases the Time to First Byte (TTFB) metric. As we looked around, we found that an event-loop-friendlier way of rendering React is the renderToNodeStream method, which landed in React v16 and makes use of NodeJS Streams, part of the asynchronous, non-blocking I/O family. Therefore, we are planning to migrate from renderToString to renderToNodeStream very soon.
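The difference is easy to picture without React: renderToString builds the whole string before the first byte leaves the server, while a stream hands chunks to the network as they are produced. A toy illustration of the two shapes (a simplification, not React’s actual rendering code):

```javascript
// Blocking shape: nothing can be sent until the whole page is built,
// and the event loop is held for the entire duration.
const renderAllAtOnce = (items) =>
  '<ul>' + items.map(i => `<li>${i}</li>`).join('') + '</ul>'

// Streaming shape: each chunk can be flushed to the socket as soon as
// it is ready, letting the event loop breathe between chunks.
function* renderInChunks(items) {
  yield '<ul>'
  for (const i of items) yield `<li>${i}</li>`
  yield '</ul>'
}

// With React on an Express-style server, the same idea looks like:
//   ReactDOMServer.renderToNodeStream(<App/>).pipe(res)
```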

3. React 17.0

With the React.lazy and Suspense APIs having landed recently, it’s been announced that React 17.x will build further on React Fiber 😍🤩. It will focus on asynchronous rendering of components, which will minimize the impact of computing power and network speed on the user experience, and we’re excited!

For this purpose, the existing lifecycle methods componentWillMount, componentWillReceiveProps and componentWillUpdate are being replaced by the new getDerivedStateFromProps and getSnapshotBeforeUpdate methods. The old ones will only be supported with the “UNSAFE_” prefix, which is why everyone is advised to migrate accordingly.

4. Gzip vs Brotli

According to a study of the top 1000 websites, it’s pretty evident how much the newer open-source text compression algorithm Brotli can help in saving bytes and making compression and decompression more efficient.

Javascript files compressed with Brotli are 14% smaller than gzip. HTML files are 21% smaller while CSS files are 17% smaller than gzip.

What this means is clearer in the following image, which compares uncompressed, gzipped and brotlified files’ load time, render time and size over a slow 3G network (source):

Here is the support for Brotli in major browsers:

Clearly, it makes sense to serve the static text files with Brotli as the preferred compression and with Gzip as the fallback.

5. Jpeg vs WebP

WebP is a modern image format which provides better lossy and lossless compression for images on the web. It offers file sizes that are around 25–34% smaller than JPEG without a quality gap. It also provides transparency (alpha channel) like PNG, and the ability to animate images like the GIF format.

Here is the support for WebP in major browsers:

As shown above, browsers like Safari and IE still don’t support WebP. However, they support other next-generation formats like JPEG 2000 and JPEG XR. In any case, we can always use the good old JPEG or PNG as a fallback.
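One common way to serve WebP with a JPEG fallback from the server side is to check the request’s Accept header, since browsers that can decode WebP advertise `image/webp` there. A minimal sketch (`pickImageUrl` is a hypothetical helper, not our production code):

```javascript
// Picks an image variant based on the browser's Accept header.
// Browsers that can decode WebP include "image/webp" in Accept.
const pickImageUrl = (acceptHeader, baseName) => {
  const supportsWebP = (acceptHeader || '').includes('image/webp')
  return supportsWebP ? `${baseName}.webp` : `${baseName}.jpg`
}

// Chrome-like header gets the WebP variant:
pickImageUrl('image/webp,image/apng,*/*;q=0.8', 'cdn/post-123')
// A browser without WebP support gets the JPEG fallback:
pickImageUrl('image/png,image/*;q=0.8', 'cdn/post-123')
```

On the client side, the same fallback can be expressed declaratively with a `<picture>` element listing a WebP `<source>` before the JPEG `<img>`.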

According to HTTP Archive, as of November 2018, images make up, on average, 21% of a total webpage’s weight. For us, as a social media platform, it’s more than 60% of the total weight. Therefore, moving to a better-optimized image format can offer significant benefits.

6. HTTP 3.0

HTTP/3, also referred to as HTTP-over-QUIC, is the upcoming major version of HTTP for exchanging information over the internet. It uses a new transport protocol called QUIC instead of TCP. It’s currently available as experimental support in Chrome Canary, Safari Technology Preview, cURL etc., and is yet to become available in mainstream browsers and from service providers.

The problems with HTTP/2 and how the upgrade solves them are well explained by Cloudflare here.

The role of TCP is to deliver the entire stream of bytes, in the correct order, from one endpoint to the other. When a TCP packet carrying some of those bytes is lost on the network path, it creates a gap in the stream and TCP needs to fill it by resending the affected packet when the loss is detected. While doing so, none of the successfully delivered bytes that follow the lost ones can be delivered to the application, even if they were not themselves lost and belong to a completely independent HTTP request. So they end up getting unnecessarily delayed as TCP cannot know whether the application would be able to process them without the missing bits. This problem is known as “head-of-line blocking”.
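The head-of-line blocking described above can be modelled as a toy in-order delivery queue (a deliberate simplification, nothing to do with real TCP internals):

```javascript
// Toy model of TCP head-of-line blocking: bytes must reach the
// application in sequence order, so a single lost packet holds back
// every packet behind it, even ones for unrelated HTTP requests.
const deliverInOrder = (receivedSeqs) => {
  const delivered = []
  let next = 0
  for (const seq of [...receivedSeqs].sort((a, b) => a - b)) {
    if (seq !== next) break // gap: everything after the loss must wait
    delivered.push(seq)
    next++
  }
  return delivered
}

// Packet 2 was lost in transit: 3 and 4 arrived fine but stay stuck.
deliverInOrder([0, 1, 3, 4]) // → [0, 1]
```

QUIC sidesteps this by making streams independent at the transport layer, so a loss on one stream no longer stalls the others.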

Bonus Snack

Good job making it to the bottom of this article. Here is a whole wheat sandwich made in CSS for you.

.sandwich {
  width: 100px;
  height: 100px;
  background-color: wheat;
  border-radius: 20px;
}

/* since there are two kinds of people */
.sandwich.peanut-butter {
  content: "peanut butter";
}
.sandwich.jam {
  content: "jam";
}

ShareChat, India’s trending 🔥 vernacular social media app with 100,000,000+ users.
