When designing this platform, our main challenge was loading web pages quickly enough to feel native. Our platform loads files from the Internet, which is slower than using local native code. Other apps usually display an empty screen while loading, but we wanted to avoid this. We also had other challenges such as making the UI look native and authenticating users seamlessly, but these were relatively easy to address. This article will explain how we optimized our web stack to serve content fast enough to feel native.
Here’s an overview of how our webview platform works:
- A user navigates to a webview page
- The app creates a webview
- The app sets cookies in the webview containing the user’s auth token, language, location, etc.
- The webview loads an HTML file from a CDN
- The CDN runs VCL scripts to fetch an HTML file from S3, based on the user’s language
- The webview renders the initial screen using the HTML and embedded critical CSS
- The webview loads external dependencies: CSS, JS, images, and fonts
- Preact (a lightweight React alternative) runs and hydrates the DOM (ideally with no visible changes)
- JS uses the auth token in a cookie to fetch data from our Rails server
- Preact re-renders based on the fetched data
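The cookie-based authentication in the flow above can be sketched as follows. This is an illustrative sketch, not our actual client code: the cookie names (`auth_token`, `locale`) and the API endpoint in the comment are assumptions.

```javascript
// Parse a document.cookie-style string into a map. The cookie names
// used below ("auth_token", "locale") are illustrative.
function parseCookies(cookieString) {
  const cookies = {};
  for (const pair of cookieString.split(';')) {
    const idx = pair.indexOf('=');
    if (idx === -1) continue;
    const name = pair.slice(0, idx).trim();
    const value = decodeURIComponent(pair.slice(idx + 1).trim());
    cookies[name] = value;
  }
  return cookies;
}

// In the webview, the app has already set these cookies, so the page
// can authenticate its API calls without showing a login screen:
//   const { auth_token } = parseCookies(document.cookie);
//   fetch('/api/experiment_data', {
//     headers: { Authorization: `Bearer ${auth_token}` },
//   }).then(res => res.json()).then(render);
```

Because the native app sets the cookies before the page loads, the first data request can go out as soon as the JS runs, with no interactive login step.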
After Preact runs, the webview should become nearly indistinguishable from a native page. However, users won’t see any content until the initial HTML renders, and they can’t interact with the page until JS runs. In contrast, for native pages, both of these steps are nearly instantaneous. To deliver a native-like experience, we managed to significantly reduce the loading time for both the initial screen and the JS files.
Rendering Initial Screen Quickly
It’s much faster to render content using static HTML and inline CSS than waiting for JS to load. We could write raw HTML and use a DOM manipulation library, but Lime’s other frontends are all React or Vue, so we decided to do the same here. To generate static HTML with a JS UI library like React, we’d need either server-side rendering (SSR) or a static site generator (SSG). SSR can be slow with high traffic unless we invest significantly in our infrastructure, so we went with SSG.
We weren’t happy with the existing SSGs because they weren’t optimized enough for speed. Therefore, we built our own SSG using preact-render-to-string. During the build process, we use EJS templates to generate a separate static HTML file for each experiment and each language; for example, we’d have loyalty-fr.html, etc. We also use PurgeCSS to embed critical CSS. Then, we upload the static HTML files to S3.
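The build step can be sketched as below. This is a simplified illustration: the real pipeline renders components with preact-render-to-string through EJS templates, while the sketch stands in a plain function; the `renderPage` and `buildAll` names, the experiment data, and the CSS are all made up for the example.

```javascript
// Simplified sketch of the SSG build: one static HTML file per
// experiment per language, with critical CSS inlined so the first
// paint needs no extra network request. All names are illustrative.
const experiments = {
  loyalty: { fr: 'Fidélité', en: 'Loyalty' },
};

const criticalCss = 'body{margin:0;font-family:sans-serif}';

function renderPage(lang, title) {
  // In the real build, preact-render-to-string produces this markup.
  return `<!doctype html><html lang="${lang}"><head>` +
    `<style>${criticalCss}</style></head>` +
    `<body><h1>${title}</h1></body></html>`;
}

function buildAll() {
  const files = {};
  for (const [experiment, langs] of Object.entries(experiments)) {
    for (const [lang, title] of Object.entries(langs)) {
      // e.g. "loyalty-fr.html"
      files[`${experiment}-${lang}.html`] = renderPage(lang, title);
    }
  }
  return files; // the real build writes these out and uploads them to S3
}
```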
Since our mobile clients don’t know which languages are available for each experiment, we use Fastly CDN’s VCL scripts to dynamically serve the correct HTML file based on the user’s language. During the build, we upload the supported languages to Fastly’s key-value store.
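The edge-side selection can be sketched in JS for illustration (the real logic runs in Fastly VCL against the key-value store). The fallback order and the "en" default here are assumptions, not our exact edge configuration.

```javascript
// Sketch (in JS, for illustration) of what the VCL script does at the
// edge: pick the best supported language for an experiment, falling
// back from an exact match to the base language to a default.
function pickHtmlFile(experiment, requestedLang, supportedLangs) {
  const lang = requestedLang.toLowerCase();
  const base = lang.split('-')[0]; // "fr-CA" -> "fr"
  let chosen = 'en'; // assumed default
  if (supportedLangs.includes(lang)) chosen = lang;
  else if (supportedLangs.includes(base)) chosen = base;
  return `${experiment}-${chosen}.html`;
}
```

Keeping the supported-language set in Fastly’s key-value store means the edge can answer without a round trip to origin, and the build can update the set without redeploying the VCL.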
SSG lets users see the initial screen much sooner than rendering it with JS. However, an issue with SSG is that users still need to wait for JS to load before seeing dynamic content. We render placeholders in our static HTML files and progressively replace them with dynamic data. For example, the price for our LimePass experiment can differ per user, so we can’t include it in the static HTML file.
[Screenshot: the page fully rendered after JS loads]
On our office network, the browser downloads and renders the HTML file in 150ms. If we relied on the CSS and JS files to render content, it would take 500ms.
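The placeholder swap can be sketched as below. The `{{price}}` marker syntax and the `fillPlaceholders` name are illustrative, not our actual template syntax.

```javascript
// Sketch of progressive placeholder replacement: the static HTML ships
// with placeholder markers, and once JS has fetched user-specific data
// it swaps them in. Unknown markers are left untouched.
function fillPlaceholders(html, data) {
  return html.replace(/\{\{(\w+)\}\}/g, (match, key) =>
    key in data ? String(data[key]) : match);
}
```

In practice the swap happens when Preact re-renders with the fetched data; the string-replacement version above just shows the shape of the idea.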
Loading JS Quickly
Before the page can load dynamic data or be interactive, JS must be loaded. To load JS quickly, there are several things we can do:
- Have fewer JS files
- Make JS files smaller
- Transfer JS files faster
A Single JS File Per Page
The normal way to build JS files is to have one entry point which conditionally loads other JS files as “chunks”. However, this is slower because of the extra network requests. Instead, we built a single JS file for each experiment, with all the critical dependencies included, so only one JS network request is needed.
A typical build would produce:
```
js/
  chunks/
    en.js
    es.js
    fr.js
    limepass.js
    loyalty.js
  main.js
```
First, the browser loads main.js, which then loads a language chunk and an experiment chunk. Our output files look like:
```
js/
  limepass-en.js
  limepass-es.js
  loyalty-en.js
  loyalty-fr.js
```
All the JS a page needs is included in a single file. We still use chunks for non-critical dependencies such as Amplitude logging, which we load after Preact renders.
Optimizing JS File Size
There is a nearly endless list of things you could do to reduce JS file size, so we’ll provide a brief summary here and share more details in a separate blog post. Our build uses Webpack, Babel, and SASS.
- Used Google Closure Compiler for compression
- Used Spritesmith and inline SVGs to have at most one image file per page
- Wrote a css-loader extension to generate shortened CSS classes
- Used Webpack Bundle Analyzer
  - E.g. re-implemented some Lodash functions that were adding 25% to the bundle size
- Disabled async/await
  - Async/await polyfills are huge; the trade-off between code cleanliness and file size isn’t worth it because our experiments are simple
- Used Webpack’s optimizationBailout
  - E.g. many modules weren’t treated as ES modules, which we fixed
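A shortened-class generator like the one in our css-loader extension might look like the sketch below. The base-52 naming scheme and the `shorten` API are assumptions for illustration; our actual extension may differ.

```javascript
// Sketch of deterministic short class names: each unique original class
// gets the next name in the sequence a, b, ..., z, A, ..., Z, aa, ab, ...
// e.g. the first class seen ("cart-item-price") becomes "a".
const letters = 'abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ';
const assigned = new Map();

function shortName(index) {
  let name = '';
  let i = index;
  do {
    name = letters[i % letters.length] + name;
    i = Math.floor(i / letters.length) - 1;
  } while (i >= 0);
  return name;
}

function shorten(originalClass) {
  if (!assigned.has(originalClass)) {
    assigned.set(originalClass, shortName(assigned.size));
  }
  return assigned.get(originalClass);
}
```

The mapping must be shared between the CSS build and the HTML/JS build so that both sides agree on the short names.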
As a comparison, an empty page generated with Gatsby loads 5 JS files totaling 300kB, while our framework would load a single 40kB JS file.
Transferring JS Files Quickly
We use a CDN, which handles most delivery optimizations for us: it uses low-latency infrastructure, serves files from the geographically closest servers, and supports HTTP/2. In addition, we added Brotli support by uploading Brotli-compressed files to S3. A VCL script serves the Brotli version if the user’s browser supports it.
We’ve successfully launched several experiments on our webview platform. We were able to build, launch, and iterate on experiments much more quickly than building them natively. On average, it takes about 3 minutes for webview changes to reach users, compared to 2 weeks for native apps.
Making the content load quickly was crucial for making the experiment feel native. We can still make a lot of performance gains on the mobile or infra sides, such as pre-fetching or better caching. If you’re interested in working on challenging problems at Lime, check out our career page.