Optimizing JS for Native-Like Webviews

Leo Jiang
Mar 26 · 6 min read

We at Lime recognized early on that one of the main bottlenecks in our product development process was the mobile release cycle. For both Android and iOS, it can take over two weeks for native code changes to reach users: we publish batched changes weekly, then it takes a few days for Google or Apple to approve them and a few more days for users to update their apps. To speed up development, we explored building mobile experiments in HTML/JavaScript and running them in a webview framework. This way, our app loads web pages from our servers, which we can update in minutes.

When designing this platform, our main challenge was loading web pages quickly enough to feel native. Our platform loads files from the Internet, which is slower than using local native code. Other apps usually display an empty screen while loading, but we wanted to avoid this. We also had other challenges such as making the UI look native and authenticating users seamlessly, but these were relatively easy to address. This article will explain how we optimized our web stack to serve content fast enough to feel native.

User Flow

Here’s an overview of how our webview platform works:

  1. A user navigates to a webview page
  2. The app creates a webview
  3. The app sets cookies in the webview containing the user’s auth token, language, location, etc.
  4. The webview loads an HTML file from a CDN
  5. The CDN runs VCL scripts to fetch an HTML file from S3, based on the user’s language
  6. The webview renders the initial screen using the HTML and embedded critical CSS
  7. The webview loads external dependencies: CSS, JS, images, and fonts
  8. Preact (lightweight React) runs and updates the DOM (there should be no changes)
  9. JS uses the auth token in a cookie to fetch data from our Rails server
  10. Preact re-renders based on the fetched data
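As a concrete sketch of step 9, here is roughly how the page's JS might fetch user data using the cookies the app set; the endpoint path is hypothetical, and `credentials: 'include'` is what makes the browser attach those cookies to the request.

```javascript
// Sketch of step 9: the page's JS calls our Rails API, relying on the
// auth-token and locale cookies the native app set on the webview.
// The endpoint path below is hypothetical.
function apiRequestInit() {
  return {
    method: 'GET',
    credentials: 'include', // forward the cookies set by the native app
    headers: { Accept: 'application/json' },
  };
}

async function fetchUserData(path) {
  const res = await fetch(path, apiRequestInit());
  if (!res.ok) throw new Error(`API request failed: ${res.status}`);
  return res.json();
}

// e.g. fetchUserData('/api/limepass').then(renderWithData);
```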

After Preact runs, the webview becomes nearly indistinguishable from a native page. However, users won’t see any content until the initial HTML renders, and they can’t interact with the page until the JS runs. For native pages, both of these steps are nearly instantaneous. To deliver a native-like experience, we had to significantly reduce the loading time of both the initial screen and the JS files.

Rendering Initial Screen Quickly

It’s much faster to render content using static HTML and inline CSS than waiting for JS to load. We could write raw HTML and use a DOM manipulation library, but Lime’s other frontends are all React or Vue, so we decided to do the same here. To generate static HTML with a JS UI library like React, we’d need either server-side rendering (SSR) or a static site generator (SSG). SSR can be slow with high traffic unless we invest significantly in our infrastructure, so we went with SSG.

We weren’t happy with the existing SSGs because they weren’t optimized enough for speed. Therefore, we built our own SSG using preact-render-to-string. During the build process, we use EJS templates to generate a separate static HTML file for each experiment and each language. For example, we’d have limepass-en.html, limepass-es.html, loyalty-en.html, loyalty-fr.html, etc. We also use PurgeCSS to embed critical CSS. Then, we upload the static HTML files to S3.

Since our mobile clients don’t know which languages are available for each experiment, we use Fastly CDN’s VCL scripts to dynamically serve the correct HTML file based on the user’s language. During the build, we upload the supported languages to Fastly’s key-value store.
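In JS terms, the edge logic amounts to a lookup against the supported-language map with a fallback. A sketch of what the VCL script decides (the fallback to English is an assumption for illustration):

```javascript
// JS sketch of the edge logic our Fastly VCL performs: pick the HTML file
// matching the user's language, falling back to English. The map mirrors the
// supported languages we upload to Fastly's key-value store at build time.
function resolveHtmlFile(experiment, userLang, supportedLanguages) {
  const langs = supportedLanguages[experiment] || [];
  const lang = langs.includes(userLang) ? userLang : 'en'; // assumed fallback
  return `${experiment}-${lang}.html`;
}
```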

SSG lets users see the initial screen much sooner than rendering it with JS. One limitation, however, is that users still need to wait for JS to load before seeing dynamic content. To bridge the gap, we render placeholders in our static HTML files and progressively replace them with dynamic data. For example, the price for our LimePass experiment can differ per user, so we can’t include it in the static HTML file.
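A sketch of the placeholder swap; the element id and the price fields are hypothetical:

```javascript
// Format a price in cents as a display string, e.g. 499 -> "$4.99".
function renderPrice(cents, currency) {
  return new Intl.NumberFormat('en-US', { style: 'currency', currency })
    .format(cents / 100);
}

// Once the API responds, replace the static placeholder with the user's
// actual price. The element id and data shape are assumptions.
function hydratePrice(data) {
  if (typeof document === 'undefined') return; // no-op during SSG
  const el = document.getElementById('limepass-price');
  if (el) el.textContent = renderPrice(data.priceCents, data.currency);
}
```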

[Screenshot: the page fully rendered after JS loads]

[Screenshot: the initial static HTML, with placeholders for dynamic data]

On our office network, the browser downloads and renders the HTML file in 150ms. If we instead relied on the CSS and JS files to render content, it would take 500ms.

Loading JS Quickly

Before the page can load dynamic data or be interactive, JS must be loaded. To load JS quickly, there are several things we can do:

  • Have fewer JS files
  • Make JS files smaller
  • Transfer JS files faster

The normal way to build JS files is to have one entry point which conditionally loads other JS files as “chunks”. However, this is slower because of the extra network requests. Instead, we built a single JS file for each experiment, with all the critical dependencies included, so only one JS network request is needed.

A typical build would produce:

js/
  chunks/
    en.js
    es.js
    fr.js
    limepass.js
    loyalty.js
  main.js

With that layout, the browser first loads main.js, which then loads a language chunk and an experiment chunk. Our output instead looks like:

js/
  limepass-en.js
  limepass-es.js
  loyalty-en.js
  loyalty-fr.js

All the JS a page needs is included in a single file. We still use chunks for non-critical dependencies such as Amplitude logging, which we load after Preact renders.

There are nearly endless ways to reduce JS file size, so we’ll give a brief summary here and share more details in a separate blog post. Our build uses Webpack, Babel, and SASS. Some of what we did:

  • Replaced oversized polyfills
    E.g. core-js’s Symbol polyfill was 3.5kB, but all we needed was its constructor, so we aliased it with a 30B polyfill
  • Trimmed heavy dependencies
    E.g. we re-implemented some Lodash functions that were adding 25% to the bundle size
  • Disabled async/await
    Async/await polyfills are huge, and since our experiments are simple, the trade-off between code cleanliness and file size isn’t worth it
  • Improved tree-shaking
    E.g. many modules weren’t treated as ES modules, which we fixed

For comparison, an empty page generated with Gatsby loads 5 JS files totaling 300kB, while our framework loads a single 40kB JS file.

We use a CDN that handles most delivery optimizations for us: low-latency infrastructure, geographically close edge servers, and HTTP/2. On top of that, we added Brotli support by uploading Brotli-compressed files to S3; a VCL script serves the Brotli version when the user’s browser supports it.

Results

We’ve successfully launched several experiments on our webview platform. We were able to build, launch, and iterate on experiments much more quickly than building them natively. On average, it takes about 3 minutes for webview changes to reach users, compared to 2 weeks for native apps.


Making the content load quickly was crucial to making the experiments feel native. There are still performance gains to be made on the mobile and infra sides, such as pre-fetching and better caching. If you’re interested in working on challenging problems at Lime, check out our career page.

Lime Engineering

