Offline with Service Workers

Anthony Good
DAZN Engineering
Oct 9, 2017

Service workers are a new browser technology which allows you to proxy network requests. Essentially this means intercepting requests and returning a response depending on whatever logic you want in place.

The main reason anyone would ever want to ponder such a thing is that it’s the first step in creating an app which can function 100% offline.

Service workers are entirely asynchronous by design, and run in a separate thread from the main JavaScript of your page. Because of this, their logic needs to be served in a separate file, and can’t use synchronous APIs.

There are three types of event which comprise the service worker lifecycle:

  1. install — fired after the worker has been registered and the script downloaded by the browser.
  2. activate — fired after installation, once the service worker becomes active. Bear in mind if there’s a previous version of a service worker active (for instance, being used by another tab or window) then the latest service worker won’t become active until the old one has been ‘released’. If this displeases you, you can call skipWaiting() in the installation phase.
  3. fetch — fired whenever a network request is made to a URL within a given service worker’s scope.
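That opt-out during installation can be as small as the following sketch (the shim line only exists so the file also loads outside a worker context; in a real worker, self is the worker’s global scope):

```javascript
// Shim so this sketch also loads outside a worker context (e.g. for testing):
const self = globalThis.self ?? { addEventListener: () => {}, skipWaiting: () => Promise.resolve() };

function onInstall(event) {
  // Promote this worker immediately instead of waiting for the previous
  // one to be released by open tabs and windows.
  event.waitUntil(self.skipWaiting());
}

self.addEventListener('install', onInstall);
```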

If we wanted to make our application available offline, we might use these lifecycle events like so:

  1. install — create a cache of the resources we want to use offline (even including the current page).
  2. activate — clean up any old cache(s) if there may have been previous versions of a service worker installed.
  3. fetch — if we’re connected, we make the request, cloning the response into the cache for later, and return the response. If we’re offline, we return the resource from the cache instead.

This strategy is known as “network first”, because we always use the network response when it’s available. But there are many other strategies available, such as “cache only”; “cache first”, where you return the resource from the cache when it’s there and only hit the network when it isn’t; “stale-while-revalidate”, where you return the resource from the cache (very fast) and update the cache in the background (possibly slow); or the excitingly named “cache and network race”, where you race a cache-read against a network request and return whichever comes back first.
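As a taste of that last one, a race might be sketched like so (cacheNetworkRace is a hypothetical helper; a cache miss is turned into a rejection so that an empty cache can’t win the race):

```javascript
// Race the cache against the network and respond with whichever
// fulfils first. caches.match resolves to undefined on a cache miss,
// so reject in that case to let the network response win.
function cacheNetworkRace(request) {
  const fromCache = caches.match(request).then(cached => {
    if (!cached) throw new Error('cache miss');
    return cached;
  });
  return Promise.any([fromCache, fetch(request)]);
}
```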

But the key ingredient to all these approaches is proxying network requests with service workers.

So how could we implement our app-caching service worker?

First, we create a service-worker.js file (or whatever your heart desires) with an install hook:
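A sketch of what that hook might look like (the list of resources to pre-cache is purely illustrative, and the shim line only exists so the file also loads outside a worker context):

```javascript
const CACHE_NAME = 'cache-v2';
// Illustrative list of resources the app needs offline.
const PRECACHE_URLS = ['/', '/styles.css', '/app.js'];

// Shim so this sketch also loads outside a worker context (e.g. for testing);
// inside a real service worker, `self` is the worker's global scope.
const self = globalThis.self ?? { addEventListener: () => {} };

// Open (or create) the named cache and store every listed resource in it.
function precache() {
  return caches.open(CACHE_NAME).then(cache => cache.addAll(PRECACHE_URLS));
}

self.addEventListener('install', event => {
  // Hold the worker in the installing state until pre-caching completes.
  event.waitUntil(precache());
});
```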

When this service worker is installed, it opens a cache (‘cache-v2’ in this case) and saves all the listed resources in the cache.

But we also need to register the service worker in our main application code. Registration can be as simple as:
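For instance, a sketch along these lines (the registerWorker wrapper and its container parameter are not part of any browser API; they only make the snippet easy to exercise outside a browser):

```javascript
// In the page's main script, not in service-worker.js.
function registerWorker(container = globalThis.navigator?.serviceWorker) {
  // Feature-detect: not every browser exposes navigator.serviceWorker.
  if (!container) return Promise.resolve(null);
  return container.register('/service-worker.js');
}

registerWorker().then(reg => {
  if (reg) console.log('service worker registered, scope:', reg.scope);
});
```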

Note that we’re not passing a scope option here, so the default scope will be applied. This is equivalent to the following registration call: register('/service-worker.js', { scope: './' }). The scope defines which URLs will be proxied by the service worker, and by default this is all URLs at or underneath the current page.

At this point you should be able to inspect the ‘cache-v2’ created in the browser’s Cache Storage.

Inspecting the cache in Chrome’s dev tools.

Since we’re creating a cache, we should prepare for the day when we want to tidy up after ourselves. We’ll add some logic for cleaning up redundant caches:
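One possible shape for that logic (a sketch; removeOldCaches is our name for the helper, and the shim line only exists so the file also loads outside a worker context):

```javascript
const CACHE_NAME = 'cache-v2';

// Shim so this sketch also loads outside a worker context (e.g. for testing):
const self = globalThis.self ?? { addEventListener: () => {} };

// Grab every cache key on this origin, keep only the redundant ones,
// delete them all, and collapse the deletions into a single promise.
function removeOldCaches() {
  return caches
    .keys()
    .then(keys => keys.filter(key => key !== CACHE_NAME))
    .then(oldKeys => oldKeys.map(caches.delete.bind(caches)))
    .then(Promise.all.bind(Promise));
}

self.addEventListener('activate', event => {
  // Keep the activation phase open until all deletions have completed.
  event.waitUntil(removeOldCaches());
});
```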

A little more complex: we grab all the current cache keys available to this origin, filter out the current (active) cache, and delete everything else. We pass our array of promises to Promise.all to get a single promise which only resolves once all the deletions have completed. We finally pass this mother-of-all-promises to event.waitUntil, which prevents the activation phase from closing out before its operations have finished.

You might find the use of bind a bit odd here: first with caches.delete.bind(caches) and then shortly after with Promise.all.bind(Promise). This is because caches.delete and Promise.all are actually methods (i.e. they belong to a class or instance). As such their implementations rely on the this keyword, which won’t point to the right object if we pass an unbound function reference to map or then.

In this case, we might prefer to instead pass anonymous functions and call those methods directly on the objects to which they belong:
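Rewritten that way, the clean-up sketch becomes (same hypothetical removeOldCaches helper, no bind required):

```javascript
const CACHE_NAME = 'cache-v2';

// The same clean-up, with anonymous functions calling each method
// directly on the object it belongs to, so no bind is needed.
function removeOldCaches() {
  return caches
    .keys()
    .then(keys => keys.filter(key => key !== CACHE_NAME))
    .then(oldKeys => Promise.all(oldKeys.map(key => caches.delete(key))));
}
```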

The final piece of the puzzle is hooking into the ‘fetch’ event:
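A minimal sketch (again, the shim line only exists so the file also loads outside a worker context):

```javascript
// Shim so this sketch also loads outside a worker context (e.g. for testing):
const self = globalThis.self ?? { addEventListener: () => {} };

// Try the network; if the request fails (e.g. we're offline),
// fall back to whatever the cache holds for this request.
function responseOrCache(request) {
  return fetch(request).catch(() => caches.match(request));
}

self.addEventListener('fetch', event => {
  event.respondWith(responseOrCache(event.request));
});
```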

Simple: we declare a responseOrCache function which tries to fetch the resource; if the fetch request fails (e.g. if we’re offline) then the rejected promise is caught and we get the resource from the cache instead.

A fuller implementation might involve updating the cache item whenever the request is successful. For that, we might add a function to add a single resource to the cache:
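Such a helper might look like this (updateCache is a hypothetical name we’re introducing for it):

```javascript
const CACHE_NAME = 'cache-v2';

// Store a single request/response pair in the named cache.
// We put a clone in the cache and hand the original back, because a
// response body can only be consumed once.
function updateCache(request, response) {
  return caches
    .open(CACHE_NAME)
    .then(cache => cache.put(request, response.clone()))
    .then(() => response);
}
```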

Notice we clone the response, since response objects are intended to be consumed once only.

Now we can use our new function in our ‘fetch’ listener:
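Putting it together, a sketch (updateCache, under the same hypothetical name, is repeated here so the snippet stands alone; the shim line only exists so the file also loads outside a worker context):

```javascript
const CACHE_NAME = 'cache-v2';

// Shim so this sketch also loads outside a worker context (e.g. for testing):
const self = globalThis.self ?? { addEventListener: () => {} };

// From the previous step: put a clone in the cache, hand the original back.
const updateCache = (request, response) =>
  caches
    .open(CACHE_NAME)
    .then(cache => cache.put(request, response.clone()))
    .then(() => response);

// Network first: cache successful responses as they pass through,
// and fall back to the cache when the network fails.
function responseOrCache(request) {
  return fetch(request)
    .then(response => updateCache(request, response))
    .catch(() => caches.match(request));
}

self.addEventListener('fetch', event => {
  event.respondWith(responseOrCache(event.request));
});
```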

The beauty of service workers is that, because they transparently trap network requests, the rest of your application doesn’t need to care about caching or special cases for loss of connectivity — instead your application can focus on how it should display or manipulate the resources it requests, paying no attention to whether they come from the cache or the network. Or, to put it another way, your caching logic is completely decoupled from your application logic.
