Going Offline with Web Applications, Part 1: Requesting, Storing, and Using Data
When we’re building web applications at Cognite, we often run into a big challenge: How do we present data to users in extreme working environments where internet connectivity is an issue?
In order to improve the mobile experience for such users, we started to look into offline support capabilities powered by service workers. Service workers give web apps the ability to intercept and handle network requests, including programmatically managing a cache of responses. By using service worker caching, we can provide users with a seamless app experience while transitioning between online and offline states.
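For readers new to service workers, the basic pattern is a worker that intercepts requests and answers them from a cache before going to the network. The file below is only a generic, minimal sketch of that pattern (the cache name and precached paths are placeholders, not InField's actual worker):

```typescript
// sw.ts — a minimal cache-first service worker (assumes the "webworker" TS lib).
/// <reference lib="webworker" />
declare const self: ServiceWorkerGlobalScope;
export {};

const CACHE_NAME = 'app-shell-v1'; // placeholder cache name

self.addEventListener('install', (event) => {
  // Precache a minimal app shell so the app can at least boot offline.
  event.waitUntil(
    caches.open(CACHE_NAME).then((cache) => cache.addAll(['/', '/index.html']))
  );
});

self.addEventListener('fetch', (event) => {
  // Answer from the cache when possible; fall back to the network otherwise.
  event.respondWith(
    caches.match(event.request).then((cached) => cached ?? fetch(event.request))
  );
});
```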
From the office to the field
Users of our app, InField, are offshore field workers who perform maintenance on equipment. They start their day in the office, creating checklists of equipment to work on while out in the field. Each item on the checklist contains valuable information about the equipment, including documents, time series data, work orders, work permits, and the location of the equipment on a 3D model of the installation.
Before going out in the field (and while still connected to a wireless network), the user selects a checklist they are going to work on and taps the “Enable offline mode” button on their smartphone. Behind the scenes, InField caches all the data associated with each item on the checklist so the user will be able to access it even without internet connectivity.
Let’s take a deep dive into how InField handles this functionality.
Requesting data
Our back end provides a normalized industrial RESTful API. Fetching the data related to a checklist’s items requires sending thousands of requests to that API. Modern browsers only allow a handful of parallel requests at a time, so issuing that many requests directly from the browser would normally tie up the main thread and prevent the user from continuing to interact with the app. To avoid this, we introduced a caching service that executes the requests on the client’s behalf and returns the results in chunks.
So what happens under the hood when the user taps that “Enable offline mode” button on their phone?
- The client sends GET /cache/requests to the caching service, providing the ID of the checklist to be cached.
- The caching service resolves the checklist ID against the API into a list of thousands of distinct requests and sends that list back to the client to be executed later.
- The client compares the list against its local cache and removes the requests that are already satisfied.
- The client sends GET /cache/responses to the caching service with the requests that are missing from the cache.
- The caching service executes all the provided requests against the API and returns the request-response pairs to the client.
- The client stores the pairs in client-side storage, as sketched below.
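Condensed into code, the client side of this flow might look roughly like the following. The /cache/requests and /cache/responses paths come from the steps above; the query-parameter shapes, type names, and the use of localForage (covered in the next section) are illustrative assumptions rather than our actual implementation, and chunked delivery of results is left out for brevity.

```typescript
import localforage from 'localforage';

// Illustrative shape: the caching service pairs each normalized request key
// with the response payload it fetched from the API on the client's behalf.
interface CachedPair {
  key: string;       // normalized request key, e.g. '/v1/assets/ids/<assetId>'
  response: unknown; // response payload to persist locally
}

async function isAlreadyCached(key: string): Promise<boolean> {
  return (await localforage.getItem(key)) !== null;
}

async function enableOfflineMode(checklistId: string): Promise<void> {
  // 1. Ask the caching service which API requests the checklist resolves to.
  const listRes = await fetch(`/cache/requests?checklistId=${checklistId}`);
  const requestKeys: string[] = await listRes.json();

  // 2. Keep only the requests that the local cache does not already satisfy.
  const missing: string[] = [];
  for (const key of requestKeys) {
    if (!(await isAlreadyCached(key))) missing.push(key);
  }

  // 3. Have the caching service execute the missing requests against the API.
  const pairsRes = await fetch(
    `/cache/responses?keys=${encodeURIComponent(missing.join(','))}`
  );
  const pairs: CachedPair[] = await pairsRes.json();

  // 4. Persist the request/response pairs in client-side storage.
  await Promise.all(pairs.map((p) => localforage.setItem(p.key, p.response)));
}
```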
Storing data
When the client receives data from the caching service, it needs to store it in client-side storage. We use IndexedDB, with the localForage library as a wrapper around the IndexedDB API; localForage provides a nice Promise-based syntax similar to localStorage’s setItem and getItem methods.
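For example, storing a response under its normalized request key and reading it back is a one-liner each way. The store name and sample payload below are made up for illustration:

```typescript
import localforage from 'localforage';

// Optional configuration; localForage uses IndexedDB as its driver when available.
localforage.config({ name: 'offline-cache' }); // store name is illustrative

async function roundTrip(): Promise<void> {
  // Persist a response payload under its normalized request key...
  await localforage.setItem('/v1/assets/ids/123', { id: 123, name: 'Pump A-101' });

  // ...and read it back later, even after a page reload.
  const asset = await localforage.getItem('/v1/assets/ids/123');
  console.log(asset); // { id: 123, name: 'Pump A-101' }
}
```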
All client-side storage solutions, including IndexedDB, come with storage limits on mobile platforms. At the moment, the limits are 1 GB on iOS and 2 GB on Android (with a group limit per domain). An average checklist can contain hundreds of megabytes of data, and since we are so limited in how much we can cache, we have to think carefully about data deduplication.
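Before caching a checklist that weighs in at hundreds of megabytes, it can also be worth asking the browser how much room is left. The helper below is a small sketch built on the standard StorageManager API; the figures browsers report are only estimates:

```typescript
// Rough guard before caching a large checklist.
async function hasRoomFor(bytesNeeded: number): Promise<boolean> {
  // navigator.storage.estimate() is not available in some older browsers.
  if (typeof navigator.storage?.estimate !== 'function') {
    return true; // be optimistic when we cannot measure
  }
  const { usage = 0, quota = 0 } = await navigator.storage.estimate();
  return quota - usage > bytesNeeded;
}
```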
We store response values under their corresponding normalized request keys, in the following form:
- /v1/assets/ids/<assetId>
- /v1/3d/models/<modelId>/revisions/<revisionId>
- /v1/timeseries/datapoints/<timeseriesId>
The request keys are not actual API URLs but normalized keys. This lets us unify GET requests with query parameters and POST requests with body payloads under a single normalized representation, and it also solves the problem of request parameter ordering. We use version namespaces so we can handle future breaking changes to the API.
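To illustrate the idea (this is not our actual implementation, and the real mapping is API-specific), a normalization helper could merge GET query parameters and POST body fields into one parameter bag, turn resource ids into path segments, and sort whatever is left so that parameter order never changes the key:

```typescript
interface ApiRequest {
  method: 'GET' | 'POST';
  path: string;                             // e.g. '/v1/timeseries/datapoints'
  params?: Record<string, string | number>; // GET query parameters
  body?: Record<string, string | number>;   // POST payload
}

function normalizedKey(req: ApiRequest): string {
  // Merge whichever shape the request used into a single parameter bag.
  const merged = { ...(req.params ?? {}), ...(req.body ?? {}) };

  // Resource ids become path segments, matching the key layout shown above.
  if (merged.id !== undefined) {
    return `${req.path}/${merged.id}`;
  }

  // Otherwise, sort the parameters so their order can never produce two keys.
  const sorted = Object.keys(merged)
    .sort()
    .map((k) => `${k}=${merged[k]}`)
    .join('&');
  return sorted ? `${req.path}?${sorted}` : req.path;
}

// Both of these produce '/v1/timeseries/datapoints/42':
//   normalizedKey({ method: 'GET',  path: '/v1/timeseries/datapoints', params: { id: 42 } })
//   normalizedKey({ method: 'POST', path: '/v1/timeseries/datapoints', body: { id: 42 } })
```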
Using cached data
To query the API, we rely on SDKs that provide handy methods to reduce boilerplate. To use cached data, we decided to extend the SDKs’ functionality on the app side by wrapping their publicly available methods in a Proxy object.
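A simplified sketch of such a wrapper is shown below. The navigator.onLine check and the keyFor mapping from an SDK call to its normalized request key are illustrative assumptions, not our production code:

```typescript
import localforage from 'localforage';

// Wrap an SDK instance so that every public method transparently falls back
// to the local cache when the device is offline.
function withOfflineFallback<T extends object>(
  sdk: T,
  keyFor: (method: string, args: unknown[]) => string // maps a call to its cache key
): T {
  return new Proxy(sdk, {
    get(target, prop, receiver) {
      const original = Reflect.get(target, prop, receiver);
      if (typeof original !== 'function') {
        return original;
      }
      return async (...args: unknown[]) => {
        if (navigator.onLine) {
          // Online: call straight through to the SDK (and hence the API).
          return original.apply(target, args);
        }
        // Offline: serve the previously cached response from IndexedDB.
        return localforage.getItem(keyFor(String(prop), args));
      };
    },
  });
}
```

The keyFor mapping is where an SDK call that fetches, say, an asset by id gets translated into the /v1/assets/ids/<assetId> key described earlier.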
If the user is online, the normal SDK method calls the API to retrieve results. When connectivity becomes an issue, the cached data is returned from IndexedDB instead. This creates a seamless app experience for our users, no matter how many bars they have.
Modern web technologies come with a lot of great features. Service workers are one of them, and at Cognite they have helped us give users the ability to keep executing critical jobs even when connectivity is an issue.
Service workers aren’t perfect, though. In our next post, we’ll talk about some of the limitations and gotchas we encountered on our offline support journey, such as service worker scoping and domain redirects.