Fighting Ebola with JavaScript

Patricia Garcia explains how shrewd use of technology helped tackle an emergency situation in a developing country

Illustration by Ben Mounsey

In September 2014 I joined the software team of an NGO working at the intersection of tech and public health. By then the Ebola outbreak in West Africa had reached epidemic proportions, with thousands of cases reported. One month before, the outbreak had reached Nigeria, where the NGO had provided phones and a custom Android application that helped reduce the reporting time for suspected cases from 12 hours to almost nothing. Now it was time to bring that expertise and those tools to the most affected countries: Guinea, Sierra Leone and Liberia.

The challenges

The Ebola virus is non-airborne and contagion requires direct contact with the bodily fluids of an infected individual. To control the outbreak, infected persons must be isolated and everybody who has come into contact with them must be kept under observation for the 21-day incubation period. This requires the collection of an enormous amount of data, and it is critical that the information collected is reported promptly so sick persons can be treated and proper quarantine implemented.

This is a scenario where technology shines. As well as speeding up the transfer of data considerably, even a simple application based on web forms can have a positive impact. For example, making certain fields mandatory, or letting users choose from a set of options in a drop-down, can greatly improve the quality of the data collected.

However, while the technology solutions required are simple, implementing them in the context of a critical emergency situation in a developing country makes things much more challenging. Shipping early is not a nice-to-have but a must — there is no excuse for taking a long time to deliver a feature or not fixing a bug when human lives depend on it.

Even more importantly, internet connectivity is a scarce resource that can’t be counted on. Even when broadband internet is available, it is not unusual for it to stop working for hours or days. Designing the applications to be offline-first is a necessity.

Because of these constraints, we chose CouchDB as our central piece of software. CouchDB has been designed with distributed systems in mind, making it perfect to use in an offline context, where a local database is contained in each device. Offering an HTTP REST API, it became not only our database but our backend, too. On the frontend, PouchDB offers a JavaScript database that can store data locally while offline, then synchronise with the CouchDB backend when back online.
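A minimal sketch of that wiring (the database name and URL are placeholders, not from the original project; in the browser, `PouchDB` would simply be the global loaded from pouchdb.min.js, passed in here so the sketch stays self-contained):

```javascript
// Offline-first setup with PouchDB: writes always go to the local
// database first, so the app keeps working with no connection, and
// sync() pushes and pulls changes whenever the network allows.
function startSync(PouchDB, remoteUrl, onChange) {
  const local = new PouchDB('cases');    // stored locally in the browser
  const remote = new PouchDB(remoteUrl); // the central CouchDB
  // live: keep replicating as changes happen;
  // retry: back off and reconnect automatically when the connection drops.
  return local.sync(remote, { live: true, retry: true })
    .on('change', onChange)
    .on('error', err => console.error('sync error', err));
}
```

The application then reads and writes `local` exactly as if it were the only database; replication is a background concern.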

Scaling offline applications

With the combination of CouchDB and PouchDB we were able to move from zero to production in a week on two of our most important applications, and with a relatively simple and clean codebase. This worked very well in Nigeria, where the scale of the outbreak was relatively small. However, complexity grew as the amount of data and the number of users increased.

In offline applications with a high amount of data that needs to be accessed and modified by many users, two significant problems appear.

First, synchronising the remote database with the local one (especially the initial synchronisation) gets increasingly slower. On top of that, browsers have a limit on the amount of data that can be stored locally. For example, the newest Firefox and Internet Explorer have soft limits of 50MB and 10MB respectively, after which the user is asked to authorise the application to use more space, which is not an ideal experience. The most permissive browser is Chrome, which allows a single application to use up to 6.66 per cent of the total free space on the hard drive. However, this can still cause problems on low-cost mobile devices.

Lack of local space can have even more dramatic consequences. In general, when the available local space is full, the browser starts clearing out data based on an LRU (least recently used) policy. That means the data from the application that has gone longest without being used is deleted, losing any changes that haven’t yet been synchronised to the remote database.

The second major problem occurs when more than one user modifies the same document while their devices are offline. When the local databases synchronise with the remote one, CouchDB will detect both versions. As it has no way to know which is right, it will just choose one and mark the other as a conflict.
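That choice is not arbitrary: CouchDB picks the winner deterministically so that every replica agrees, roughly preferring the revision with the longer edit history and breaking ties by comparing the revision strings. A toy model of that rule in plain JavaScript (an illustration only, not CouchDB’s actual code):

```javascript
// Revisions look like "3-a1b2c3": an edit count plus a content hash.
// Winner: most edits first; on a tie, the higher-sorting rev string wins.
function pickWinner(revs) {
  return revs.slice().sort((a, b) => {
    const [na, ha] = a.split('-');
    const [nb, hb] = b.split('-');
    if (+na !== +nb) return +nb - +na;       // longer edit history first
    return hb < ha ? -1 : hb > ha ? 1 : 0;   // then higher hash first
  })[0];
}

// Two offline edits of the same doc produce two "2-…" branches:
pickWinner(['2-b9x', '2-a1f', '1-c44']); // → '2-b9x'
```

Because the rule depends only on the revisions themselves, every device that sees both branches picks the same winner; the loser is kept and flagged in the document’s `_conflicts` list until the application resolves it.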

Fortunately, most of the time these two problems can be addressed by knowing our application and our users. To solve the data issue, we need to ask ourselves what data must be locally available for the application to work. Often that is a relatively small dataset, saving us from the problems of maintaining a big local database.

Where some functionality requires a very large part of the database to be replicated locally, we must consider whether that functionality should be available at all times, or whether we can make it available only when access to the remote database is possible.
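PouchDB lets us replicate only a slice of the remote database rather than all of it. A sketch using a Mango selector (the `district` field and database names are invented for illustration; selector-based replication requires CouchDB 2.x or later):

```javascript
// Pull down only the documents a field worker actually needs — here,
// case records for a single district — instead of the whole database.
// The PouchDB constructor is injected to keep the sketch self-contained.
function replicateDistrict(PouchDB, remoteUrl, district) {
  const local = new PouchDB('cases');
  const remote = new PouchDB(remoteUrl);
  return local.replicate.from(remote, {
    selector: { type: 'person', district: district },
  });
}
```

Keeping the replicated set small both speeds up initial synchronisation and keeps the application well under the browser’s storage limits.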

In one of our applications we found that some users did require access to all the data at all times. However, these particular users were all working in an office. We realised what we really needed was not an offline-first application, but an offline-first architecture. We set up a local CouchDB database in each office, with the local and remote databases synchronising when the connection was working.
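On the CouchDB side, that kind of office-to-headquarters replication can be declared with a document in the office server’s `_replicator` database, which CouchDB restarts automatically after failures. A sketch (names and URLs are placeholders):

```json
{
  "_id": "office-to-hq",
  "source": "http://localhost:5984/cases",
  "target": "https://hq.example.org/cases",
  "continuous": true
}
```

Replication is one-directional, so a second document with `source` and `target` swapped makes the link bidirectional.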

Document conflicts can be resolved programmatically, but with the right data architecture we can avoid them altogether. The question to ask is: which data is going to be updated together, and which data is going to be updated separately?

Personal data about a suspected case will seldom be edited, but data about their possible symptoms will be added every day, often by different people. If we keep all the information in the same database document, conflicts could be quite common, but if we maintain a separate document for the personal data, and each day add a new document with the symptom information, much of the risk of conflict disappears.
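A sketch of that split in plain JavaScript (document shapes, field names and IDs are illustrative, not taken from the original project):

```javascript
// One rarely-edited document per person...
function personDoc(caseId, details) {
  return { _id: `person:${caseId}`, type: 'person', ...details };
}

// ...plus one small document per day of follow-up. The deterministic
// _id means two workers recording different observations create
// different documents, so there is nothing for CouchDB to reconcile
// against the personal record.
function symptomDoc(caseId, date, observer, symptoms) {
  return {
    _id: `symptoms:${caseId}:${date}:${observer}`,
    type: 'symptoms',
    caseId, date, observer, symptoms,
  };
}

const person = personDoc('c-017', { name: 'A. N. Other' });
const day3 = symptomDoc('c-017', '2014-11-02', 'chw-12', { fever: true });
```

Reconstructing a case’s full history is then a matter of querying all documents whose IDs share the `caseId` prefix.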

The aftermath

More than two years after the Ebola outbreak began, there is finally reason to celebrate. Although it is very difficult to say for sure that all cases have been identified, and therefore that the outbreak has ended completely, the epidemic is finally under control and people can return to their normal lives.

In my 15 months on the project I discovered what a big difference technology can make in a situation like that. I also learned that in emergency situations, you need to be pragmatic in your technical decisions.

Keeping in constant contact with the situation and your users is vital to make sure you understand the problem and their needs. Although we had team members spending time on the ground in affected countries, when we were working from our office in Berlin it could have been all too easy to lose sight of the context: an emergency situation in a developing country.

This is perhaps true in every industry, but it is especially true here. Without constant, detailed feedback from the field, you can deliver beautifully engineered solutions, but not useful ones.

A software developer with over 10 years’ experience, Patricia has a passion for working in open source and impact-driven projects

This article originally appeared in issue 280 of net magazine