Photo by Stephan Zabel/Getty Images

What We Learned Developing the Good News/Bad News Notifications

Some technical learnings about interactive notifications.

As we mentioned in our Round 4 post, we’re going to close down our experiments exploring the jobs report through good news and bad news. We’ll be continuing our experimentation with web notifications, and even with the jobs report; stay tuned for our next notification series.

The interactive notifications we built for the jobs report were the first of the five types of notifications we have now tested. We’ve experimented with these interactive notifications over the last four months. As we prepare to close down this particular experiment, here are two things we learned.

Build in more time to reach code completion before experiments.

One of the main technical challenges we encountered with interactive notifications was ensuring the delivery of notifications while also randomizing the action buttons. Originally, the action button positions were consistent across all of our interactive notifications. By randomizing their placement, we aimed to eliminate any bias that could affect each action button’s rate of interaction.

So we had to introduce “context”: an extra data store that persists between notifications. We generated a random 1 or 0, then stored it in the context object for use by later notifications.
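In sketch form, the approach looks something like the following. The function names and the plain-object context are illustrative stand-ins (our actual service worker used a persistent store, since workers can be terminated between events), but the logic is the same: generate the bit once, keep it in context, and use it to order the buttons for every notification in the series.

```javascript
// Pure helper: order the two action buttons based on a stored bit.
// flip is 0 or 1; 1 reverses the button order.
function orderActions(flip, actions) {
  return flip === 1 ? [actions[1], actions[0]] : actions.slice();
}

// The bit is generated once per series and kept in a "context" object
// so follow-up notifications use the same order as the first alert.
function getOrCreateFlip(ctx) {
  if (ctx.flip === undefined) {
    ctx.flip = Math.random() < 0.5 ? 0 : 1;
  }
  return ctx.flip;
}

const context = {};

const actions = [
  { action: 'good', title: 'Good news' },
  { action: 'bad', title: 'Bad news' },
];

const flip = getOrCreateFlip(context);
const ordered = orderActions(flip, actions);
// In the service worker, this would feed into something like:
// self.registration.showNotification('Jobs report', { actions: ordered });
```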

However, we found that randomizing the buttons had an adverse effect on the “show” rate of notifications on subscribers’ lockscreens. That is, for some of our subscribers, the initial alert to begin the Good News/Bad News series would simply not appear.

Our thoughts on why: the Service Worker API specification states that a browser should only check a worker for updates if the currently cached code is over 24 hours old. For the previous month’s jobs report, a significant number of users fell into a window where their service workers had not yet downloaded the new “context” functionality, and thus the alert failed.

This issue is difficult to avoid. Looking forward, our fix is to ensure that the code for our first notification is deployed and correct a full 24 hours before we send it. We prepared for the July jobs report with that in mind, and users received the alert successfully.

Introduce regression testing to the pre-experimentation routine.

Another technical issue was caused by a lack of regression testing before the experiment.

We have a small Node script that parses a spreadsheet containing the notification data, and a module update caused it to execute incorrectly. As a result, a line break we had entered between sentences in the spreadsheet, to make the notification easier to read, didn’t appear in the alert itself. Regression testing the script with the previous month’s data would have caught the error. However, as part of our staging protocol, we only tested with new data, after the jobs report data was released and shortly before sending the notification. Looking ahead, in cases where we are able to replicate experiments, the first step will be to reproduce the previous month’s experiment before introducing changes.

Interested in testing interactive notifications or have questions? Send us an email at

The Guardian Mobile Innovation Lab operates with the generous support of the John S. and James L. Knight Foundation.