What Are Users Thinking When They Dismiss Notifications?
Lessons learned from testing further actions and personalization in web notifications for the UK snap election.
After the UK and US elections of 2016, the Super Bowl and Olympics, we thought we were likely done with notifications experiments here at the Mobile Lab. But 2017 has held its share of surprises, and one came in the form of the UK snap election held on June 8.
We decided to run one (likely last) notifications experiment. We had experimented with data on the UK’s constituency votes once before, around the E.U. referendum in May 2016, and we also knew that the Guardian visuals team would be publishing an interactive, which we could work with as we did for our experiment during the US presidential election. For the snap election, we decided to experiment using some of the new image capabilities made available by Google to create a new type of web notification for Android phones. (Alastair Coote, a lab developer, recently wrote about the technical aspects of Google’s expanded image formats.)
We built two types of web notifications: a live data alert with a “switch view” feature and a constituency alert that updated a user when the vote was called for a specific voting area to which they had subscribed. Both are described in more detail below.
Two types of alerts and what they looked like:
We have tested live data notifications previously, both as web notifications and through the Guardian iOS and Android apps. For this experiment, we retained many features of the earlier notifications, including data visualizations within the notification and automatic updating. We also added a few new features:
First, we ran an alert for the national results, which contained live updating vote data. We ran it on the main Guardian domain, rather than on the Guardian Mobile Lab site, gdnmobilelab.com, off of which we had previously hosted our experiments with web notifications. This allowed us to serve a bigger audience and to make the experience more seamless. The live data itself came from feeds set up by the Guardian visuals team, who built the full live results page to which the alert linked.
We added a new feature, which we called a “switch view.” When a user tapped the action button (one of two) labeled “Latest Declared,” the notification would flip to a second visualization showing the most recently called constituencies in reverse chronological order. Tapping the button again, now labeled “Back,” returned the user to the initial view of the overall national tallies.
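In a service worker, this kind of toggle is typically handled in a `notificationclick` listener that checks `event.action` and re-renders the notification. The state transition itself can be sketched as a pure function; the names below are illustrative, not the lab’s actual code:

```typescript
// A minimal sketch (hypothetical names) of the "switch view" toggle logic.
// Each tap on the action button flips the notification between the
// overall-tally view and the latest-declared view, and relabels the button.

type View = "overall" | "latestDeclared";

interface NotificationState {
  view: View;
  actionLabel: string; // label shown on the switch-view action button
}

// The notification starts on the national tallies, offering "Latest Declared".
function initialState(): NotificationState {
  return { view: "overall", actionLabel: "Latest Declared" };
}

// Called when the user taps the switch-view action button.
function switchView(state: NotificationState): NotificationState {
  return state.view === "overall"
    ? { view: "latestDeclared", actionLabel: "Back" }
    : { view: "overall", actionLabel: "Latest Declared" };
}
```

In a real implementation, each transition would be followed by a call to `self.registration.showNotification(...)` with the visualization image for the new view.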
We also ran a second type of notification, which we called constituency alerts. (A constituency in the UK electoral system is a geographic area that forms a voting bloc, somewhere in size between a voting district and a state in US politics.)
Users who accessed the full live results page (on either desktop or mobile) could subscribe there to receive a separate notification when the result of a certain constituency was called. Users entered the post code or first few letters of the name of the constituency and tapped subscribe. When the constituency was called, they would receive a result notification on the platform on which they subscribed.
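The subscription flow described above amounts to a prefix match of the user’s input against constituency names and post codes. A minimal sketch, with illustrative data structures that are assumptions rather than the Guardian’s actual code:

```typescript
// Hypothetical sketch of matching a user's typed input against a
// constituency list, by post code or by the first few letters of the name.

interface Constituency {
  name: string;
  postCodes: string[]; // post code districts falling within the constituency
}

function matchConstituencies(
  input: string,
  all: Constituency[]
): Constituency[] {
  const q = input.trim().toLowerCase();
  if (q.length === 0) return []; // nothing typed yet: no suggestions
  return all.filter(
    (c) =>
      c.name.toLowerCase().startsWith(q) ||
      c.postCodes.some((p) => p.toLowerCase().startsWith(q))
  );
}
```

When a matched constituency’s result was called, the server would push the result notification to every subscriber of that constituency on the platform where they subscribed.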
What we wanted to know
For this experiment, we were looking for a few key insights to add to what we’d learned from previous experiments. First, we were looking for signals about the right level of alert customization by offering the constituency alerts. We were also looking for better ways to display data visualization in the national results alert. Alongside those, we wanted to know if people had an appetite for additional data-rich information in an alert that updated in real time and was easy to switch to.
For the constituency alerts in particular, because they were so localized, we wondered if constituency-level information in alerts would resonate and be useful, and if there was an appetite to subscribe to that level of customized information.
How they did
We sent alerts to 20,172 subscribers. Of those, 57% were in the UK, followed by the US and Australia, reflecting the three primary countries the Guardian serves. 11,077, or about 54%, were on mobile. About 20% of the total (4,030) subscribed to the constituency alerts, while around 75% (16,000) subscribed to the national live data alerts. That 16,000 figure is slightly soft: there were multiple ways to sign up for the national alert, and users may have received it on multiple devices, which could inflate the count. The rest of this writeup will focus on the mobile user experience.
Following the alerts, we sent a survey through a notification. It was completed by 1,881 respondents.
What we learned
More than half of our users engaged with the alerts at least once… For the national alerts specifically, more than half of the users proactively engaged with the alerts by switching the alert views or tapping the alert.
…And of those who engaged, the average number of taps per user was nearly 15. We updated the alert each time the data feed registered a change, which amounted to several hundred updates over the course of the vote tallying. Those users who did engage tapped an average of 14.8 times each, on an average of 5.6 alerts over the course of the event.
For this real-time event, more users signed up from the live results page than from any other method. This was a change. Historically, we have had three methods of recruiting users to participate in our web notification experiments: with an article written by the lab that goes up a few hours to a day ahead of the event; in a link in a related live blog; and via invitations on Twitter and other social media from Guardian US and other accounts. For this experiment, we were able to include subscription buttons on the Guardian’s live results page. We ended up gaining a majority of overall users from those buttons. It was also the only place users could sign up for constituency alerts, suggesting that perhaps keeping sign-up for live data alerts close to the data or event coverage itself is a strong recruiting channel on the day of the event.
Users used the ‘switch view’ functionality built into the national alert. We registered 31,171 taps on switch views, or about 2 per user, 98% of which were on mobile. In our survey, 83% of users who said they had switched views also found the feature useful, though some did note that, compared to a live broadcast on the BBC, the alerts “felt slow.” Users who did use the switch view had overall higher alert interaction rates, by about 10%, compared to other users.
Users told us why they closed notifications: they had the information they needed. We’ve asked participants in a few previous surveys why they closed their notifications, and their responses have started giving us a clearer look at their motivations for closing an alert. Of the survey respondents who answered this question, 84% said they closed notifications because “they were informed and didn’t need it anymore.”
Most of our national alert engagement came at the end of the votes being called, rather than at the beginning. Specifically, the vote totals for the last five seats to be reported, versus the first five seats, prompted the most engagement. Taps on the national alert updates for the final five seats accounted for 17% of all of the national alert updates that were interacted with, while taps on updates to the national alert for the first five seats accounted for 7%.
Local time likely played a role here: the first seats were called in the UK starting at around 10:30pm local time, relatively late in the evening when users may have been asleep, whereas the final seats were not called until the following afternoon.
We confirmed that adding the time the results were refreshed to these alerts, something users had told us previously they wanted, made the alert functionality clearer. After previous experiments, subscribers told us they would have liked to know how recently the results had been updated. We added language that said “seats declared as of…” with a timestamp as a way to indicate this. In the survey, 94% of users told us it was clear the alerts were live updating, and 45% said it was specifically because of the language we added.
Users are curious. They will press all the buttons. One somewhat extraneous observation, but one that we like: In building web notifications, one action button is always dedicated to accessing the browser’s settings, even though that button is unrelated to our experiment. People pressed it anyway. 70% of survey respondents who said that they had pressed it said they were just looking to see what it would do.
Longform responses to the survey yielded some trends. We’ve written about this before, but much of our analysis is made possible by the great work of Maass Media. On our behalf, the Maass Media team has experimented with applying an NLP algorithm to analyze the sentiment of the freeform survey responses from users. The results were interesting.
The majority of positive responses were about the efficacy of live alerts, particularly from users who said they would not have been able to access the data as easily without our notifications. The majority of the negative responses were about our users’ perception that our alerts may have caused their browser to crash. It’s unlikely that the alerts caused large-scale crashes; many other factors could have contributed to the performance issues people saw on their phones that night, including individual connectivity or receiving an unusually high number of alerts.
An added benefit of this NLP work is that it is reusable for similar experiments. For their analysis of the survey responses for this experiment, Maass Media reused the algorithm from a previous notification experiment, based on words they had already coded as having positive and negative sentiments, and re-applied it to these new responses. Having a reusable algorithm makes the upfront work of setting it up longer lasting, which may help newsrooms decide to take the plunge.
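The approach described, words pre-coded as positive or negative, then tallied per response, is a lexicon-based sentiment scorer. A minimal sketch follows; the word lists are illustrative stand-ins, not Maass Media’s actual lexicon or algorithm:

```typescript
// A toy lexicon-based sentiment scorer, sketched from the description above.
// The POSITIVE/NEGATIVE word sets are hypothetical examples only.

const POSITIVE = new Set(["useful", "easy", "great", "helpful", "clear"]);
const NEGATIVE = new Set(["crash", "slow", "annoying", "broken", "confusing"]);

type Sentiment = "positive" | "negative" | "neutral";

function scoreResponse(text: string): Sentiment {
  // Tokenize into lowercase words, then tally coded words.
  const words = text.toLowerCase().match(/[a-z']+/g) ?? [];
  let score = 0;
  for (const w of words) {
    if (POSITIVE.has(w)) score += 1;
    if (NEGATIVE.has(w)) score -= 1;
  }
  return score > 0 ? "positive" : score < 0 ? "negative" : "neutral";
}
```

Because the lexicon is just data, re-running the analysis on a new experiment’s responses is a matter of feeding in the new text, which is what makes this kind of setup reusable across experiments.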
That said, when asking for qualitative feedback from users, consider where you want to invite open-ended insights. We included more freeform “other” response options than we had in previous surveys, and were left with an unintended dilution of insights from a few questions. For larger events, the crispest surveys with the most actionable results pose required questions that offer specific options for feedback.
Constituency alerts provided a local look at the elections data. About 25% of our overall users signed up for a constituency alert, which told them when a specific constituency was called and with what result. 76% of survey respondents who said they signed up for a constituency alert found it useful, while 85% said they would definitely or maybe sign up for it again.
As mentioned above, this will likely be our last notification-specific experiment. We’ve learned a lot, and will be working to distill these learnings over the coming months as we experiment with other forms of mobile storytelling.
For now though, we’ll take a second to extol the benefits of increased personalization in notifications. As each of our notification experiments has shown, if you provide users with an easy sign-up for a notification subject, an equally easy way to opt out, and a functional layout, live and event-specific notifications are a useful and appealing way to connect users to real-time data and information. Furthermore, asking users to input specifically what they are looking for can be an important way to make them feel connected to their news organizations. (With only one sign-up point, over 4,000 users signed up for a constituency alert specific to a post code, a seriously granular level of information!)
As new technical capabilities arise, there will be better ways to directly prompt users to ask for what they want. We hope our experiments show that users will be ready if only we will meet them there.
The Guardian Mobile Innovation Lab operates with the generous support of the John S. and James L. Knight Foundation.