Faster is not always better: benefits of slower interaction with algorithmic systems.

Joon Sung Park
Published in ACM CSCW
6 min read · Dec 9, 2019

This blog post summarizes our work, by Joon Sung Park, Rick Barber, Alex Kirlik, and Karrie Karahalios, presented at the 22nd ACM Conference on Computer-Supported Cooperative Work and Social Computing (CSCW), which explores the benefits of slowness in how people interact with algorithmic systems.


The pace of human-computer interaction has long been a salient issue for both the users and the designers of complex computing systems. Studies have shown that shorter response times correlate with higher user satisfaction, productivity, and engagement. To this end, immense effort has gone into reducing system latency, and we see its impact in many of the systems and algorithms we interact with today. Search engines return query results in less than a second. Recommender systems sort through thousands of relevant items in no time to surface the most promising job candidates to recruiters and products to potential buyers. Matching algorithms instantaneously pair on-demand drivers with passengers, while automated content curators constantly suggest the newest eye-catching content.

However, there is a growing concern that this increasingly fast-paced digital environment might be toxic for our judgement-making abilities. We are quickly overwhelmed by the sheer quantity of information presented to us and robbed of the opportunity to reflect on our decisions. But if fast computing systems lead to productivity and engagement, slow computing systems may lead to reflection and serendipity. Now more than ever, as algorithms make deeply personal and consequential judgements ranging from who will be released on bail to who will receive preventive health care, end users as well as other stakeholders could benefit from an opportunity to reflect on, and make conscious assessments of, the judgements made in collaboration with computing systems and algorithms. The designers of these systems, in turn, need to design interactions with the intention of enabling and encouraging such processes.

In recent years, interest in slowing down technology has increased both inside and outside of academia. In her interview with Recode’s Kara Swisher, Google and Twitter veteran Nicole Wong called for a “slow food movement for the internet” while reminiscing about the era before behavioral advertising, when algorithms put their effort into curating content that users would find useful. Similarly, ever since Lars Hallnäs and Johan Redström first discussed the merits of slowness in their seminal article, “Slow Technology — Designing for Reflection,” scholars in human-computer interaction and design have reevaluated waiting time as a moment of reflection, mental rest, and a catalyst for serendipity (the “slow search” work presented by Jaime Teevan et al. of Microsoft is an excellent example of such effort). Proponents of slow technology today envision that slowness can bear more urgent, real-world consequences: encouraging deliberation when interpreting potentially biased judgements, thereby curbing the effects of technological bias, and pushing back against the behavioral advertising business model that optimizes for speed and engagement, often at the cost of quality and relevance in algorithmic outputs.

Our paper contributes to this line of work by focusing on the impact of an algorithm’s speed on how users incorporate the algorithm’s advice when making judgements in simple visual recognition tasks. Despite the increasing interest, slow technology has remained a work in progress, with some scholars pointing out that simply spending more time to reflect may not lead to useful insights. But in a series of three studies (Study 1, n=140; Study 2, n=200; Study 3, n=32), we found evidence that users are better at assessing the quality of an algorithm’s advice when the algorithm responds more slowly.

Jelly Bean Study.

Figure 1. The image of the jelly bean bottle presented to participants. Our algorithm (named ObjectRecognizer) displayed a loading sign during the estimation process, as shown on the right.

In our studies, participants were first presented with an image of a bottle full of jelly beans and asked to estimate the number of jelly beans in the bottle. In the first phase of the study, participants recorded their initial estimate. Afterwards, the interface presented them with an algorithmic prediction of the number of jelly beans in the jar. Participants then had the option to change their initial response and record an incentivized final response, for which they could earn a bonus payment if their answer was correct. Importantly, we varied both the response time and the quality of the algorithm that gave the advice. The algorithm’s response time varied from 1 second to 75 seconds in Study 1, whereas in Studies 2 and 3, the fast algorithm’s response time was 1 second and the slow algorithm’s response time was 45 seconds. Finally, in all three studies, the “good” algorithm was remarkably accurate, with only a 2% error rate, whereas the “bad” algorithm overestimated the number of jelly beans by 100%.
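To make the manipulation concrete, here is a minimal sketch of how advice like ours could be simulated. The function name, the uniform perturbation, and the sleep-based delay are assumptions for illustration, not the exact implementation used in the studies.

```python
import random
import time

def simulated_advice(true_count, quality, response_time_s):
    """Illustrative sketch (not the study's actual code): produce an
    algorithmic estimate for the jelly bean task after a fixed delay."""
    time.sleep(response_time_s)  # e.g., 1 s in the fast condition, 45 s in the slow one
    if quality == "good":
        # "good" condition: within roughly 2% of the true count
        return round(true_count * random.uniform(0.98, 1.02))
    else:
        # "bad" condition: overestimates the true count by about 100%
        return round(true_count * 2.0)

# Example: simulated_advice(500, "good", 45) returns a value near 500 after a 45-second wait.
```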

Given an image of a bottle containing around 500 jelly beans, almost all of our participants underestimated the count, on average by around 200. This meant that, in most cases, participants were better off fully adopting the advice of the good algorithm and ignoring the advice of the bad algorithm: a participant who guessed around 300 would be off by only about 10 beans after fully adopting the good algorithm’s estimate, but off by roughly 500 after adopting the bad algorithm’s estimate of about 1,000.

A slow algorithm improves users’ assessment of the algorithm’s accuracy.

Figure 2. Summary of participants’ degree of adherence to the algorithm’s advice. With the good-quality algorithm, participants moved their initial response further towards the algorithm’s suggestion when the algorithm was slower. With the bad-quality algorithm, participants moved their initial response further towards the algorithm’s prediction when the algorithm was faster.

Our findings suggest that participants were better at assessing the accuracy of the slower algorithm. In all three studies, when the algorithm’s accuracy was high (2% error rate), participants who received suggestions from a slower algorithm with a response time of 45 seconds changed their initial estimate to a number significantly closer to the algorithm’s estimate than participants who received advice from a faster algorithm with a response time of 1 second (p<0.02). Conversely, when the algorithm’s accuracy was low (100% overestimation), participants who received advice from the slower algorithm changed their initial estimate by a smaller amount than those who received advice from the faster algorithm (p~0.06).
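For readers who want to quantify this kind of adherence themselves, one common measure in the advice-taking literature is the weight of advice. The sketch below illustrates the idea; it is an assumption for illustration rather than the exact measure reported in the paper.

```python
def weight_of_advice(initial, final, advice):
    """Weight of Advice (WOA), a standard adherence measure in the
    judge-advisor literature: 0 means the advice was ignored,
    1 means it was fully adopted, and values in between mean partial adjustment.
    Shown for illustration; see the paper for the exact analysis."""
    if advice == initial:
        return None  # undefined when the advice matches the initial estimate
    return (final - initial) / (advice - initial)

# Example: an initial guess of 300, advice of 500, and a final answer of 450
# gives weight_of_advice(300, 450, 500) == 0.75.
```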

A slow algorithm helped users reflect on the task at hand and enabled them to be more thoughtful when interpreting the algorithm’s outputs.

Our exit interviews with participants revealed that the slowness of the algorithm gave them the opportunity to reflect on the algorithm’s estimation process. Moreover, the waiting time gave participants an opportunity to be cognizant of the process of making collaborative judgements with an algorithm, helping them avoid blindly trusting or distrusting it. For example, one of our participants who received advice from a fast, inaccurate algorithm noted:

I think once I saw the algorithm’s answer, I was more inclined to be like, that’s probably right. Whereas if maybe I had more time to think about my own answer, I would have felt more comfortable with mine and less inclined to just blindly adjust my answer compared to the algorithm’s answer… Because once I, once I made my guess and then I instantly see the algorithm’s then it’s like, oh, okay. (P20)

Similarly, another participant who received advice from a slow, inaccurate algorithm commented:

While I was waiting for the algorithm’s prediction, I kind of just was like thinking over my answer and… I decided like, okay, mine is more accurate before seeing the prediction and then after seeing the prediction, I think that time allowed me to I guess like reaffirm my prediction. (P27)

Overall, our work provides empirical evidence in a controlled setting that there can be benefits for users in slowing down the response time of algorithms. Certainly, slowing down our technology comes at a cost. But decisions that were once made by people, in domains ranging from the justice system to the employment market to the medical field, are increasingly being made by algorithms. Perhaps for some of these decisions, users’ satisfaction, productivity, and engagement — some of the most widely used dimensions for evaluating our technology — might not be the right optimization measures. If slowness offers us an opportunity to make more thoughtful and better decisions with algorithms, we may want to reimagine how we think about the pace at which we interact with our algorithmic systems.

Paper citation: Joon Sung Park, Rick Barber, Alex Kirlik, and Karrie Karahalios. 2019. A Slow Algorithm Improves Users’ Assessments of the Algorithm’s Accuracy. Proc. ACM Hum.-Comput. Interact. 3, CSCW, Article 102 (November 2019), 15 pages. https://doi.org/10.1145/3359204

If you have questions or comments about this study, please contact Joon Sung Park at jp19@illinois.edu.
