The Ripple Effect of Negativity Bias

A few takeaways for “User Research”

Bibhu Kalyan Nayak
bkcreatives
Oct 12, 2023

In the intricate world of scientific research, evaluating and selecting proposals is pivotal, determining which projects see the light of day. A recent study has unveiled a phenomenon that can significantly influence this process: the sway of negative information from peer reviewers.

The Power of Negativity Bias

Researchers Lane, Teplitskiy, Gray, Ranu, Menietti, Guinan, and Lakhani, in a field experiment on novel project evaluation, found that evaluators were more likely to lower their scores after exposure to negative scores from other reviewers [3]. This effect was especially pronounced when evaluators encountered lower scores from their peers than when they observed higher scores of similar magnitudes.

Moreover, this negativity didn’t just influence scores: after viewing lower scores from peers, evaluators dug deeper to find additional limitations, weaknesses, and problems with the proposal [3].

Implications for User Research

The findings of this study resonate deeply with the domain of user research. User researchers often rely on feedback from multiple evaluators or testers when assessing products or interfaces. The potential for negativity bias means that if early feedback is overly critical, subsequent evaluators may be pushed toward harsher assessments than they would otherwise have given.

This can have significant implications for product development. A product or feature might be sidelined or significantly altered on the strength of feedback that was more negative than it would have been in a different evaluation context. It underscores the importance of structuring user research to minimise the potential for such biases.

Towards More Objective User Research

To counteract the potential pitfalls of negativity bias in user research, several strategies can be employed:

1. Blind Evaluations: Just as in scientific proposal evaluations, user researchers can consider blind feedback processes, where evaluators or testers don’t see feedback from their peers.

2. Structured Feedback: Clear guidelines and training can help evaluators provide balanced feedback, covering both strengths and weaknesses.

3. Diverse Panels: Ensuring a diverse group of evaluators can help get a more balanced perspective, reducing the potential for groupthink or shared biases.
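To see why blind collection matters, here is a toy simulation in Python. It is purely illustrative, not the study's methodology: each simulated evaluator's reported score is pulled toward the visible average of earlier peer scores, and — mirroring the asymmetry the study reports — the downward pull is stronger than the upward one. The weights and score distribution are invented for the sketch.

```python
import random

random.seed(42)

def blind_scores(true_scores):
    """Blind process: each evaluator reports independently,
    with no peer scores visible."""
    return list(true_scores)

def sequential_scores(true_scores, down_weight=0.4, up_weight=0.1):
    """Sequential process: each evaluator sees the running average
    of prior reports. A below-own-score average pulls the report
    down more strongly than an above-own-score average pulls it up
    (illustrative weights, not estimates from the study)."""
    reported = []
    for s in true_scores:
        if reported:
            avg = sum(reported) / len(reported)
            if avg < s:
                s -= down_weight * (s - avg)  # strong pull toward lower peers
            else:
                s += up_weight * (avg - s)    # weak pull toward higher peers
        reported.append(s)
    return reported

# 1,000 evaluations drawn from an arbitrary score distribution
true_scores = [random.gauss(6.0, 1.5) for _ in range(1000)]
blind_avg = sum(blind_scores(true_scores)) / len(true_scores)
seq_avg = sum(sequential_scores(true_scores)) / len(true_scores)
print(f"blind mean: {blind_avg:.2f}, sequential mean: {seq_avg:.2f}")
```

Running this, the sequential mean drifts below the blind mean even though both processes start from identical underlying judgments — the asymmetric influence alone is enough to make a proposal look worse than its evaluators independently believed.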

Conclusion

While focused on scientific proposals, the study by Lane, Teplitskiy, and their team offers invaluable insights for the user research domain. Recognizing the potential for biases and structuring feedback processes accordingly can lead to more objective and useful results, driving better product development and innovation.

References:

[1] R. Smith, “Peer Review: A Flawed Process at the Heart of Science and Journals,” Journal of the Royal Society of Medicine, vol. 99, no. 4, pp. 178–182, Apr. 2006, doi: 10.1177/014107680609900414.

[2] M. Moussaïd, H. Brighton, and W. Gaissmaier, “The amplification of risk in experimental diffusion chains,” Proceedings of the National Academy of Sciences, vol. 112, no. 18, pp. 5631–5636, Apr. 2015, doi: 10.1073/pnas.1421883112.

[3] J. N. Lane et al., “Conservatism Gets Funded? A Field Experiment on the Role of Negative Information in Novel Project Evaluation,” Management Science, vol. 68, no. 6, pp. 4478–4495, Jun. 2022, doi: 10.1287/mnsc.2021.4107.
