Better Decisions by Design: Applied Behavioral Science

Artefact · Published in Artefact Stories · 14 min read · Sep 21, 2016

Every day, whether we realize it or not, our decisions are influenced by our past experience and by many details of the way our options are presented, biasing us one way or another. There is a wealth of research cataloging these biases, but it can be hard to find and apply. Take health: Most of us have faced a difficult healthcare decision, whether for ourselves or a loved one. It can be overwhelming. We have to balance a variety of objective factors like the odds of success with other considerations like side effects or costs. Research shows that accurate risk perception is challenging, and prior healthcare experiences heavily bias our attitudes.

Imagine you are diagnosed with cancer. For a while, the shock keeps you from remembering half of what your doctors tell you. It doesn’t matter how smart or educated you are — the clock is ticking and there’s no time to truly become an expert. You’re still learning about your condition when you realize you need to rearrange your life around the intense treatment schedule. Then you’re thrust into a series of difficult decisions you have no experience making. Which hospital and care team to choose? Surgery first or chemotherapy? Lumpectomy or mastectomy? Brand name or generic drugs? Some people face even tougher choices, like choosing between another round of chemotherapy with uncertain benefits or dying in more comfort. There are some data to inform these decisions, but there are two problems. First, we often have biased interpretations of the data. Second, there may not be a straightforward algorithm to arrive at the “correct” answer. This is the challenge we set out to solve, together with Group Health Research Institute, with project SIMBA:

Can we design a decision aid that gives us the information we need and counters our biases so that we end up more knowledgeable and confident in our preference?

Problem 1: We are biased

People can interpret the same number differently depending on its phrasing, its presentation as text or a graphic, or even the orientation of the graphic. Their judgment can be heavily biased by past experience or by what others have done. Researchers believe there are often useful evolutionary reasons for our biases, but they can result in inaccurate interpretations unless we carefully present information to avoid them. Doctors have the same biases despite their expertise. Decades of research in cognitive psychology and behavioral economics have revealed these biases and even shown ways to mitigate them. Unfortunately, this knowledge is not widely applied because results are nuanced and disparate; there is no single, simple set of rules to apply. Finding the right solutions and applying them correctly is hard, especially when the goal is not to influence people one way or another, but to facilitate an accurate, evidence-based understanding. Unlike influence, which often has a measurable goal (like a purchase), effective neutralization of biases may vary from person to person and is harder to assess.

How can we overcome these biases to give people an accurate understanding?

Problem 2: There is no algorithm

Data alone don’t account for each person’s unique decision-making factors and priorities, which may include emotional and experiential considerations that are hard to quantify. Every day, computers execute more than ¾ of the trades on Wall Street because the rules and goals are clear, but most people wouldn’t trust a computer to tell them which house to buy, what college major to choose, or which person to marry. Data and algorithms are increasingly assisting with decisions like these (think of the claims made by all the dating apps and websites), but in the end each person must decide what’s important. Let’s say you find out you have one of the BRCA gene mutations that make it more likely than not that you will develop breast cancer. Would you have your healthy breasts removed for your peace of mind? Would you trust a computer to tell you the answer?

How can we help people consider the experiential factors and their values along with the objective evidence?

The impact of better decisions

The benefits of being better informed and having more control in health decision making may seem like a foregone conclusion. However, our review of many studies suggests a mixed picture. Information and control do not reliably improve physical health (Griffin et al., 2004). However, patients with more information and control may be more satisfied, are better prepared to cope with difficult news and demanding treatments, and have better rapport with their care team, all of which can improve quality of life (Bieber et al., 2006). In our own research, cancer patients described the angst of decision making throughout their long treatments, where any relief is welcome.

To explore how to overcome bias and deal with mixed types of decision-making factors, we designed an online decision aid. It helps patients develop a more informed preference for treatment and prepare for a discussion with their care team, and it is optimized for tablets to make that discussion as easy as possible. We designed it to counter biases and to help patients consider quantitative and experiential factors together. Rigorous testing showed our design was better than a standard healthcare decision aid at creating a well-informed, confident preference. We built it for one specific decision in breast cancer treatment, but the research findings and design patterns we used could apply to many health decisions.

Example Decision: Should women add MRIs to their breast cancer surveillance plan?

After breast cancer treatment, women remain at much higher risk of developing new breast cancer for about 5 years. This phase is called surveillance, during which guidelines recommend annual mammograms to look carefully for any recurrence. Doctors and patients have started to use MRIs in addition to mammograms, but there isn’t firm evidence that doing so is better. In fact, in other situations MRI is known to incorrectly suspect cancer that isn’t really there, potentially leading to painful biopsies or even the beginning of unnecessary treatment. It’s also more expensive and currently not covered by as many insurance plans. Group Health Research Institute, a non-profit dedicated to improving health, has a grant to determine which women, if any, would benefit from surveillance MRIs. The project is called SIMBA, and they asked us to help women understand the results and develop a more informed preference to discuss with their doctor.

Challenge 1: How might we help patients overcome their bias?

GHRI analyzed thousands of patient records to enable personalized predictions of the outcome of an MRI or mammogram. In our decision aid, a woman enters information about her cancer and sees her chances of detecting cancer or ruling it out, along with the chances of an incorrect result (a ‘miss’ or a ‘false positive’). Giving people an accurate understanding of future risk is notoriously difficult. One of the classic findings in behavioral economics is that framing can change people’s preferences.
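To make the shape of those personalized predictions concrete, here is a minimal sketch of the underlying arithmetic. The risk, sensitivity, and specificity values below are illustrative assumptions, not SIMBA’s actual model, which GHRI fit to real patient records.

```python
def outcome_counts(risk, sensitivity, specificity, cohort=1000):
    """Expected screening outcomes per `cohort` women at a given cancer risk."""
    with_cancer = risk * cohort
    without_cancer = cohort - with_cancer
    return {
        "cancer detected": sensitivity * with_cancer,
        "cancer missed": (1 - sensitivity) * with_cancer,
        "correctly ruled out": specificity * without_cancer,
        "false positive": (1 - specificity) * without_cancer,
    }

# Illustrative numbers only: a 3% risk of recurrence, and a test that
# finds 90% of cancers and correctly clears 92% of healthy results.
for outcome, n in outcome_counts(0.03, 0.90, 0.92).items():
    print(f"{outcome}: {n:.0f} out of 1000 women like you")
```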

Framing

Framing boils down to the emphasis used in communicating outcomes. In one famous study, experimenters gave doctors and patients a hypothetical choice of two treatments. One group was told the proportion of people who survive after each treatment (“90 out of 100 survive”) and another group was given the proportion who die after each treatment (“10 out of 100 die”). The proportions matched so each group effectively got the same information, but there was a large difference in preference between the two groups (McNeil et al., 1982). This isn’t rational, but you can see how such a bias might come about. Being alive is our default; being dead is something to fear and avoid. The concepts hold unequal power, so their framing has unequal impact despite their numerical equivalence. On the one hand, a negative framing could lead women to overemphasize risk. On the other hand, positive framing may make patients feel less vulnerable. Research also suggests that framing effects are even more powerful when the person has a high degree of personal involvement with the topic, which is certainly true for breast cancer patients and their families (Cox & Cox, 2001).

Design implication: Using multiple frames can help overcome framing effects. We show users the predicted number of women like them who would get each of several different outcomes from the same total.
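One way to implement this, sketched below, is to generate complementary framings of the same underlying count so that neither the positive nor the negative frame dominates. The wording and numbers here are hypothetical, not SIMBA’s actual copy.

```python
def frames(events, total, event_word, non_event_word):
    """Render one proportion in both its positive and negative frame."""
    return [
        f"{events} out of {total} women {event_word}",
        f"{total - events} out of {total} women {non_event_word}",
    ]

# The classic McNeil et al. example, shown in both frames at once:
for line in frames(90, 100, "survive", "die"):
    print(line)
# -> 90 out of 100 women survive
# -> 10 out of 100 women die
```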

Social default bias

We found that some women had already formed a preference before they arrived at the SIMBA decision aid. The most common reason was that a friend or family member had already faced this choice and elected to have an MRI. A long history of research has shown that observing what others do has a strong influence on us, especially when we are uncertain and tired (Huh et al., 2014). For a woman who has struggled through exhausting chemotherapy and now faces a surveillance choice she may not have known existed, the choice of a friend or family member to have an MRI features prominently as a social default. This bias can be useful: imagine not knowing what food is safe to eat. How influential would it be to see someone else confidently pick something up and eat it? But in our case, it was important to counter the powerful effects of others’ choices because each woman’s cancer is different.

Design implication: Social defaults are less effective when users have more information. To counter this bias, we designed SIMBA to be much more comprehensive than most transactional apps and web sites, which are designed for efficiency. In fact, user feedback led us to add even more information.

Optimism bias

If a woman has witnessed someone else’s outcome from an MRI, she may think that result (whether negative or positive) is more likely for her as well. When she is then faced with the true, objective odds, optimism bias takes effect. If her guess turns out to be overly pessimistic, she is likely to adjust to an accurate assessment. If her guess was overly rosy, however, she is much less likely to adjust her estimate to reality (Sharot, 2011). Again, you can see how this could be a useful bias that keeps us positive and productive in the face of unwelcome news. Unfortunately, it also compromises our ability to make unencumbered choices.

Design implication: Exposing the potential “losses” associated with a choice can counter the optimism bias. Our descriptions of MRI and mammogram include the negative costs, experiences, and side effects before women see their calculated results.

Affective forecasting errors

Affective forecasting errors occur when we overestimate the emotional impact of future events. We focus on one particular aspect of our future selves and fail to consider other parts of our lives or how good we are at adapting over time (Ubel et al., 2003). Perhaps its most gut-wrenching medical impact is in the choice to end life. Many patients predict that they would not want to go on living if they suffered severe paralysis or disfigurement, or depended on a machine to survive. They do not consider that after several months they may emotionally adapt and find that being able to “think, communicate, create, and enjoy life” makes it worthwhile (Groopman & Hartzband, 2011). Breast cancer patients may similarly overestimate the impact of an outcome they want to avoid, such as a recurrence of cancer or a false screening result, failing to consider their adaptability, other aspects of their future lives, or other characteristics of the alternative options. Research suggests that stories from other people who have been through the experiences can help readers develop more realistic expectations about how they would feel.

Design implication: We included several short narratives from real women who made different choices to minimize affective forecasting errors. We also included videos explaining exactly what would be involved in each imaging exam and what it would feel like.

Challenge 2: How might we incorporate subjective considerations effectively?

Often principles like the ones listed above are applied to deliberately influence people toward a preferable choice as defined by policymakers or companies in control. For the SIMBA decision aid, our goal was not to persuade but to instill an accurate understanding of risks. To understand why we didn’t simply try to convince women of the better option for them, we have to return to the idea of a “rational” or algorithmic choice.

Many healthcare decisions don’t have a clear and objectively “correct” choice. In the SIMBA decision aid, women get personalized results showing their likelihood of detecting cancer, correctly ruling it out, or getting an incorrect result (a miss or a false positive). No test accurately detects every cancer and rejects every non-cancer, so the choice between MRI and mammogram will likely involve a tradeoff. Imagine one test falsely detects cancer 10 times more often than the other, leading to painful, time-consuming, expensive follow-up and emotional uncertainty before the cancer is ruled out. The tradeoff is that it correctly finds cancer 5 times more often. Is it worth it? What if it only correctly detects 1.5 times the number of cancers? 1.01 times? There are expert panels who make judgments on what is worthwhile, but so far those recommendations don’t exist for SIMBA. Also, those recommendations are usually made for groups, while we are presenting unique results for each woman.
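A worked example makes that tradeoff concrete. The baseline counts below are invented for illustration; they are not SIMBA’s predictions.

```python
# Invented baseline, per 1000 women: 5 false positives, 2 cancers found.
baseline = {"false_positives": 5, "detections": 2}

# The hypothetical alternative test from the text: 10x the false
# positives, but 5x the correct detections.
alternative = {"false_positives": 10 * baseline["false_positives"],
               "detections": 5 * baseline["detections"]}

extra_false_alarms = alternative["false_positives"] - baseline["false_positives"]  # 45
extra_detections = alternative["detections"] - baseline["detections"]              # 8

# About 5.6 extra false alarms for every extra cancer found. Whether
# that tradeoff is acceptable is a judgment, not an algorithm.
print(extra_false_alarms / extra_detections)
```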

Each woman’s priorities may be different, so it’s important to start from an accurate understanding of choice impacts.

Moreover, the numbers don’t tell the whole story. Choices may entail very different costs, side effects, and exam experiences that may be important for some women. SIMBA includes detailed descriptions, narratives, and videos of the two options followed by a side-by-side comparison of these factors.

Design implication: Each option includes a video explaining the experience and narratives with different perspectives on what was salient.

We found that the system cost, out-of-pocket cost, procedure complexity and discomfort, side effects, and availability all varied between the options. Women can select which ones are important to their decision making so that these factors appear in their final report alongside questions they’ve written and the calculated results. This combination of experiential and numeric results doesn’t ask a woman to quantify the importance of every issue so that a computer can provide an answer. Instead, it prepares her to work through the answer with her care team, giving everyone the information they need to discuss the results and the values through which they’re seen.

Design implication: Factors beyond the explicit test results can be highlighted for discussion by patients. The selections then appear along with any other written concerns in the same report with the quantitative results.
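As a rough illustration of how those pieces might travel together into the discussion, here is a minimal sketch of a report structure. The field names and example values are hypothetical, not SIMBA’s actual data model.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class DiscussionReport:
    calculated_results: Dict[str, float]                         # personalized outcome counts
    important_factors: List[str] = field(default_factory=list)   # factors the patient flags
    patient_questions: List[str] = field(default_factory=list)   # her written concerns

# Hypothetical contents: quantitative results sit alongside the
# experiential factors and questions the patient chose to highlight.
report = DiscussionReport(
    calculated_results={"cancer detected": 27, "false positive": 78},
)
report.important_factors.append("out-of-pocket cost")
report.patient_questions.append("How soon would I get the result?")
print(report)
```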

The Importance of Human-Centered Design

While scholarly research defined key project challenges, we think good experiences come from involving users directly throughout the process of uncovering challenges, ideating solutions, and refining a direction. We conducted discovery and design sessions with patients throughout the development of SIMBA and made many changes for improved usability and completeness.

Applying insights from scholarly research works best as a supplement to human-centered processes, not a replacement. In some cases, scholarly research insights may override feedback from smaller-scale qualitative design research sessions. It’s useful that research provides rigor and evidence, but it’s also critical to test for efficacy in each unique context. More often we find that the insights are complementary, providing different aspects of an effective design and allowing us to understand both the quantitative and qualitative results. We encourage anyone designing with insights from social science to follow a human-centered process.

Outcomes

We tested the SIMBA decision aid against the industry standard to determine whether we had overcome these challenges and helped women develop better preferences for themselves. We found a standard decision aid design that supported dozens of decisions, all in the same template. It was produced in alignment with guidelines from clinical researchers who have studied decision aids for years, and the content was all reviewed by practicing clinicians. Among the dozens of decisions was a remarkably similar one for breast cancer patients during routine screening, before any cancer is detected. This made it easy to create content as similar as possible to SIMBA while keeping the tone and structure of the standard’s many other decision aids. The result is a true comparison of alternative designs with very similar content. We randomly assigned 33 breast cancer patients to the standard and 33 to SIMBA in an online test.

We found that women who used SIMBA were much better prepared for a decision. They were more informed, performing better on a knowledge quiz and reporting significantly higher confidence in their knowledge of risks and benefits. The choice was significantly easier to make as well: they reported more clarity about the best choice for them, felt their choice better reflected what was important to them, and were more satisfied with it. In the field of clinical decision making, these outcomes are all part of significantly decreasing decisional conflict, a key measure when there is no objectively correct answer. Finally, despite the fact that SIMBA’s visual design was incomplete at the time of testing, it performed better in usability and user satisfaction than the standard.

Conclusion

The results make clear that insights from cognitive psychology and behavioral economics can be effectively applied in a human-centered design process. It’s a useful piece of evidence that good design accomplishes more than visual appeal; it is a key factor in achieving purpose. We believe the same insights and process would apply to many other healthcare decisions. Hopefully, it provides an instructive example for other purposes as well.

References

Bieber, C., Müller, K. G., Blumenstiel, K., Schneider, A., Richter, A., Wilke, S., … & Eich, W. (2006). Long-term effects of a shared decision-making intervention on physician–patient interaction and outcome in fibromyalgia: A qualitative and quantitative 1 year follow-up of a randomized controlled trial. Patient Education and Counseling, 63(3), 357–366.

Cox, D., & Cox, A. D. (2001). Communicating the consequences of early detection: The role of evidence and framing. Journal of Marketing, 65(3), 91–103.

Denes-Raj, V., Epstein, S., & Cole, J. (1995). The generality of the ratio-bias phenomenon. Personality and Social Psychology Bulletin, 21(10), 1083–1092.

Griffin, S. J., Kinmonth, A. L., Veltman, M. W., Gillard, S., Grant, J., & Stewart, M. (2004). Effect on health-related outcomes of interventions to alter the interaction between patients and practitioners: a systematic review of trials. The Annals of Family Medicine, 2(6), 595–608.

Groopman, J., & Hartzband, P. (2011). Your Medical Mind: How to Decide What Is Right for You. New York: Penguin Group.

Huh, Y. E., Vosgerau, J., & Morewedge, C. K. (2014). Social defaults: Observed choices become choice defaults. Journal of Consumer Research, 41(3), 746–760.

McNeil, B. J., Pauker, S. G., Sox Jr, H. C., & Tversky, A. (1982). On the elicitation of preferences for alternative therapies. New England Journal of Medicine, 306(21), 1259–1262.

Price, M., Cameron, R., & Butow, P. (2007). Communicating risk information: the influence of graphical display format on quantitative information perception — accuracy, comprehension and preferences. Patient Education and Counseling, 69(1), 121–128.

Sharot, T. (2011). The optimism bias. Current Biology, 21(23), 941–945.

Ubel, P. A., Loewenstein, G., & Jepson, C. (2003). Whose quality of life? A commentary exploring discrepancies between health state evaluations of patients and the general public. Quality of Life Research, 12(6), 599–607.

Zikmund-Fisher, B. J., Fagerlin, A., & Ubel, P. A. (2008). Improving understanding of adjuvant therapy options by using simpler risk graphics. Cancer, 113(12), 3382–3390.

Zikmund-Fisher, B. J., Ubel, P. A., Smith, D. M., Derry, H. A., McClure, J. B., Stark, A., … & Fagerlin, A. (2008). Communicating side effect risks in a tamoxifen prophylaxis decision aid: the debiasing influence of pictographs. Patient Education and Counseling, 73(2), 209–214.
