decision making in software engineering

Amy J. Ko
Published in Bits and Behavior
Dec 28, 2010

I just finished reading Jonah Lehrer’s How We Decide, a fascinating survey of recent (and not so recent) scholarly literature on decision making, behavioral economics, and neuroscience. The central thesis, or perhaps the central extract from the large body of work on the subject, is that while we often think of the rational parts of our minds as central to effective decision making, the emotional parts of our minds are in fact often more objective. Jonah argues, through an extensive overview of several hundred studies spanning economics, psychology, marketing, and medicine, that this is because our brains actually take in much more information than the working memory our rational mind depends on could ever hope to process. Therefore, our instincts (assuming they come from practiced, expert behavior) are often the better informed of the two. There are obviously many subtleties to this point (and Lehrer does a great job explaining them), but the bottom line comes down to a fairly straightforward-to-understand (though difficult-to-execute) rubric for decision making, sketched in code after the list:

  • If the problem is novel (to you), instincts will not suffice. This is a job not only for the rational part of your mind, but also for your creativity. Your instincts can often help with the subproblems of novel problems, but the rational mind must integrate them.
  • If the problem is routine and can be fully characterized with a few well-defined variables (or can be simplified in this way because the decision’s consequences matter little), let reason carefully assess and analyze the options.
  • If the problem is routine, but cannot be simplified to a few well-defined variables, use your rational mind to identify what information is and is not available, but let the emotional part of your mind process and analyze it. The instinct resulting from this is your mind’s expert judgement.
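
To make the rubric concrete, here is a toy sketch of it as a decision procedure (in Python, purely for illustration; the function name and strategy labels are my own, not Lehrer’s):

```python
def choose_strategy(novel_to_me: bool, few_clear_variables: bool) -> str:
    """Toy restatement of the decision rubric above; the labels are invented."""
    if novel_to_me:
        # Novel problems need creativity, with reason integrating whatever
        # instincts contribute to the familiar subproblems.
        return "reason and creativity, with instincts only on subproblems"
    if few_clear_variables:
        # Routine and simple (or low stakes): deliberate analysis decides.
        return "careful rational analysis of the options"
    # Routine but too complex to reduce to a few variables: reason inventories
    # the available information, then practiced intuition weighs it.
    return "reason gathers the information; trust expert intuition"
```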

These ideas about human decision making have many fascinating implications for software engineering. For one, the rubric above suggests that software engineers need a keen ability to know when a problem is new to them, so that they may apply different strategies to solving it. In many of the courses I’ve taught involving software engineering decisions, I’ve seen that novice engineers often view every problem as routine, assuming some prior solution is likely to solve the new one. Recognizing when this is not the case is a crucial skill. This might involve, for example, stepping away from a problem after trying to apply some known solutions, and using the rational mind to judge whether the problem has novel characteristics that deserve creative solutions.

Not only should software engineers be able to recognize a problem as novel, but they must also be able to judge whether it is novel to them or novel to everyone. Novice software developers quickly realize that most problems they encounter have been solved already; the challenge is thus not to create solutions, but to find existing solutions whose assumptions fit the problem at hand. The same is true at smaller scopes; for example, some problems may be new to an individual, but old hat to an organization. Novice software engineers should know when to consult coworkers for this expertise.

Both of the above abilities probably come with experience; one ability that may not, however, is knowing what you don’t know. For example, I routinely see experts struggle with bug triage decisions, making quick emotional judgements about a bug report’s legitimacy, fixability, or impact, and then ignoring information in the report that would disconfirm their judgement. What’s missing from these decisions is a process by which software engineers use their rational mind to carefully enumerate what information they don’t have, or what information is suspect. Fighting this confirmation bias and getting comfortable with uncertainty is a fundamental part of making effective decisions about complex problems.

In his discussion of aviation, Lehrer makes an interesting point about how computers can help pilots with such biases:

The reason planes are so safe, even though both the pilot and autopilot are fallible, is that both systems are constantly working to correct each other. Mistakes are fixed before they spiral out of control.

What would bug triage look like if humans and computers collaborated to judge and analyze information about bugs? Would it help if bug reports highlighted what information was missing, such as details about the defect’s impact on users, details about who those users are, and information about which components the bug concerns?
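
As a thought experiment (not a sketch of any existing bug tracker), here is what such a check might look like; the field names below are invented stand-ins for the kinds of information mentioned above:

```python
# Hypothetical sketch: surface what a bug report does *not* say, so a
# triager's quick judgement can be checked against the missing evidence.
# Field names are invented for illustration; no particular tracker is assumed.

WANTED_INFORMATION = {
    "user_impact": "details about the defect's impact on users",
    "affected_users": "details about who those users are",
    "components": "which components the bug concerns",
}

def missing_information(report: dict) -> list[str]:
    """Describe the requested information a report leaves blank or omits."""
    return [
        description
        for field, description in WANTED_INFORMATION.items()
        if not report.get(field)
    ]

# Example: a report that names a component but says nothing about users.
report = {"components": ["editor"], "user_impact": ""}
for gap in missing_information(report):
    print("Missing:", gap)
```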

Of course, bug triage is just one of many decision making processes in software engineering. Requirements engineering involves a great deal of tradeoff analysis, and knowledge about human decision making might improve designers’ confidence that what they are specifying will satisfy user needs. Debugging and other types of diagnostic activity share the same diversity of novel and routine problems and may benefit from the same kind of metacognitive strategies. Bug fixing is also rife with choices about the scope of a change and its implications for users, interacting systems, and other components. In all of these, it may be of the utmost importance to give novice software developers practice in recognizing and characterizing the problems they encounter, so that they may choose effective strategies to solve them.
