Brad Allenby
Jun 28, 2021

Solving Problems Rationally in an Irrational World

Engineering is in many ways structured, usually quantitative, problem solving. Whether it is writing a software module that must integrate smoothly with existing legacy systems, balancing the many different design objectives and constraints that go into modern automobile design, or trying to update, operate, and maintain an urban transportation network, engineers are given massive, often contradictory, inputs and expected to produce solutions that work in the real world. And because engineering is a quantitative domain, as many of the fuzzy, qualitative inputs as possible are quantified so they can be included in the relevant models.

Problems thus arise when some inputs are so complex they can’t be quantified, because engineers want numbers and predictable behaviors. This is especially true with what’s known as “wicked complexity” — complexity that arises in the psychological, social, or cultural domains simply because human beings are involved. You can’t optimize in the face of wicked complexity; rather, you have to “satisfice” — come up with solutions that make most parties happy enough. In struggling with how to do this, engineers are remarkably similar to neoclassical economists, who routinely posit a “rational economic actor” with preferences that are predictable, rational, and stable — an individual’s “utility function.” Of course, economists know that this is an oversimplification of real human behavior, but they, like engineers, need it to make their mathematical models tractable. It is thus helpful to consider some of the problems this oversimplification raises for economics.
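To make that distinction concrete, here is a minimal sketch in Python, with entirely hypothetical candidate designs, scores, weights, and stakeholder thresholds. Optimizing collapses every consideration into a single quantified utility and maximizes it; satisficing only asks whether a candidate clears each party’s “good enough” bar.

```python
# Minimal sketch contrasting optimizing with satisficing.
# All designs, scores, weights, and thresholds are hypothetical.

candidates = {
    "design_a": {"cost": 0.9, "safety": 0.6, "community": 0.3},
    "design_b": {"cost": 0.7, "safety": 0.8, "community": 0.7},
    "design_c": {"cost": 0.5, "safety": 0.9, "community": 0.8},
}

def optimize(candidates, weights):
    """Collapse every criterion into one utility number and maximize it."""
    def utility(scores):
        return sum(weights[k] * scores[k] for k in weights)
    return max(candidates, key=lambda name: utility(candidates[name]))

def satisfice(candidates, thresholds):
    """Return every design that is 'good enough' on every criterion,
    rather than the single 'best' one."""
    return [
        name for name, scores in candidates.items()
        if all(scores[k] >= t for k, t in thresholds.items())
    ]

# One "best" answer under one particular weighting...
print(optimize(candidates, {"cost": 0.5, "safety": 0.3, "community": 0.2}))
# ...versus the set of designs that clear every stakeholder's bar.
print(satisfice(candidates, {"cost": 0.4, "safety": 0.7, "community": 0.6}))
```

If nothing clears every bar, the satisficer returns an empty list, and the real work of renegotiating what counts as good enough happens outside the model entirely. That negotiation is exactly where wicked complexity lives.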

In many cases, actually, the neoclassical model of human economic behavior may be a reasonable oversimplification: it makes modeling possible, and it gives good-enough answers when society and the economy are fairly stable. It’s similar to Newtonian physics: we all know that applying Newton’s laws gives wrong answers because, in order to be as simple and understandable as they are, they ignore things such as friction. But, given the granularity we usually work at, that doesn’t matter; the models give answers that are good enough. Similarly, when deeper elements of the human psyche aren’t being challenged — identity or culture, say — people might in most cases display relatively stable preferences, and so the economists’ utility curve can be an appropriate, if always rough, approximation of behavior. And, for an engineer, first-order assumptions about preferences may be sufficient to make the trade-offs in design that every product or infrastructure project requires.

Problems with that approach arise under two circumstances. First, because it is easy to forget that one is working with a dramatic oversimplification that only holds within certain bounds, it is easy to push the model too far. In other words, the model works given its assumptions — but push into a space where those assumptions, such as the “rational economic actor,” no longer hold, and the model will fail. It is not that the model is flawed; it is that using it beyond its implicit boundaries is a failure mode. So, for example, one might design a levee system within the Florida Everglades region using traditional civil engineering techniques, and it would be fine. But if one is asked to “design the Everglades,” which brings in powerful and somewhat nonrational psychological, cultural, and institutional interests — from farmers who have lived in the area for generations, to Native American nations, to deep greens — traditional engineering approaches fail. The boundaries implied by the rational methods of traditional engineering have been breached, and wicked complexity unleashed.

Another problem is that many engineers — or economists, for that matter — tend to extend their mental models, and the underlying assumption of applied rationality, to things like political behavior, often without consciously recognizing that they are doing so. For example, in US politics Democrats often wonder, “How can poorer, less educated white males favor Trump when he is so obviously working against their economic self-interest?” The answer is that the question itself is a category error: it asks about economic self-interest, when what is actually involved in the decision is the wicked complexity of human psychology and identity. If I’m relatively OK, and I feel pretty much in control and comfortable in my world, I will vote my economic self-interest. But if I feel my identity is being profoundly challenged, then the heck with my economic interests; my struggle is far more profound. I am fighting for meaning, for my narrative, for the only thing that holds me together in a rapidly and unpredictably changing world.

It is notable that much of the context within which engineering and technology policy is made, especially at national and global levels, assumes enlightened self-interest (that is, applied rationality). The United Nations operates that way habitually — consider the list of metrics that flow down from any UN sustainability program. But of course, it is precisely in highly normative domains such as sustainability, with their high cultural content, that the oversimplification of applied rationality tends to fail. And that isn’t unusual — think of the average Russian, who shows an amazing ability to absorb economic hardship so long as they believe their country — their identity — shines among the civilizations of the world. Their life has meaning. You can take away someone’s food and usually get away with it. But take away their meaning, and you will fight a primal response. The Nazis weren’t beaten in Russia in WWII by utility curves. They were beaten by identity.

In today’s world, the boundaries of applied rationality are shifting, in part because identity and meaning, already deeply complex social, political, and cultural domains, are becoming both design spaces and battlespaces. Indeed, the Russians have run rings around the U.S. in the domain of weaponized narrative and manipulation of identity, successfully exacerbating deep divides in American society around issues such as race. Russia’s Channel One, RT, Sputnik, and other information outlets combine behavioral economics, lived experience, psychology, neuroscience, and postmodernist contempt for Enlightenment principles — at least the Voltaire/applied-rationality side of things; postmodernists tend to rock Rousseau — to create a potent acid with which to attack and dissolve pluralistic society. When combined with a strategic doctrine that holds that all aspects of an adversary’s civilization are fair game for attack, this added complexity challenges our existing mental models of problem solving.

This concerns engineers in two ways. The first is very applied, but not yet part of engineering practice or education: as information systems become part of most engineered products, processes, and infrastructure, engineers need to understand that cybersecurity is not incidental but core to good engineering. Second, while existing engineering models and quantitative problem-solving tools remain useful, and will be adequate for many tasks, the cultural and geopolitical landscape is shifting, and wicked complexity and nonrationality are becoming more powerful forces in a rapidly changing world.

This requires different ways of thinking, of teaching, and of professional practice. But at least there is a lodestone that has been effective for centuries, and that can continue to guide the technologist and the engineer. For the real test of an engineered system remains the same: does it work?


Brad Allenby, J.D., Ph.D., is President’s Professor of Engineering, and Lincoln Professor of Engineering and Ethics, at Arizona State University.