
Allowing ourselves to be puzzled

Towards improving our understanding of the world around us through puzzlement, empathy and conversation.

Judith
Published in 1789 Innovations
Apr 28, 2020

This article is an invitation to discuss. I offer an account of how I think puzzlement is key to shifting mental models. For now, this account aims at logical coherence: its conclusion, namely that puzzlement is required for mental models to shift and that empathy and conversation can help achieve it, has, to my knowledge, not yet been tested empirically.

Puzzlement allows us to improve our understanding of the world

Our ideas, beliefs and convictions about the world influence what we perceive, what we think and how we act. When something unexpected happens, we do not automatically shift these ideas. Instead, we sometimes subconsciously bend reality to match them. However, an unexpected event can serve to make our ideas about the world explicit, if we are ready to be surprised and puzzled, and can help us improve upon our convictions and ideas to gain a better understanding of the world. The central thesis of this piece is this: Noticing puzzlement is required if we want mental models to shift.

Our ideas about the world

The concept ‘mental model’ serves, here, as a tool to systematically describe our ideas about the world — “an individual’s beliefs, values, and assumptions” (Groesser 2012, 2195). Mental models can be understood as (flawed) representations of reality that we carry around in our heads and which we “use to interact with the world” around us (Jones et al. 2011).

A mental model is flawed as, being a model, it can only ever be an incomplete copy of reality: As system scientist Jay W. Forrester points out, one does not have the entirety of reality in one’s head, “a city or a government, or a country”. Instead, there are only “selected concepts and relationships”, which are used to “represent the real system”. (Forrester 1971, 213). In this way, mental models are to reality what a map is to a landscape: They represent the world without reproducing the whole of reality with all its intricacies.

I understand mental models to be made up of assumptions about relationships between entities in the real world. For instance, a mental model might comprise the assumption that people always act in such ways as to increase their personal gain and reduce their personal loss. Mental models and the assumptions they are made up of are often unconscious to those who apply them: We do not think about them, because we are not aware that we have these assumptions and apply them in our actions and decisions, often not even that they are assumptions. If the assumption about people always acting to increase their personal gain is challenged, often we say: ‘People just are like that.’

“All models are wrong, but some are useful”

Statistician George Box famously claims that “all models are wrong, but some are useful” (Box 1979; see also Box 1976, 792). In the case of mental models, they are wrong because they are incomplete and often inconsistent (see Jones et al. 2011). However, they are useful, it seems, because they provide us with schemes to interpret incoming information and to act on it. We do not have to think consciously about everything that happens around us all the time, but can perceive, decide and act on many issues ‘on autopilot’.

Psychologist Daniel Kahneman calls this System 1 thinking: “System 1 operates automatically and quickly, with little or no effort, and no sense of voluntary control.” (Kahneman 2011, 20) This is the case, for example, when we collect information by looking at road signs or glancing at a watch. In these situations, if we have learned to read the alphabet or learned to tell time from a watch, respectively, we cannot help ourselves: Seeing is knowing. In contrast, System 2 thinking requires attention and is, accordingly, disrupted when attention is taken away. System 2 kicks in, for instance, when you have a complex computation to solve, or when I try to compose this article.

Mental models are useful as they save time and energy. When a person smiles and waves at me in the street, I do not have to pause and think strenuously about what they might want to tell me or pass before my conscious mind’s eye all conceivable possibilities. Based on a mental model that contains the assumption ‘When people that you know smile and wave at you in the street, they want to say hello’, I can react instantly and wave back — and am free to use my mental time and energy for more complex tasks.

Incoming information and mental model: a circular relationship

Incoming information, as perceptions, can be understood to be filtered through our mental models. In this way, mental models reduce ambiguity. In language philosophy, ambiguity can be defined as “a property enjoyed by signs that bear multiple (legitimate) interpretations” (Sennet 2016). In colloquial use, the term is also used to signify a property of situations that can be interpreted in various ways. By applying my assumption that people want to say hello when waving at me, perceived ambiguity is reduced.

This is relevant beyond mere perception: It also affects our decision-making and action. Decision theorists Amos Tversky and Daniel Kahneman find that the representation, or framing, of a specific problem influences the preferences of individuals (Tversky/ Kahneman 1981, 457). For example, my neighbor might offer to help me fix my leaking bathroom sink. I might interpret my neighbor’s offer as an act of kindness and gladly accept. Or I might understand it as an act of self-interested calculation, aiming at enlisting my help in turn next time he wants to refurbish his apartment. Depending on the representation, I might decide differently how to react to my neighbor’s offer.

Incoming information, it seems, is not only filtered by mental models but also serves to confirm existing models. Psychology identifies an effect called confirmation bias (Nickerson 1998, 175): We subconsciously select and filter information, ignoring those parts that contradict our convictions, and then validate our mental models with those parts which confirm them. For instance, if I believe that people act in such ways as to increase their personal gain, I might assume that my neighbor acts out of self-interest. Possibly, I do not consider that it might be because he wants to help. And accordingly, my neighbor’s offer serves as a confirmation of my assumption, instead of allowing me to broaden my mind to think about other ways to understand his offer.
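As a playful illustration (my own sketch, not an empirical model from the literature), the filter-and-confirm loop described above can be put into a few lines of Python: observations that fit the belief are counted as confirmation, while the rest never register.

```python
# Toy sketch of confirmation bias as a filter. Purely illustrative:
# the function name, labels and values are invented for this example.

def confirming_evidence(belief, observations):
    """Count only the observations that fit the belief; ignore the rest."""
    confirmed = 0
    for interpretation in observations:
        if interpretation == belief:  # the filter: keep what fits the model
            confirmed += 1
        # contradicting interpretations are silently dropped
    return confirmed

# The neighbor's offers, read through the belief 'people act out of self-interest':
observations = ["self-interest", "kindness", "self-interest", "kindness"]
print(confirming_evidence("self-interest", observations))  # 2
```

However many acts of kindness occur, the belief only ever accumulates support, because contradicting observations never make it past the filter.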

An external trigger is not enough to change our convictions

The confirmation bias effect indicates that it can take more than an unexpected external trigger — such as my neighbor’s offer to help me fix my sink — for us to start questioning our mental models. It is easy for us, it seems, to make reality fit our preconceived ideas. Michael Butter, a conspiracy theory researcher, recently argued that the current situation arising around Covid-19, instead of causing people to reflect upon their convictions, serves as “a new brick in an existing set of building blocks” (my translation; the German original is quoted in Spiewak 2020). For instance, if someone was convinced that politicians exclusively sought to increase their own power, chances are this someone will understand political decisions around Covid-19 as attempts by politicians to fan hysteria in order to be granted more powers by the people, and not as attempts to protect individuals and society.

Applying the same mental model over and over again, to every new situation that comes up, can be problematic: If social reality is defined by volatility, uncertainty, complexity and ambiguity (VUCA), it does not necessarily conform with all mental models that we built on past experiences and successfully applied in the past. Accordingly, the decisions and actions we take on the basis of these models would fail to take a changed reality into account. Then, it might be useful in some situations to be able to step out of the — usually very useful — mental models that we have built for ourselves.

But how can mental models be shifted?

Puzzlement: Accepting, not reducing ambiguity

Psychologist and economist Chris Argyris tells us that “people can be taught how to recognize the reasoning they use when they design and implement their actions” (Argyris 1991, 11). Argyris calls this double-loop learning. While single-loop learning involves acquiring the ability to solve problems, double-loop learning allows us to question how we define problems in the first place. For example, when I notice that I have run out of milk, single-loop learning tells me that I can go to the supermarket to buy more. Double-loop learning, however, allows me to reflect on whether it makes sense to buy milk at all. Maybe I do not drink milk in my coffee, and I will not have guests drinking coffee with me in the foreseeable future. In that case, applying the single loop might end with milk spoiling in my fridge because I do not actually need or consume it.
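The milk example can be caricatured in a short Python sketch (again my own illustration; the function names and return strings are invented): single-loop learning solves the problem as given, while double-loop learning first questions whether the problem is defined correctly.

```python
# Illustrative sketch of single- vs. double-loop learning (after Argyris).
# All names and strings here are made up for this example.

def single_loop(fridge):
    """Solve the problem as defined: out of milk means buy milk."""
    if "milk" not in fridge:
        return "go to the supermarket and buy milk"
    return "do nothing"

def double_loop(fridge, habits):
    """First question the problem definition: do I need milk at all?"""
    if "drinks milk" not in habits:
        return "stop restocking milk"   # the problem was mis-defined
    return single_loop(fridge)          # otherwise, fall back to the single loop

print(double_loop(fridge=[], habits=[]))               # stop restocking milk
print(double_loop(fridge=[], habits=["drinks milk"]))  # go to the supermarket and buy milk
```

The outer loop does not solve the problem faster; it decides whether this is the problem worth solving at all.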

How, then, do I move from autopilot to conscious awareness of my mental models? The hypothesis is that puzzlement is key.

Puzzlement, in this context, is a mental state which arises when I notice that incoming information does not fit perfectly with the expectations I formed based on my mental model. For instance, if I assumed that people always act in such ways as to increase their personal gain, I would not expect my neighbor to offer me help with my leaking sink. When incoming information does not conform with my expectations — i.e., when my neighbor actually does offer me his help — I have (at least) two options. Path number one: I can make additional assumptions in order to fit reality to my mental model. For example, I could assume that my neighbor probably wants to refurbish his apartment soon and plans to enlist my help, in turn, for this endeavor. Or, path number two: I can critically reflect upon my mental model, identify assumptions that do not seem to conform with the incoming information, and consider alternatives that might allow me to make better sense of this information.

The point is that there is a choice to make. It seems that I am not an automaton, doomed to make the same mistakes over and over again once a mental model has been established in my mind. Path number two allows me to be conscious of ambiguity, and to take it into account, when I am confronted with it. It is important to note that puzzlement is not always fun. It requires time and energy. By intervening consciously before my mental model can take hold, I put myself in a position in which one of my convictions is up for revision. This can be scary and frustrating, especially when I have applied this conviction in past decisions. But it might also allow me to shift my mental model in such a way that it becomes a better map of the landscape of reality and, in turn, to make better decisions. The model can never be perfect, but it can improve.

How, then, do we shift to path number two?

Towards learning how to puzzle: empathy and conversation

It seems that one important feature of mental models ‘protects’ them against puzzlement: the fact that their bearers are usually not conscious of them. However, assuming that people are, sometimes, capable of shifting their mental models, there must be ways to render mental models conscious and explicit to ourselves. Drawing from my own experience, I posit that (at least) two tools can help:

  • The first tool is cognitive empathy, “the ability to take the mental perspective of others, allowing one to make inferences about their mental or emotional states” (Cox et al. 2012, 727). Cognitive empathy allows me to try and understand others’ actions and decisions by thinking about which mental models they might have applied. This, in turn, provides a basis to outline differences between the others’ mental models and my own. This differentiation allows me to spot assumptions that are part of my own mental model.
  • The second tool is conversation or dialogue, in the sense of an exchange of ideas between two or more people. In dialogue, every participant who states a claim can be asked to justify this claim. In this way, assumptions can be unveiled which I took, beforehand, as matters of course. For example, take the assumption that people always act in such ways as to increase their personal gain. It might be that I, subconsciously, assume this to be true and that my dialogue partner does not. When talking about the episode concerning my neighbor and the sink, our contradictory assumptions might become explicit when they clash. For instance, I might claim, ‘people just are like that’, and my partner asks, ‘Why do you think so?’. In this joint endeavor, we can then try to get at the best explanation for all the incoming information, avoiding, if we are lucky, dogmatism and inconsistency.

Neither cognitive empathy nor this somewhat Socratic approach to conversation guarantees that we open our mind to allow for puzzlement. It is conceivable also that we react to the ambiguity thus introduced by becoming defensive: not thinking about a better way to represent reality, but instead putting our energy into making reality fit the old model. But these tools offer starting points that can help us navigate reality better by improving upon the mental models we have in our heads.

Notes

Argyris, C. (1991): “Teaching Smart People How to Learn”, Harvard Business Review 4(2), 4–15.

Box, G. E. P. (1976): “Science and statistics”, Journal of the American Statistical Association 71(356): 791–799.

Box, G. E. P. (1979): “Robustness in the strategy of scientific model building”, in: Launer, R. L./ Wilkinson, G. N. (eds.): Robustness in statistics, New York: Academic Press, 201–236.

Cox, C. L./ Uddin, L. Q./ Di Martino, A./ Castellanos, F. X./ Milham, M. P./ Kelly, C. (2012): “The balance between feeling and knowing: affective and cognitive empathy are reflected in the brain’s intrinsic functional dynamics”, Social Cognitive and Affective Neuroscience 7(6), 727–737.

Collins, A./ Gentner, D. (1987): “How people construct mental models”, in: Holland, D./ Quinn, N. (eds.) Cultural models in language and thought, Cambridge: Cambridge University Press, 243–265.

Forrester, J. W. (1971): “Counterintuitive behavior of social systems.” in: Collected Papers of J. W. Forrester, Cambridge: Wright-Allen Press, 211–244.

Groesser, S. N. (2012): “Mental models of dynamic systems”, in: Seel, N. M. (ed.): The encyclopedia of sciences of learning 5, 2195–2200. New York: Springer.

Jones, N. A./ Ross, H./ Lynam, T./ Perez, P./ Leitch, A. (2011): “Mental Models: An interdisciplinary synthesis of theory and methods”, Ecology and Society 16(1), 46.

Kahneman, D. (2011): Thinking, Fast and Slow, London: Penguin.

Maslow, A. (1970): The Psychology of Science, Chicago: Chicago Gateway.

Nickerson, R. S. (1998): “Confirmation bias: A ubiquitous phenomenon in many guises”, Review of General Psychology 2(2), 175–220.

Sennet, A. (2016): “Ambiguity”, in: Zalta, E. N. (ed.) The Stanford Encyclopedia of Philosophy, available at: <https://plato.stanford.edu/archives/spr2016/entries/ambiguity/>.

Spiewak, M. (2020): “Coronavirus: ‘Glauben Sie nicht jedem, der einen Doktortitel hat’”, Zeit Online, April 1, 2020, available at: https://www.zeit.de/wissen/gesundheit/2020-03/coronavirus-verschwoerungstheorien-entstehung-angst-ungewissheit

Tversky, A./ Kahneman, D. (1981): “The Framing of Decisions and the Psychology of Choice”, Science, New Series, 211(4481), 453–458.

Wimmer, R. (2012): “Die neuere Systemtheorie und ihre Implikationen für das Verständnis von Organisation, Führung und Management”, in: Rüegg-Stürm, J./ Bieger, T. (eds.): Unternehmerisches Management. Herausforderungen und Perspektiven, Göttingen: Haupt, 1–65.

1 Translated by JK; German original: „Das Coronavirus dient als neues Klötzchen eines bestehenden Baukastens.“
