Mind Projection Fallacy

Harpreet Matharoo
May 3, 2023 · 4 min read


As intelligent beings, we seek to comprehend the world around us. Our transformation from primitive beings to what we are today has been shaped by persistent efforts to make sense of our environment, and each successful attempt pushed us further along. At first, these attempts were grounded in immediate reality, tied to the mundane needs of our primitive ancestors. With time, we ventured toward a more abstract understanding of our world. We created elaborate models inspired by our sense experiences, tested them rigorously against collected data, and revised them when they disagreed with observation. But when the data agreed with a hypothesis, we went a step further and accepted the model behind that hypothesis as the true nature of reality. The physicist E. T. Jaynes, also a staunch advocate of the Bayesian approach to probability, warned us against exactly this conflation [1, 2]. He introduced the notion of the mind projection fallacy: the failure to distinguish between our mind’s imagined model of the world and reality itself. Generally, we observe two complementary forms of this fallacy [2]. They can be expressed informally as follows:

- my imagined model of nature ↔ the nature of reality
- my ignorance ↔ the indeterminacy of nature

The former form of the fallacy is sometimes called the positive form. It is also the more obvious of the two and easier to understand. Jaynes elaborated on it with the example of quantum mechanics, but as a computationalist, I will take the example of simulating fluid flow instead. Many years ago, I was discussing with a senior engineer the importance of validating simulation results against experimental results. The problem of interest was simulating the distribution of temperature in an industrial furnace. The engineer maintained a firm stance that experimental results were unnecessary and that the model produced for this simulation would be correct because it was based on the governing equations of fluid flow. In this case, they committed the mind projection fallacy by believing that applying the governing equations to the digital twin of the furnace is the same as the physics at play in the real furnace. In practice, factors left out of the model matter: the surface finish of the material, for example, can greatly affect fluid flow and heat transfer. It is important to validate simulation results against experimental results so that our belief in the model is actually supported by evidence. And even when the simulation agrees with experiment, there remains a small chance that the agreement is coincidental rather than evidence of a correct model.
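The validation step described above can be sketched in a few lines. Everything here is illustrative: the probe readings, the tolerance, and the function names are assumptions invented for the example, not output from any real solver or furnace study.

```python
# Hypothetical sketch: comparing furnace-simulation output against
# measured temperatures before trusting the model. All readings and
# the tolerance below are made-up illustrative numbers.

def rms_error(simulated, measured):
    """Root-mean-square discrepancy between paired readings."""
    assert len(simulated) == len(measured)
    return (sum((s - m) ** 2 for s, m in zip(simulated, measured))
            / len(simulated)) ** 0.5

# Thermocouple readings at a few probe locations (degrees C).
measured = [412.0, 508.5, 611.2, 695.0]
# What the digital twin predicts at the same locations.
simulated = [405.3, 515.1, 602.8, 701.4]

error = rms_error(simulated, measured)
tolerance = 15.0  # acceptance threshold chosen for this study
print(f"RMS error: {error:.1f} C, within tolerance: {error < tolerance}")
```

The point is not the arithmetic but the discipline: the model earns trust only by surviving a comparison against measurements it did not produce.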

The second form of the fallacy is a lot more elusive and interesting. In this case, we imagine our ignorance to be a sign of the indeterminate nature of reality. The best example to demonstrate this is randomness. Suppose you have an urn containing 10 red and 10 white balls. You can draw a ball from the urn but cannot see it until it has been drawn. Since there are as many white balls as red ones, you are equally likely to sample either colour. Now suppose you draw a white ball and put it back. What happens if you draw again? Are you more likely to draw white? Quite possibly, since the returned ball now sits on top of the others. To get around this problem, we shake the urn vigorously, hoping to introduce enough randomness into its composition that it returns to the initial state where both colours are equally likely. This assumption is problematic, however. Here, we assume the mechanism of shaking is so complex that it makes the urn’s state indeterminate. But this perceived indeterminacy is due to our own ignorance. Perhaps the shaking follows some mechanism that, if we understood it, would let us trace the urn to a definite state and predict with greater certainty which ball will be drawn. Instead, we introduced the concept of randomness and attributed it to the indeterminacy of nature when our understanding fell short. As a result, we fall prey to the mind projection fallacy again.
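The urn argument can be made concrete with a toy simulation, a minimal sketch assuming we model the "shake" as a seeded pseudorandom shuffle. An observer who knows the mechanism's internal state (the seed) can predict every draw; an observer who does not sees draws that look random. The randomness lives in the observer's ignorance, not in the urn.

```python
import random

def shake_and_draw(urn, seed):
    """'Shake' the urn with a deterministic mechanism, then draw the top ball."""
    rng = random.Random(seed)  # the mechanism's state, fully determined by the seed
    shuffled = urn[:]
    rng.shuffle(shuffled)      # the "complex" shaking
    return shuffled[0]

urn = ["white"] * 10 + ["red"] * 10

# An observer who knows the seed can predict the draw every time.
first = shake_and_draw(urn, seed=42)
assert all(shake_and_draw(urn, seed=42) == first for _ in range(5))

# An observer ignorant of the seed sees roughly 50/50 outcomes.
draws = [shake_and_draw(urn, seed=s) for s in range(1000)]
print("fraction white:", draws.count("white") / len(draws))
```

Nothing indeterminate happens in this program; the apparent 50/50 "randomness" is entirely a description of what the second observer does not know.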

In conclusion, we have an inherent tendency to treat the models imagined by our mind as physical attributes of nature. There may be some hint of truth in a model, but in some cases there is no way to verify it against nature exactly. By recognizing and being aware of this fallacy, we can better understand the limitations of our knowledge and avoid making unwarranted assumptions about the nature of reality. We become less prone to over-committing to a model that “makes sense” based on our experiences but has not been tested and validated rigorously. Employing plausible reasoning through the Bayesian approach can enhance our ability to navigate these limitations. This method allows us to assign a subjective degree of plausibility to our beliefs and update them in light of new evidence, helping us iteratively refine our understanding and avoid overconfidence in our assumptions.
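The Bayesian updating described above can be sketched with the urn itself. Assume, purely for illustration, two competing hypotheses about the urn's composition and a uniform prior; each observed draw then shifts our degree of belief via Bayes' rule.

```python
# Minimal sketch of Bayesian belief updating. The hypotheses, prior,
# and likelihoods are illustrative assumptions, not from the article.

def bayes_update(prior, likelihood):
    """Posterior P(H | datum) from prior P(H) and P(datum | H)."""
    unnormalized = {h: prior[h] * likelihood[h] for h in prior}
    total = sum(unnormalized.values())
    return {h: v / total for h, v in unnormalized.items()}

# Two competing models of the urn: half white vs. mostly white.
prior = {"half_white": 0.5, "mostly_white": 0.5}
# P(draw is white | hypothesis)
p_white = {"half_white": 0.5, "mostly_white": 0.9}

# Observe three white draws in a row, updating after each.
belief = prior
for _ in range(3):
    belief = bayes_update(belief, p_white)

print(belief)  # "mostly_white" is now strongly favoured
```

Note that the posterior describes our state of knowledge, not the urn: the urn's composition never changed, only our degree of belief about it, which is precisely the distinction the mind projection fallacy blurs.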

References

[1] Jaynes, E. T. (2003). Elementary sampling theory. In Probability Theory: The Logic of Science (pp. 51–84). Cambridge University Press.

[2] Jaynes, E. T. (1990). Probability theory as logic. In Maximum Entropy and Bayesian Methods (pp. 1–16).
