A New Model for Conducting Thought Experiments

Advancing from imagination to meaningful simulation

Sarang Deshpande
World in Mind
15 min read · Jul 12, 2020

--

Photo by Joel Bengs on Unsplash

Shimmery waters at dusk, ice cold. The light breeze urging the surface into waves.

The salt burns in the eyes of two men fighting the pull of the ocean.

They breach the surface every now and then, the wreckage of their vessel blurry in the frigid mist. As one of them gasps for air, there is no time for a sigh of relief.

A piece of sweet, undone sea-home floats to his right. The first man latches on to the plank for dear life, delivered from the maw. At that moment, the other man feels his heart sink even as he struggles to stay afloat. He lashes out frantically, in a burst of sheer will.

Kinship is for the living, he thinks, not the dying.

He shoves his right hand above the plank, into the harrowed face white with dread. The two men jostle for a second. But the punch of will is strong and centered.

The first man falls away. The waves buffet him, then swallow him whole.

This is no day for relief. Before the survivor can rest his eyes, a rescue boat arrives. The crew, having witnessed the events unfold as they approached, begrudgingly pulls the man to safety. A murder trial awaits.

Deliverance is temporary.

Carneades of Cyrene asked, in the 2nd century BC, “Can the survivor be convicted of murder?” In response to this self-imposed question, he argued that the survivor had merely acted in self-defence, which could be an ethically ambiguous, but nonetheless reflexive, motive. This famous thought experiment, the ‘Plank of Carneades’, calls into question morality in extremis.

Thought experiments have been the basis of both inductive and deductive reasoning for centuries. Experiments in the mind, with their hyper-realism or hyper-virtualism, offer a safe space to deliberate the moral, ethical, or physical aspects of universal and human reality. They shield deliberation from the gusts of uncertainty that reality brings, and they help us reason in parts to make up the whole. Philosophy and physics are no strangers to thought experiments, and both fields offer astounding trips through the mental foam with their many, many examples.

The Errant Trolleys

In his essay in Aeon Magazine [1], Prof. James Wilson draws on his experience of teaching ethics via thought experiments, and on the drawbacks they pose for different practitioners. The trouble with bringing an adept practitioner and a thought experiment together is that the practitioner often knows too much about the field to find certain thought experiments palatable, or even plausible. The professor of philosophy at University College London found, for instance, that clinicians could be rather interrogative when discussing the thought experiment called ‘The Violinist’ [12, 13], straying ever further from the intended conception of the experiment simply because of their knowledge of actual medical practice.

This is a known occupational hazard for philosophers, and presumably physicists too. Prof. Wilson says we must bear in mind one question before we dismiss such a line of questioning for an inability to separate ethical concerns from the mundane details of the experiment scenario:

“How should we determine what are the ethically relevant features of a situation?”

He questions why an armchair philosopher should be in a better position to define the ethically relevant features of ‘The Violinist’ than a clinician with a practised hand. The problem lurks in conflating ideas or deductions from imagination with real life. He goes on to argue that on both counts — one of equivalence with scientific experiments, and the other as an appeal to imagination — thought experiments are tenuous and flawed.

In other words, ‘trolley problems have their own problems’ [1, 2].

Validity crisis

Prof. Wilson advances the argument that thought experiments may not be the soundest methods for conducting ethical and moral science. In his view, they come up short on external validity: their applicability to real-life situations is difficult to quantify, which diminishes the credence of their results. Thought experiments fare well on internal validity, in so far as the scenarios predetermined by the experimenter are sound and the resulting hypothesis validation is fully logical.

This mismatch has been long identified and criticized, becoming one of the strongest arguments towards the fallibility of thought experiments as the basis of philosophical ethics. Yet, as Wilson argues, we may not have better tools. While he himself offers examples of ways in which thought experiments fail to translate to real life, he also defends their use, albeit with the generous application of humility.

From the perspective of an engineer (which I happen to be), though, this might be a sub-optimal solution.

Bring in the agents

The primary difficulty that Wilson presents, in continuation of his predecessors’ warnings, is the challenge of quantifying real-life circumstances and juxtaposing an ethical thought experiment onto a real circumstance with real human decision-makers. Decision-making is a preeminent theme in the exploration of philosophical ethics, so it makes a good example here (and perhaps the only one needed).

As difficult as it is to reconcile the eccentricities of reality with the strict structure of the thought experiment, there is at least one similarity that can be exploited. Both reality and thought experiments in ethics deal with agents — human participants that can affect outcomes by making decisions and, optionally, acting on them. Reality, as is often scientifically recognized, is a web of agent-based decisions sprinkled with probabilistic externalities.

Today, we have a decent method to simulate this chaotic system — agent-based modelling [3].

Going from in mente to in silico can benefit philosophical ethics, and presumably already has. Advanced computation can help us perform thought experiments at higher fidelity, in a silicon brain, allowing us to vicariously view the variety of outcomes. In adopting this stance, we may choose to take a probabilistic approach to evaluating the applicability of ethical trolley problems.
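
To make the idea concrete, here is a deliberately toy sketch in Python. Every name and number in it (the ‘altruism’ attribute, the externality rate, the population size) is an assumption for illustration, not a model drawn from the literature; the point is only that repeated agent-based runs yield a distribution of outcomes.

```python
import random

# Toy agent-based run: each agent faces one binary ethical choice, and a
# probabilistic externality occasionally flips the realized outcome.
def run_trial(n_agents=100, externality=0.1, seed=0):
    rng = random.Random(seed)
    # each agent's propensity to make the 'other-regarding' choice
    altruism = [rng.uniform(0, 1) for _ in range(n_agents)]
    outcomes = []
    for a in altruism:
        choice = rng.random() < a           # the agent's decision
        if rng.random() < externality:      # reality intervenes
            choice = not choice
        outcomes.append(choice)
    return sum(outcomes) / n_agents         # fraction of other-regarding outcomes

# Many seeded runs give a distribution of outcomes, not a single verdict.
results = [run_trial(seed=s) for s in range(1000)]
print(f"mean fraction: {sum(results) / len(results):.3f}")
```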

But how exactly do we make a simulation of non-reality useful and meaningful in reality?

The goal of the model

Ethics are tools, not merely static knowledge. Every human must be a beneficiary of, and a participant in, the continual stirring of the ethical brew. Ethics are normative influences for the practice of law, and considering artificial and natural law falls within the ambit of ethics. Thought experiments, unfortunately, are devoid of this participation, being confined mostly to academic volumes and debates. I must argue that this is now an addressable shortcoming.

“The broader the precedents that thought experiments can set, the more powerful they will be for ethical thinking.”

— Prof. James Wilson

In the vein prescribed by Prof. Wilson, there must be a method to ensure that the precedents that thought experiments can set are broadened, self-reinforcingly, by every single application of the method itself. To achieve such a method, we must first articulate the details of its architecture.

“A model is not a ‘thing in itself’ but is ‘about’ something — otherwise it is just a computer program or a set of equations,” explains Bruce Edmonds [10]. It is this architecture of simulation that we must shape in order to achieve our goal. The purpose of a model is to mimic an explained phenomenon. By creating adjustable models, we reason inwards, calibrating the model to observed phenomena so that the final result offers some explanatory value and future usefulness. Note that a mathematical model does not project meaning onto the phenomenon of its own accord, nor is that its purpose. Holding to that same standard, we may extend the use of models to the simulation of ethical thought experiments.

What would we like such a model to do?

1. Bake in known theories of behaviour to elicit expectedly divergent results under different circumstances.

2. Contextualize the spectrum of outcomes to real-life applicability.

3. Assist in looking at outcomes when certain ethical decisions are made in different ways and over time.

4. Open ethical debate upon the common, non-partisan platform that is the model.

Note what such a model cannot do (a minimal interface sketch follows this list):

It can’t give a single, indisputable, valid answer to the ethical quandary.

It can’t prescribe the appropriate decision under the experiment circumstances.
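
As a sketch of what such a model might look like as code, consider the hypothetical interface below. The class name, parameters, and toy decision policies are all assumptions chosen to mirror the four goals above; the one deliberate design choice is that run() returns a distribution of outcomes rather than a verdict.

```python
import random
from collections import Counter

class ThoughtExperimentModel:
    def __init__(self, scenario, behaviour_theories, context):
        self.scenario = scenario                      # the core thought experiment
        self.behaviour_theories = behaviour_theories  # goal 1: theories baked in
        self.context = context                        # goal 2: real-life framing

    def run(self, n_runs=1000):
        # goal 3: outcomes under each decision policy, over many runs
        outcomes = Counter()
        for theory in self.behaviour_theories:
            for _ in range(n_runs):
                outcomes[(theory.__name__, theory(self.scenario, self.context))] += 1
        return outcomes  # goal 4: a common, inspectable artefact for debate

# Two illustrative (invented) behaviour theories for the plank scenario.
def mostly_yields(scenario, context):
    return "yield" if random.random() < 0.8 else "fight"

def mostly_fights(scenario, context):
    return "fight" if random.random() < 0.8 else "yield"

model = ThoughtExperimentModel("plank", [mostly_yields, mostly_fights], {"witnesses": True})
print(model.run(100))   # a spread of outcomes, not an answer
```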

We must recognize that as much as ethics pertains in part to situations in which decisions must be made, the underlying purpose is to understand how real people would actually — variably — make these decisions, with or without the assistance of ethical corollaries. The ultimate goal, then, is to improve the outcomes of such decisions on a more prosaic level so as to impact macro outcomes. Therefore, the field of psychology is as important to the evaluation and application of thought experiments as is the birthing of the thought experiments via ethical contemplation.

The tension in ethical decisions arises from the fact that people may perceive or use dissimilar decision policies based on their evaluation of outcomes. The ‘utility monster’ thought experiment is a good example [14, 15]. In allocating resources, do we maximize for total utility or for average utility? On the ground, the two choices result in vastly different outcomes, and people would have reason to choose variably. For some, there may exist acceptable outcomes outside the two choices, constituting a spectrum rather than a binary. Simulating thought experiments must account for this variety, and not strive to reach a singular solution.
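
One nuance is worth a toy computation: over a fixed set of recipients, total and average utility always rank allocations identically (average is just total divided by a constant). The two metrics diverge when the set of beneficiaries itself can change, as in the mere addition paradox [15]. The populations and utility numbers below are invented for illustration.

```python
# Utility per person under two invented outcomes with different populations.
outcomes = {
    "many modest lives":  [5] * 10,
    "few blissful lives": [20, 20],
}

for name, utils in outcomes.items():
    total, average = sum(utils), sum(utils) / len(utils)
    print(f"{name:20s} total={total:3d} average={average:5.1f}")

# many modest lives    total= 50 average=  5.0
# few blissful lives   total= 40 average= 20.0
# A total-utility maximizer picks the first outcome; an average-utility
# maximizer picks the second. Neither ranking is 'the' solution.
```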

A New Model for Thought Experimenting

Georg Brun offers a typology of ethical thought experiments [16] that is useful to consider before delineating our new model architecture. He classifies ethical thought experiments into four broad categories based on their function: epistemic thought experiments; illustrative and rhetorical thought experiments; heuristic thought experiments; and thought experiments with a theory-internal function.

Of import to us here are heuristic thought experiments, which consist of core thought experiments suited for independent analysis, especially analysis of factors in the original specification that may influence judgements (such factors being known from real-life experience). In the case of trolley problems, for instance, some empirical research shows a correlation between skin colour and the judgement criteria of some people [16]. This is precisely the conundrum where we can make use of automation.

If there can exist a large range of such factors, or a large range of possible outcomes, it is difficult to build each instance manually and then build reasoning around each individual instance in order to analyze the collective. Instead, agent-based modelling, in conjunction with other established statistical methods, can simplify the task. When we move past the purely epistemic function of thought experiments, we are effectively trying to correlate with reality. A significant consequence of operating in reality is that real human decision-makers still need to make decisions even if the epistemic nature of the ethical quandary is not resolved. The option we have is to peer at the variety and spot patterns, something we happen to be innately good at.

Rationale for simulation of ethics. Image by author.
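
Mechanically, the enumeration itself is easy to automate. The sketch below (all factor names are assumptions for illustration) builds every instance of a core experiment from a declared factor space, so that reasoning can be applied to the collective rather than hand-built case by case.

```python
from itertools import product

# Declared factor space for a hypothetical trolley-style experiment.
factors = {
    "bystanders":    [1, 5],
    "relationship":  ["stranger", "kin"],
    "time_pressure": ["low", "high"],
}

# One dict per instance: the cartesian product of all factor values.
variants = [dict(zip(factors, values)) for values in product(*factors.values())]
print(len(variants))   # 2 * 2 * 2 = 8 instances to simulate
print(variants[0])     # {'bystanders': 1, 'relationship': 'stranger', 'time_pressure': 'low'}
```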

To build the human agent into a replicable model, there exist architecture patterns such as PECS (Physical conditions, Emotional state, Cognitive capabilities, Social status) [7, 8]. The PECS architecture also accounts for inter-agent communication and learning abilities, which are vital to our simulation. Such models also account for human behaviour not always being rational, in an effort at realism. “The human being is [consequently] perceived as a psychosomatic unit with cognitive capacities who is embedded in a social environment.” [7] Note that PECS is not the only available agent model; we may also create our own. So first, we have our “agent”, described in the form best known to us at the time of modelling.
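
A bare-bones illustration of what a PECS-style agent could look like in code follows. The field names, thresholds, and update rule are assumptions made for this sketch, not the reference model of [7, 8]; the intent is only to show the four state blocks, a communication hook, and a deliberately non-rational decision rule.

```python
from dataclasses import dataclass, field

@dataclass
class PECSAgent:
    # the four PECS state blocks, each reduced to a toy variable
    physical:  dict = field(default_factory=lambda: {"fatigue": 0.0})
    emotional: dict = field(default_factory=lambda: {"fear": 0.0})
    cognitive: dict = field(default_factory=lambda: {"belief": 0.5})
    social:    dict = field(default_factory=lambda: {"status": 0.5})

    def perceive(self, message):
        # inter-agent communication can shift emotional state
        self.emotional["fear"] = min(1.0, self.emotional["fear"] + message.get("threat", 0.0))

    def decide(self):
        # deliberately non-rational: strong fear overrides belief
        if self.emotional["fear"] > 0.7:
            return "reflex"
        return "deliberate" if self.cognitive["belief"] > 0.5 else "abstain"

agent = PECSAgent()
agent.perceive({"threat": 0.8})
print(agent.decide())   # 'reflex' -- fear has overridden deliberation
```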

Next, the underlying reason for conducting the simulation must be defined. For example, it may be relevant to consider the impact of the trolley problem thought experiment on how autonomous cars should be programmed to behave. There is a tangible action or decision that needs to be taken (“commit to code”), which is impacted by a single core thought experiment, or a hierarchy thereof. How well we define the model’s reason for existence affects the results we will be able to achieve. In the case of autonomous vehicles, the hierarchy may consist of the Trolley problem [6, 9], the Pond problem, the utility thought experiments, the Plank of Carneades, and so on — these core thought experiments can be said to exist at each juncture of reasoning.

We have already narrowed our focus to ethical problems where decisions are involved, not ones that deal with natural phenomena and the nature of being. Understanding the end goal, and the morphology of the real problem, we can construct various hierarchies of experiments that seek to define the problem statement. Putting many thought experiments together can bring us closer to reality than a single experiment instance would. Not all possible combinations will be useful, and we can eliminate the unhelpful ones using logical analysis. Those that remain, we can model mathematically. The math can be as simple or as complex as required — it may recognize the experiment specification, external circumstances, the involvement of the agents, and so on.
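
One hedged sketch of such a hierarchy: each core experiment sits at one juncture of reasoning and threads an agent's state through to the next. The chaining rule, state fields, and thresholds here are invented for illustration.

```python
# Each core experiment reads the agent's state and records a decision.
def trolley(state):
    state["choices"].append("divert" if state["belief"] > 0.5 else "abstain")
    return state

def plank(state):
    state["choices"].append("yield" if state["fear"] < 0.5 else "fight")
    return state

HIERARCHY = [trolley, plank]   # order encodes the junctures of reasoning

def run_hierarchy(state):
    for experiment in HIERARCHY:
        state = experiment(state)
    return state

print(run_hierarchy({"belief": 0.8, "fear": 0.3, "choices": []}))
# {'belief': 0.8, 'fear': 0.3, 'choices': ['divert', 'yield']}
```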

Finally, the full model can be composed by adding the agents in the manner in which they may appear in real life. In the case of autonomous cars, for example, we may seek to include various stakeholders: the driver, the passenger, the pedestrians, the lawmakers, and a reference general public. The models of these agents will display their differences. The exact nature of the agents, their decision-making capacities, and their relevance all depend on the exact simulation to be conducted.

An addendum to this model architecture is the ability to learn. The agent models, and in fact the core experiment models themselves, can be configured to learn from simulation results. This is where we need to be circumspect. What should the models learn?

If we embed preconceived notions into the learning methods, the exercise becomes counterproductive (read: utterly wasteful). Instead, we must account for known and unknown variety here as well. For example, we might use two learning algorithms that value different things when readjusting the internal model [along the argument made in 10]. If the ethical quandary is unresolved, there is no reason to choose one strategy over the other. But in a simulation, unlike in (non-quantum) physical reality, we can choose to make both choices. We can then step in at various stages of the simulation to make our own judgement calls — cautiously so — and recalibrate the model for future learning. We may learn, for instance, that some portion of the model is ‘out of limits’, in that it reflects reality with diminishing returns; we may choose to eliminate that portion.
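
A minimal sketch of this branching, in the spirit of the argument in [10]: two update rules that value different things run over the same simulated world, and we inspect both trajectories instead of crowning one. Both rules and all numbers are assumptions.

```python
import random

def update_total(weights, outcome):
    # this learner rewards total utility
    weights["utility"] += 0.1 * outcome["total"]
    return weights

def update_equal(weights, outcome):
    # this learner rewards low spread, i.e. equality
    weights["equality"] += 0.1 * (1.0 - outcome["spread"])
    return weights

def simulate(update_rule, steps=100, seed=0):
    rng = random.Random(seed)   # same seed: both branches see the same world
    weights = {"utility": 0.0, "equality": 0.0}
    for _ in range(steps):
        outcome = {"total": rng.random(), "spread": rng.random()}
        weights = update_rule(weights, outcome)
    return weights

# Branch, don't choose: both learners' trajectories are kept for judgement.
print(simulate(update_total))
print(simulate(update_equal))
```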

Architecture for simulation of ethical quandaries — a simple diagram. Image by author.

Onward bound

When Prof. James Wilson’s article first made me question why we should accept that thought experiments in their present form are insufficient sole proprietors of the philosophical method, my mind immediately ran across the spectrum of possible simulation methodologies. And although agent-based modelling stood out almost instantly, I knew where the issues were concealed. The simulations would be decidedly complex, but computational resources are rarely the bottleneck today. The real challenge will always be the formulation of the problem — which constructor experiments to use, which known psychological phenomena to include or exclude, which agent behaviours to allow, and so on.

To recreate reality in full, computational resources would need to be infinitely larger than those we possess. But that is beyond what is necessary: we can simply abstract from versions and slices of reality.

Very few authors, it seems, have touched upon the applicability of computer simulations — more specifically, agent-based models — to the evaluation of ethical quandaries and their relevance to real circumstances [11]. Among them, Jeremiah Lasquety-Reyes (Universität Hamburg) wrote in 2018 about exactly this possibility: conducting ethical analysis via large-scale agent-based modelling [4, 5]. He prominently cites the handful of other recent works in this vein — those of Mike Loukides [17], Steven Mascaro, Peter Danielson, and Alicia Ruvinsky. There is strong reason to believe that this sub-field, if peers agree it exists at all, is inchoate.

Photo by Fiona Smallwood on Unsplash

In previous explorations of the simulation of ethics, researchers and philosophers have resorted to making intrinsic choices in the initial formulation. Some authors chose to model agents on the assumption that the utilitarian model of action is of foremost importance. That assumption simply reflects their own beliefs about such a model’s utility and relevance; implementing it is not an unassailable argument for its continued use. In agreement with [4], it is unlikely that ‘act utilitarianism’ is the sole useful descriptive model — albeit more complex, we can assume many different models. With increasing compute resources, we can safely bet on higher complexity.

Having such inherent choices is inconsequential when we simply seek to demonstrate the plausibility of ethics simulations. Their consequences come to light when we seek to deploy the model for its intended purposes. In this example, as the model evolves, it will likely concede little valuable information about utilitarianism itself — the utilitarian math is already at the heart of the model. Thus, the challenge of formulation is two-fold: we must minimize such detrimental backdoors, and we must choose the correct abstractions of known phenomena while composing the model.

Evaluating possible outcomes does not answer what the ethical quandary asks — it does not solve the problem and hand us the truth. But the analysis can help contextualize the quandary to a known set of participants: we can use predicted variations to understand the evolutionary progression of ethics, to study emergent properties at the macro level, and to study reduction to the level of the individual decision-maker.

“Responsible thinking requires calibrating our levels of credence to the reliability of our intellectual tools.”

— Prof. James Wilson

Prof. Wilson finally warns that we must humbly take into consideration the reliability of our intellectual tools. As much as modern simulation technology might one day be of immense value to the field of philosophical ethics, exercising a good measure of humility with new architectures of thought experiments would not be futile.

About the author

Sarang Deshpande is an engineer, founder [Flow Mobility; Cambio Motion], and writer. This trifecta allows him to be usefully interdisciplinary in his approach. Besides spending time solving challenges in the urban mobility domain, he regularly writes about science, tech, business, and life (sometimes). He is an editor at World In Mind, a publication which brings cutting-edge research to students and working professionals. Important research across industries will set the tone for humanity’s future trajectory, and young humans would do well to keep the world in mind when they choose their area of professional focus.

References

(not alphabetical)

[1] Wilson, J., What is the problem with ethical trolley problems, Aeon Magazine
Available: https://aeon.co/essays/what-is-the-problem-with-ethical-trolley-problems

[2] Trolley problem, Wikipedia
Available: https://en.wikipedia.org/wiki/Trolley_problem

[3] Agent-based model, Wikipedia
Available: https://en.wikipedia.org/wiki/Agent-based_model

[4] Lasquety-Reyes, J. A., Computer Simulations of Ethics: The Applicability of Agent-Based Modeling for Ethical Theories, European Journal of Engineering and Formal Sciences, European Center for Science Education and Research, vol. 2, May 2018
Available: https://ideas.repec.org/a/eur/ejefjr/23.html

[5] Lasquety-Reyes, J., Computer Simulations of Virtue Ethics — Simplicity versus Complexity
Available: https://homepage.ruhr-uni-bochum.de/defeasible-reasoning/ABM-Phil-abstracts/Lasquety-Reyes-long.html

[6] Greene, Joshua D., Solving the trolley problem, In: A Companion to Experimental Philosophy, First Edition, John Wiley & Sons 2016
Available: https://projects.iq.harvard.edu/files/mcl/files/greene-solvingtrolleyproblem-16.pdf

[7] Schmidt, B., Modelling of Human Behaviour — The PECS Reference Model, Proceedings 14th European Simulation Symposium, A. Verbraeck, W. Krug, eds., SCS Europe 2002
Available: http://www.scs-europe.net/services/ess2002/PDF/inv-0.pdf

[8] Urban, C., Schmidt, B., PECS — Agent-Based Modelling of Human Behaviour, From: AAAI Technical Report FS-01-02, 2001
Available: https://www.aaai.org/Papers/Symposia/Fall/2001/FS-01-02/FS01-02-027.pdf

[9] Mirnig, A., Meschtscherjakov, A., Trolled by the Trolley Problem: On What Matters for Ethical Decision Making in Automated Vehicles, In: CHI Conference on Human Factors in Computing Systems Proceedings (CHI 2019), May 4–9, 2019, Glasgow, Scotland, UK. ACM, New York, NY, USA, 10 pages.
Available: https://dl.acm.org/doi/10.1145/3290605.3300739

[10] Edmonds, B., The Possible Evolution of Empirical ABMs
Available: https://homepage.ruhr-uni-bochum.de/defeasible-reasoning/ABM-Phil-abstracts/Edmonds-long.html

[11] From thought experiments to Agent Based Models and calibration. Reflecting (on) the many facets of simulations in economics
Available: https://journals.openedition.org/oeconomia/2947

[12] Thomson, Judith J., A Defense of Abortion, In: Philosophy & Public Affairs, Vol. 1, No. 1 (Autumn 1971), pp. 47–66
Available: https://philosophyintrocourse.files.wordpress.com/2013/03/thomson_abortion.pdf

[13] A Defense of Abortion, Wikipedia
Available: https://en.wikipedia.org/wiki/A_Defense_of_Abortion

[14] Utility monster, Wikipedia
Available: https://en.wikipedia.org/wiki/Utility_monster

[15] Mere addition paradox, Wikipedia
Available: https://en.wikipedia.org/wiki/Mere_addition_paradox

[16] Brun, G., Thought Experiments in Ethics, Michael T. Stuart; Yiftach Fehige; James Robert Brown (eds). 2017. The Routledge Companion to Thought Experiments. Abingdon/New York: Routledge. 195–210
Available: http://philsci-archive.pitt.edu/13298/1/Brun-ThoughtExperimentsInEthics.pdf

[17] Loukides, M. On computational ethics: is it possible to imagine an AI that can compute ethics? O’Reilly Media 2017
Available: https://www.oreilly.com/radar/on-computational-ethics/
