Rational Courage and Fear in the Face of Super-Technology

katoshi · Published in Neo-Cybernetics · Jan 21, 2024

DNA editing and general-purpose artificial intelligence (AI) are technologies with potentially enormous benefits.

Efficient use of DNA editing can lead to the creation of crops that are nutritious, tasty, and resistant to environmental changes and diseases. It can also enable the creation of various microorganisms that produce chemicals beneficial to humans.

General-purpose AI can perform intellectual tasks that were previously only possible for humans. While human intellectual capacity and access to information are limited, computers can enhance their performance and handle vast amounts of data and knowledge. This makes it possible to solve complex problems that are difficult for humans and find the best solutions.

However, it’s a fact that these technologies also pose significant risks. All technologies inherently carry risks and disadvantages. Humanity has always invented and utilized technologies while devising ways to reduce their risks and develop new technologies to balance the benefits and drawbacks.

Yet, the risks associated with DNA editing and general-purpose AI are fundamentally different from those of past technologies. Three main aspects differentiate them: they don’t always require physically large devices, making them accessible to small organizations and individuals; they have the potential for self-replication and self-evolution; and they could potentially pose risks on an unlimited scale, beyond localized issues.

I refer to these technologies with unique risks as “Super-Technologies.” Their development and use require a level of caution and risk management that is different from other technologies.

This article clarifies the issues surrounding Super-Technologies from the perspectives of the Kindergarten Model and collective irresponsibility. It proposes a methodology for considering social systems to address these issues, extending the well-known concept of the “veil of ignorance” into what I call the “veil of pure ignorance.” Furthermore, it explains how the concepts of rational courage and fear can become one of the ethical standards in contemplating these methodologies.

The Kindergarten Model

When discussing the development of high-risk technologies that I refer to as Super-Technologies, such as DNA editing and general-purpose AI, some people insistently argue that technology itself is neither good nor evil, and that it’s a matter of how humans use it.

However, during the COVID-19 pandemic, did anyone seriously argue that the coronavirus itself was neither good nor evil? The rarity of such statements suggests that the "technology is neutral" claim is not itself a neutral observation: it is put forward chiefly by those who favor developing the technology.

Imagine a kindergarten made of wooden huts filled with flammable materials, and consider placing matches or lighters there. What is the purpose of arguing that matches and lighters are neither good nor evil in this context?

Would placing them on a high shelf, out of reach, be a solution? What if they fell, or the children were given blocks they could stack into a staircase? It is better not to keep them in the room at all.

Some may argue that children have always played and learned using scissors and cutters. However, the level of damage from a fire is on a completely different scale.

Some might believe that the adults present or the children themselves could handle any arising issues. But can we confidently say that there is an adult capable of controlling a fire in a flammable hut?

Although children can understand the dangers of handling matches and lighters if taught, they might not fully grasp the severity of using them in a flammable environment unless they have experienced it. The strong sense of fear and responsibility needed for handling these items might not be fully realized.

Among the children, there will be the curious, the mischievous, those seeking revenge on a peer, or those wanting to stand out due to loneliness. We can foresee these behaviors.

Therefore, even after thoroughly explaining and ensuring their understanding of the risks, allowing children to handle matches and lighters still involves inherent risks. Some children might think that having a bucket of water ready, like during a summer fireworks display, is a sufficient precaution. But who is to blame if a tragedy ensues from a fire caused by one child?

Matches and Lighters as Super-Technologies

This kindergarten model maps directly onto high-risk Super-Technologies. Matches and lighters are easy for anyone to use, the fires they start spread and sustain themselves, and their effects are not localized but can engulf the entire kindergarten.

Of course, humans enjoy various benefits from using fire. Making fire accessible to children can also provide them with significant advantages in various situations as they grow up.

Emphasizing these benefits while claiming that matches and lighters are morally neutral and that the problem lies with the children using them is incredibly unfair. This is unfair both to the children who may become victims of a tragedy and to the child who might cause it.

If we reflected after a tragedy, what we would regret is the irrationality of the original reasoning: treating risks that inexperienced children cannot control as their own responsibility. In this model, the responsibility is clear. It lies with the person who handed matches and lighters to children in a flammable hut.

From a technological development perspective, if someone causes an irreversible tragedy using a high-risk technology, the responsibility doesn’t solely lie with that person. Applying the kindergarten model, it’s also with those who provided such high-risk technology to the world.

However, technology is knowledge and information, and no matter how carefully managed, there’s always a risk of leakage or dissemination. With this in mind, the responsibility also extends to those who developed the technology.

But directly questioning the responsibility of those who develop technology is currently unreasonable. This is because, as of now, the responsibilities associated with technological development and the social and ethical standards underpinning these responsibilities are not clearly defined in our society.

Collective Irresponsibility and the Loss of Fear

There are many people involved in research and development of high-risk technologies. Imagine asking these individuals to stop advancing such technologies due to their dangers, or consider them realizing the risks themselves.

The phrase "If everyone crosses against the red light, it's not scary" symbolizes collective irresponsibility. It reveals a profound insight: responsibility is grounded in fear. In a group, fear dissipates, and with it comes a bias, seemingly built into our nature, toward justifying irresponsible actions.

Concerns about the concentration of power are often expressed, and some believe that decentralization is the ideal state. However, the dispersion of power can indeed encourage collective irresponsibility. Just as we need to be aware of the abuse of concentrated power, we must also recognize the issues of collective irresponsibility due to power dispersion.

The mechanism at play is that when a group crosses against a red light, everyone unconsciously looks to the others and deems it safe because others are crossing too. If instead a leader were designated and made responsible for the decision to cross, that leader would be conscious of the risks and of the responsibility they would bear for any consequences. Unless the group members were blindly devoted followers, such a group would find it difficult to cross against a red light without fear.

This issue of collective irresponsibility is starkly evident in the development of high-risk technologies, including Super-Technologies.

Collective Irresponsibility in Technological Development

In society, no “leaders” are appointed to intervene in technological developments. Technologists may justify their actions by thinking, “If others are crossing the red light, so can I,” or claiming they are crossing safely after checking their surroundings.

Most others assume that leaders or technologists are ethically managing things well. Thus, even if a few express concerns, they cannot create a significant societal movement.

At the very least, the current situation, in which no decisive action, control, or regulation is in place to manage high-risk technologies, suggests a flaw in the structure of society. Before assigning blame, then, we must recognize that we lack socially agreed ethical standards and systems for addressing collective irresponsibility in technological progress.

Socially Just Procedures

Many involved in technological development are earnestly complying with the existing ethical standards and working with a sense of responsibility and mission. However, the problem lies in the fact that these standards and systems haven’t caught up with the risks posed by technological advancement, especially Super-Technologies.

Without these ethical standards and systems, judgments of good and evil, as well as responsibility, remain ambiguous. Therefore, directly condemning or demanding responsibility from those caught in the whirlpool of collective irresponsibility is ineffective and procedurally unfair. Such unjust accusations won’t bring about societal change.

Instead, the responsibility to prevent those currently adhering to ethical standards and working responsibly from being retrospectively blamed for future tragedies lies with our society, which holds sovereignty.

Take slavery as an example. It is now universally condemned, but how did the general populace and social leaders perceive it in the past? Without ethical standards or a system to enforce them, who could be singled out for responsibility?

The foundation for change was the firm establishment of basic human rights and the principle of human equality. As more people supported these values, the social atmosphere changed, leading to the establishment of societal systems. Only then could those who violated or overlooked these standards be held accountable.

The issue of collective irresponsibility cannot be addressed by debating good and evil or responsibility with our current sensibilities. First, we need ethical standards that many can agree upon as good and evil. Only when these are widely accepted can a movement begin, leading to societal change.

Fair Social Systems and the Veil of Ignorance

When designing social systems, there’s a notion that these systems should be fair. ‘Fairness’ might sound similar to ‘equality’ or ‘equity,’ but it’s a distinctly different concept.

Equality and equity typically mean imposing the same rights and duties on everyone. However, individuals have unique talents, social environments, and circumstances. Superficial equality and equity can lead to significant dissatisfaction. Furthermore, the idea of leveling individual abilities and social environments to address this issue can lead to profoundly problematic ideologies.

Fairness, on the other hand, recognizes these individual differences and designs social systems accordingly.

The philosopher John Rawls introduced the concept of the "veil of ignorance" as a method for designing fair systems. The idea is that to judge whether a social system is fair, one should evaluate it without knowing what position one would occupy in that society.

If one’s position is unknown, it’s unlikely to design a system that severely disadvantages specific individuals. Considering the needs of those in weaker or disadvantaged positions would be reassuring if one were to find themselves in such positions. However, excessive consideration for the weaker parties means bearing the burden if one doesn’t end up in such positions, thus necessitating a balanced approach.

The Veil of Pure Ignorance and the Mask of Belief

I believe the methodology of the veil of ignorance is also effective when establishing social systems and underlying ideologies for managing the progress of high-risk technologies.

This is especially true because it is impossible to predict with any accuracy how technological risks will manifest in the future. For Super-Technologies, capable of self-evolution and posing risks of unlimited scale, this uncertainty is even greater.

In this sense, a veil of pure ignorance already exists: the impact on each individual is genuinely unknown. Unlike Rawls's thought experiment, it requires no deliberate act of setting aside knowledge of one's own position.

In discussions about these technological risks, I’ve observed that many people cover the veil of pure ignorance with their mask of belief.

Wearing a mask of belief on top of the veil of pure ignorance doesn’t provide clarity. There seems to be a misconception that a clear stance on the magnitude of technological risks is necessary for discussion. Or perhaps there’s a belief that wearing this mask of belief strongly will gather allies. This approach, however, is counterproductive for considering fair social systems.

To think about fair social systems, one must remove the mask of belief and contemplate within the realm of pure ignorance. If the scale and scope of risks are unclear, caution is imperative, even in the face of significant benefits.

Rational Courage Based on Reason

There are two scenarios in which boldness or courage, rather than caution, is called for.

The first scenario is where risks are somewhat limited and reversible. Most technologies fall into this category, but not Super-Technologies.

The second scenario, applicable to Super-Technologies, is when not embracing their benefits also entails significant risks. For example, if pursued by a pack of hungry wolves, one might need the courage to leap from a cliff into a river 10 meters below. This is rational courage, necessary to overcome fear.

However, if being chased by debt collectors, instead of jumping into the river, one would likely first consider apologizing or hiding in the forest, exploring other means of evasion. Jumping into the river in such a scenario would appear irrational, more a result of recklessness or panic.

Loss and Concealment of Fear

In contemplating the future impacts of high-risk technologies like Super-Technologies, we should first imagine fear based on rationality, before even considering how to reduce risks. This is necessary because the capacity to feel fear, a crucial component in risk assessment, is often lost or suppressed.

For example, in a group waiting at a traffic signal, if someone starts crossing against a red light, others may follow without checking for cars themselves. This is a loss of fear within collective irresponsibility. Most people would be too scared to cross a red light without looking left and right if they were alone.

Fear loss occurs in various scenarios. Some people view feeling fear as shameful or avoid acknowledging others’ fears. Even if the fear is rational and justified, such tendencies can lead to its concealment.

Unfortunately, with Super-Technologies we cannot be as confident as a pedestrian crossing a street after checking that no cars are coming from either direction. No matter how intelligent we are, predicting so complex a future is beyond human capability. We are, in essence, in a state of pure ignorance.

The advancement of such technologies in these conditions suggests a high likelihood of either a loss of legitimate fear or its concealment.

Rational Courage in Super-Technologies

Alternatively, the development of Super-Technologies might require rational courage.

However, from what I’ve observed in discussions, while the benefits of Super-Technologies are often emphasized, I haven’t encountered arguments stating that there is an imminent, alternative risk for which these technologies are the only solution.

For instance, although climate change poses a grave risk, and although DNA editing and general-purpose AI offer benefits and raise expectations, these technologies do not appear so pivotal as to be indispensable for addressing that risk.

I’ve also seen arguments that society cannot function without economic growth or technological advancement, but even then, these technologies don’t seem indispensable. Furthermore, if a society cannot sustain itself without continuous economic growth and technological advancement, it eventually needs to consider a structural transformation.

Rational Fear Based on Reason

If rational courage is not currently necessary, then when imagining the future of Super-Technologies, we should not eliminate fear, but rather actively imagine it.

This does not mean we should sensationalize fear or spread it widely among many people. Instead, it implies that those involved in shaping social systems should be aware of the potential for their own loss or concealment of fear and contemplate where fear, based on rationality, should be applied.

On an unfamiliar road shrouded in dense fog, where reckless drivers may be approaching, it is valid to feel fear even if many people around you are crossing against the red light. Similarly, handing matches or lighters to children in a flammable hut, even after warning them of the danger, is a situation where fear is justified.

Even if we do not personally experience fear, by contrasting with such scenarios, we can more easily understand what situations should legitimately be considered fearful in the context of high-risk technological development.

Thus, imagining fear rationally in the midst of pure ignorance becomes a powerful methodology for contemplating future social systems.

Conclusion: Time Until the Establishment of Social Systems

I believe that the concepts of rational courage and fear can become one of the pillars of ethical standards when discussing the risks of technological development.

When debating the risks and benefits of technological development, focusing on benefits and disadvantages, or opportunities and risks, can lead to significant differences in assumptions, imagination, and orientation among individuals.

In contrast, discussing legitimate fears from the premise of pure ignorance is likely to produce less variance in opinion. Few people would argue that it is an overreaction to fear children handling matches and lighters in such risky conditions, or to fear crossing against a red light on a road shrouded in dense fog.

If we can establish such universally agreeable ethical standards and widely disseminate them, we can then legitimately identify good and evil. Based on this, we can design social systems, clarify rules and responsibilities, and hold people accountable.

Addressing the problem of collective irresponsibility in high-risk technological development requires this approach.

However, there is a significant issue: it might be too late.

The journey to this goal requires substantial time, and high-risk technological development continues amidst collective irresponsibility.

Until social systems are established, researchers, technologists, and other stakeholders must rely on their awareness, conscience, and significant efforts. Efforts such as enhancing the safety of technology itself, reducing the likelihood of risks, limiting their impacts, developing peripheral technologies, and raising social awareness about technological risks are crucial in delaying potential tragedies.

The problems are numerous and the impacts significant. Hence, prematurely blaming individuals based on unestablished standards, assigning responsibility without a basis in the system, or being overly optimistic can in themselves become risks. Cooperation among those who believe in the potential of technology, those well-versed in social systems, and others from different fields is urgently needed.
