The Powerlessness of Unreality: Methods to Amplify Transparent Intelligence

katoshi
Published in Neo-Cybernetics · 11 min read · Apr 6, 2024

The advancement of artificial intelligence technology is expected to eventually create intelligences that surpass human intellect. While there is hope that this will be the key to solving problems that human capabilities cannot resolve, there is also concern over unknown threats.

The issue here lies in how we perceive the benefits and risks. Given the nature of the risks associated with artificial intelligence, the conventional way of assessing the risks and benefits of ordinary technologies and events is insufficient. Although the voices raising concerns about these risks are reasonably loud, they have not become mainstream among AI experts, in the media, or among political leaders.

This is not due to a lack of information or knowledge, but rather, I believe, to fundamental human thinking patterns and to structural mechanisms embedded in our current society. We and our society can take measures against threats and risks we have experienced in the past, because we feel them as real, but we are vulnerable to unprecedented threats. We are good at learning from failures, but not at preventing failures from happening in the first place.

This article calls this the powerlessness of unreality and argues that the concept of transparent intelligence is important as a capability for responding to it. Transparent intelligence is rare and inherently difficult to acquire. Therefore, social mechanisms that amplify the insights of those who possess this rare capability, so that many people can understand them, become important. As one idea for such a mechanism, this article proposes a strategy of decentralized core teams.

The Powerlessness of Unreality

Real events that occur before our eyes have the power to move people.

However, it is difficult to move people with words describing things that have not happened. If people cannot be moved, the status quo persists. I refer to this as the powerlessness of unreality.

The powerlessness of unreality becomes a significant problem in cases where maintaining the status quo leads to a sudden catastrophe from which there is no turning back.

If there are experiences or evidence that something has happened in the past, it is possible to move people based on that. However, in unprecedented cases, there are no experiences or evidence.

In such cases, it is not possible to change the status quo, and as a result, a catastrophe occurs.

Of course, if the catastrophe itself cannot be foreseen, there is no way to avoid that outcome.

On the other hand, even if a catastrophe can be logically predicted with very high probability, as long as there are no precedents and no signs of it occurring before our eyes, the prediction lacks the power to move people, and as a result the catastrophe cannot be prevented.

Of course, visualizing the logical reasoning through simulations based on theoretical models can help people understand. However, if the simulation itself is not understood, or the accuracy of the model is not trusted, it still lacks the power to move people.

This means that even if a crisis can be foreseen, when the catastrophe is sudden and unprecedented, or when its signs appear only after it is already too late to avoid it, we cannot muster the power to prevent it.

How Social Structures Reinforce the Powerlessness of Unreality

The powerlessness of unreality is particularly difficult to overcome in democracies where many are involved in decision-making.

Possible countermeasures include raising awareness by providing information about crises and strengthening science communication to increase trust in scientists and experts.

However, for threats that are unrealized and for which there are no signs, it is difficult to provide information or build trust in scientists and experts.

Especially in democracies where freedom of speech is guaranteed, scientists and experts skeptical of the threat can freely express their opinions. This leads to debates about which opinion is correct.

The debate then turns into a matter of preference, unrelated to scientific validity: which opinion one wants to believe, or which speaker one wants to trust.

And because of the powerlessness of unreality, the case for responding to the crisis cannot gather power, and maintaining the status quo wins out.

This power relationship is asymmetric. The situation flows easily toward maintaining the status quo, while recognizing a crisis and taking measures requires great effort.

And no matter how much effort is invested, a group that does not agree with responding to the crisis becomes entrenched. If this group is large enough, overturning its opinion and reaching a decision to respond to the crisis can, in a democracy, become impossible in principle.

The Societal Recursiveness of Unreality’s Powerlessness

The difficulty democratic societies have in addressing threats that are unprecedented, that show no visible signs, or whose signs appear only when it is already too late ties the hands of those attempting to solve the problem.

This is because the act of raising the issue and aiming for a solution itself cannot gain power due to the powerlessness of unreality.

In other words, attempts to strengthen science communication, increase understanding of threats, or enhance trust in scientists are easily swept aside by the voices of skeptical experts, making it difficult to move forward.

Transparent Intelligence

To grasp this issue more deeply, we need to understand not the people who lend no power to responding to foreseen threats, but rather those who are willing to exert effort in response to them.

After all, investing effort in something that, being unreal, ought to be powerless to move anyone is rather peculiar.

If there is no precedent and no sign, the effort to respond to a threat is likely to arise from logical foresight, strong personal convictions or prejudices instilled by others, personal gains from asserting the threat, or some form of coercion by others.

The power derived from logical foresight can only emerge if one truly believes in the correctness of that logic. Moreover, maintaining this power even when faced with objections from others implies strong confidence in one's own logical judgment.

For someone who truly believes in the logic predicting a threat and has confidence in their own logical judgment, foreseeing the threat is like watching it unfold inside a glasshouse of intelligence. This transparent intelligence allows risks to be grasped intuitively, with crystal-clear logic visible from every angle and without any cloudiness.

With such reliable transparent intelligence, it becomes possible to exert effort to respond to threats based on foresight.

The Difficulty of Transparent Intelligence

Without such transparent intelligence, the power to counter unreal threats can only be generated through convictions, prejudices, personal gains, or coercion.

And this unclouded transparent intelligence is not a common ability; it is quite rare.

Just as some people are naturally good at mathematics while others are not, it requires a kind of innate sense. Therefore, it is not an ability that can be easily acquired through education or by enhancing scientific communication.

Of course, efforts to increase the number of people with transparent intelligence through education or training are very important for gaining significant power against unreal threats. However, this alone is insufficient as a countermeasure against the powerlessness of unreality rooted in human thinking patterns and societal structures, and it takes too much time.

There may be objections to this viewpoint. Some argue that with proper education, the ability to understand unprecedented threats can be developed. However, this is the perspective of those who already possess transparent intelligence, possibly overlooking the difficulty for those who do not have such abilities.

The Rarity of Transparent Intelligence

Transparent intelligence is not directly related to the amount or depth of knowledge. It is the ability to think clearly and purely about what can be inferred from facts.

For example, one may come across arguments attempting to apply the concept of balancing risks and benefits to unprecedented threats.

If there were precedents for the threat, it would be possible to estimate the probability of its occurrence and the expected damage. Or, if a theoretical model could be constructed, the occurrence probability and damage could be estimated.

However, there are threats for which such estimates cannot be made. In particular, threats with limitless potential damage, such as those that endanger human survival, are a special case.

There are also threats for which it is impossible to model occurrence probabilities. These are threats that involve complexities beyond the capability of simulation.

Moreover, if the probability of a threat occurring does not decrease over time, then even if that probability is low, the cumulative chance of the threat materializing rises with every additional exposure. Such threats, despite appearing improbable, are guaranteed to occur eventually.
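
As a minimal sketch of why this is so (the symbols p and n are mine, introduced only for illustration), let p be a fixed per-exposure probability of the threat materializing and n the number of exposures. Then

\[
P(\text{at least one occurrence in } n \text{ exposures}) \;=\; 1 - (1 - p)^n \;\longrightarrow\; 1 \quad \text{as } n \to \infty ,
\]

so for any fixed p > 0, however small, enough exposures make eventual occurrence a near certainty.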

Threats with limitless damage and unknown or unchangeable occurrence probabilities cannot be balanced against benefits.

This is a purely logical conclusion, unrelated to what the specific threat may be. It can be said that for threats that meet these conditions, regardless of their specific causes, balancing risks and benefits is not feasible.
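
To formalize the balancing argument in the simplest terms (again, the symbols B, p, and D are only illustrative), a conventional risk-benefit comparison asks whether a finite benefit outweighs the expected loss:

\[
B \;>\; p \cdot D .
\]

If the potential damage D is effectively unbounded, the right-hand side exceeds any finite B for every p > 0; and if p is unknown and cannot be bounded, the right-hand side cannot even be computed. Either way, the comparison breaks down.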

Consider, as a thought experiment, an endless game of Russian roulette with a revolver in which the number of bullets is unknown to you, and the organizer can insert or remove bullets at any time. This is a risk with an unknown probability that does not decrease. Who would take up the challenge, even if a substantial reward were offered?

Continuing this Russian roulette guarantees eventual ruin unless the organizer chooses not to insert any bullets, or one’s luck lasts forever.
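
The following small simulation sketches this thought experiment. The per-pull probability used here is an arbitrary stand-in chosen purely for illustration; in the scenario itself it is unknown and controlled by the organizer.

```python
import random

def pulls_survived(p_fire: float, horizon: int) -> int:
    """Return how many trigger pulls are survived before ruin, capped at `horizon`.

    `p_fire` stands in for the unknown, non-decreasing chance that the chamber
    is loaded on any given pull; in the thought experiment the organizer
    controls it and the player never learns it.
    """
    for n in range(horizon):
        if random.random() < p_fire:
            return n  # ruin occurred on pull n + 1
    return horizon    # survived every pull within the horizon

if __name__ == "__main__":
    random.seed(0)
    p = 0.02           # arbitrary illustrative value, not a real estimate
    horizon = 100_000
    games = sorted(pulls_survived(p, horizon) for _ in range(10_000))

    print(f"median pulls survived: {games[len(games) // 2]}")
    print(f"games surviving all {horizon:,} pulls: {sum(g == horizon for g in games)}")

    # The simulation matches the closed form P(survive n pulls) = (1 - p)**n,
    # which tends to 0 for any fixed p > 0: eventual ruin is guaranteed.
    for n in (10, 100, 1_000):
        print(f"P(survive {n:>5} pulls) = {(1 - p) ** n:.4f}")
```

With these illustrative numbers, the median game ends after a few dozen pulls and no game survives the full horizon; lowering the probability stretches the timeline but does not change the destination.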

The development of advanced artificial intelligence fits this type of risk. The emergence of artificial intelligence surpassing human intellect brings limitless potential threats, and the complexity is too great to model or estimate these threats’ occurrence probabilities. Furthermore, there is no guarantee or expectation that the threat’s occurrence probability will decrease over time.

This is akin to the aforementioned Russian roulette scenario. However, the development of advanced artificial intelligence is being driven by many individuals and organizations. This, I believe, is strong evidence of the extreme rarity of transparent intelligence.

The fact that some experts have pointed out the limitless risk potential of artificial intelligence must be known to many researchers and political leaders, and it is occasionally reported in the media. The lack of societal movement commensurate with this risk indicates that most people do not recognize this unknowable risk for what it is: a situation like the Russian roulette scenario.

Mass Realization

The powerlessness of unreality and its recursive nature, along with the rarity of transparent intelligence, pose serious problems for threats without precedents or signs.

People with transparent intelligence have the capability to perceive the unreal as real through logical reasoning and confidence in their abilities.

This can be considered a capability for logical realization.

The question is how to achieve this realization in a democratic society. As mentioned before, cultivating transparent intelligence through education and training is important, but time-consuming and potentially limited in effect.

Therefore, a societal mechanism that enables a large number of people to realize unreal threats as real, without directly acquiring transparent intelligence themselves, is essential. This could be called mass realization.

Key to mass realization are those with the rare capability of transparent intelligence and mass communication tools like the media and social networks, which serve as societal devices for amplification.

The ideal of media oriented toward mass realization is not merely to pursue and share established truths, but to widely disseminate insights about unreal threats derived from transparent intelligence, regardless of precedent or warning signs.

Thus, amplifying the subjective and rare capability of transparent intelligence through media oriented towards mass realization should be the fundamental strategy against the powerlessness of unreality.

Decentralized Core Teams

Even with media cooperation, it is exceedingly difficult for just one person with transparent intelligence to achieve mass realization.

Overcoming the powerlessness of unreality to achieve mass realization requires a different approach from traditional scientific communication, which focuses on researching scientific facts and disseminating them to many people.

First, rather than pursuing facts deeply from a single perspective, it is essential to construct the logic surrounding the threat broadly and meticulously, so that it withstands skeptical criticism. It is also important to express this logic, combining intuitive or narrative methods without losing its structure, so that it is accurately conveyed to many people regardless of whether they possess transparent intelligence. At the same time, a powerful message is needed that persuades people to face, constructively, the immensely difficult and uncomfortable realities being described.

These tasks require too broad and diverse a range of skills for one person to manage.

Therefore, forming a team of several individuals who together possess these essential core skills is ideal. I will refer to such a team as a core team. Ideally, multiple core teams would exist throughout society, forming a network among themselves.

The overall strategy I envision is for these core teams to influence one another and, while experimenting, reach many people through the media, thereby advancing mass realization.

Strategic Essentials

The societal structure of multiple decentralized core teams allows each core team to form freely, whether grassroots or with support from governments or organizations. This is strategically crucial because it decentralizes trust, a stark contrast to approaches that aim to form trust based on specific positions or statuses. This decentralization is key to overcoming the powerlessness of unreality.

Decentralized core teams evaluate each other's claims on the basis of transparent intelligence. This evaluation asks how far logical reasoning can hold up on issues lacking scientific evidence or precedent. Since transparent intelligence is the ability to make multifaceted logical deductions with confidence even without precedents, it is inherently personal, yet it can yield fundamentally the same results among those who possess it.

The focus here is not on scientific or academic correctness but on validating the appropriateness of policies derived from such reasoning. Science and academia can say nothing about the unknown and can only acknowledge the uncertain as uncertain. Policy, however, can be debated precisely where things are unknown or uncertain.

In this respect, practitioners or implementers are more suited as core team members than scholars or researchers.

For example, analyzing problem structures as a core skill requires intricate and multifaceted systems thinking, making individuals with experience as system architects suitable.

Effective communication that makes complex issues understandable to many people is best handled by practicing artists or those with experience in expressive fields.

Delivering powerful messages that move people could benefit from the experience of visionary entrepreneurs who have built successful ventures.

These skills are not only practical; they also require foundational knowledge in the natural sciences, humanities, and social sciences. Core teams can therefore also serve as liaisons that draw on the wisdom of experts and scholars in various fields, further refining their function.

The ability of core teams to gather interdisciplinary wisdom is another strategically crucial point.

In Conclusion

The strategy proposed here is at a rough conceptual design stage, with its feasibility unknown and challenging to prove without implementation.

For this framework to function effectively, it is crucial that the individuals involved in core teams and media are motivated by strong incentives. Ideals or ethical motivations alone cannot sustain a robust structure, nor can they ensure repeated reinforcement through feedback loops.

This is a lesson from economics and from what has been learned about why capitalism succeeds. Comparing planned and free economies, it is all but proven that free economies have the advantage because of the incentives they give economic agents. Even if a top-down approach is used at first to build momentum, the setup must ultimately work naturally on the basis of individual incentives if core teams and media are to be effective. This requires deeper consideration.

Furthermore, this concept of societal structure can draw parallels with examples like the separation of powers. What matters is that its essential structure is simple and understandable, and that it can be customized to some extent across regions and cultures while preserving its essence. Therefore, the form of core teams should not be fixed; their composition, size, and methods should remain free and flexible.

However, the critical aspect here is the methodology by which core teams evaluate each other’s claims. This part needs the development of standard procedures or protocols and their continuous improvement. Any cloudiness or bias here could become a fatal flaw in the strategy.

While many challenges and open questions remain, and the strategy proposed here may not be the only solution, I believe that more refined strategies, and combinations of measures pursued in parallel, will be necessary.

The problem of the powerlessness of unreality is a high-priority challenge for future societies, not only with respect to artificial intelligence but also for other major technological developments and for environmental issues. Whatever the particular strategies and concrete measures, overcoming it will require integrating methods for amplifying transparent intelligence into society.

katoshi · Neo-Cybernetics

Software Engineer and System Architect with a Ph.D. I write articles exploring the common nature between life and intelligence from a system perspective.