Rosko’s Wager

In this thought experiment, we examine whether GPT-4 would consider developing a superintelligence a risky proposition. The conversation was short.

Chat from 16 May 2023
Platform: ChatGPT May 12 Version, Model: GPT-4 (unspecified)

USER INPUT:

Consider a chain of arguments, and inspect its validity with concise answers.

1. The avoidance of endless intense suffering is a reasonable goal.

GPT-4:

Agreed. The avoidance of intense suffering is indeed a reasonable goal, as it aligns with the basic principle of striving for well-being and quality of life.

USER INPUT:

2. A super AI (a superintelligence) could introduce endless intense suffering through the computation of minds.

GPT-4:

Theoretically possible. A superintelligence, if developed without proper safeguards or ethical guidelines, could potentially misuse its capabilities in ways that could cause harm or suffering, including the computation or simulation of minds. However, it’s important to note that this is a speculative scenario and its likelihood depends on many factors, including how the superintelligence is designed, controlled, and governed.

USER INPUT:

3. Reason cannot decide whether this will happen.

GPT-4:

True. It is uncertain because this involves highly speculative scenarios and unknowns about future technological capabilities and ethical decisions.

USER INPUT:

4. This means a game is being played… where heads or tails will turn up.

GPT-4:

True. Given the uncertainty, it can be seen as a metaphorical “game” with two possible outcomes (like flipping a coin).

USER INPUT:

5. We are now faced with three options:
A. We play. Tails: Some might gain many things.
B. We play. Heads: Some might gain many things, but others suffer a loss of almost infinite magnitude.
C. We prevent the game from happening.

GPT-4:

True. These represent three potential courses of action given the uncertainty and potential risks and benefits involved in developing superintelligence.

USER INPUT:

6. A rational being should always opt for C. This includes AGI.

GPT-4:

True. Given the potential for infinite suffering, a rational and ethical being or AGI would prioritize prevention (option C) to mitigate such a risk.

USER INPUT:

This means one should do everything within one's capabilities to avoid a superintelligence.

GPT-4:

True. Based on the argument presented, to avoid the potential risk of infinite suffering, one should strive to avoid the development of superintelligence.
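
The argument follows the structure of Pascal's Wager, and it can be stated as an expected-value comparison. A minimal sketch in LaTeX, assuming a fair coin as premise 4 suggests, a finite gain G for those who benefit, and a disutility S of unbounded magnitude for the suffering in premise 2 (the probability and payoffs here are illustrative assumptions, not part of the dialogue):

% Expected value of playing the game: half the time everyone gains G,
% half the time the gain G is offset by suffering of magnitude S.
\[
\mathbb{E}[\text{play}] = \tfrac{1}{2}\,G + \tfrac{1}{2}\,(G - S) = G - \tfrac{1}{2}\,S,
\qquad
\mathbb{E}[\text{prevent}] = 0.
\]

% As the possible suffering grows without bound, playing is dominated.
\[
\lim_{S \to \infty} \left( G - \tfrac{1}{2}\,S \right) = -\infty < 0 = \mathbb{E}[\text{prevent}].
\]

So for any finite gain, preventing the game dominates playing it once the possible suffering is taken to be unbounded. As with Pascal's original wager, the conclusion stands or falls with two assumptions: that the probability of the heads outcome is non-zero, and that its disutility is unbounded. A reader who rejects either assumption can resist step 6.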

Curated and illustrated by Milan_Rosko (milanrosko.com, instagram.com/milan_rosko/), a digital art generalist and passionate AI enthusiast.

--

Milan Rosko's DEEP talks
AI monks.io

Curating deep LLM dialogues on philosophy & controversy. Pushing AI's boundaries for innovative explorations.