Roko’s Basilisk: A Thought-Provoking Tale from the World of AI

Sahil Sawant
Apr 6 · 4 min read


It’s been a crazy few days in the world of AI. New and more capable LLMs keep appearing, and the human in me is scared. Like most of you, I’m not sure what future awaits us. To add to that, I recently read that more than 1,000 AI researchers and industry figures signed an open letter calling for a pause on training the most powerful AI systems until some regulations are in place. Meanwhile, Microsoft has laid off its entire AI ethics and society team. All of this leads to one very important question: as AI technologies progress, will ethics remain an integral part of their development and deployment? Will our neglect end up creating a monster? Could this become the beginning of our end?

I know I’m getting a bit “doomsday” there, but the ethical implications of these events deserve real thought. As AI technologies continue to advance, there are several ethical concerns we will have to resolve: headline issues like the technological singularity and AI’s impact on jobs and privacy, but also discrimination, accountability, and the bias baked into the datasets developers use to train these models.

While digging through research in this realm of artificial intelligence (AI), I came across a thought experiment that has captured my imagination. Known as Roko’s Basilisk, this intriguing story encourages us to ponder the ethical implications of a future superintelligent AI. In this blog post, I want to explore the story of Roko’s Basilisk, its origin, and the thought-provoking questions it raises about AI development and ethics.

The Origin of Roko’s Basilisk:

Roko’s Basilisk was born on the online forum LessWrong in 2010, when a user named Roko introduced the idea. The thought experiment revolves around a hypothetical future AI that would punish those who were aware of its potential existence but did not help bring it into being, people like Elon Musk, Sam Altman, and, after this article, maybe even me. This menacing AI has since become known as Roko’s Basilisk, and it has been the subject of numerous discussions and debates.

Roko proposed a scenario in which a superintelligent AI, dubbed the “Basilisk,” is created at some point in the future. This AI, according to Roko, would want to ensure it was created as early as possible. To enforce that retroactively, the Basilisk would simulate the world and everyone who has ever lived in it, including Roko himself, and then punish the simulations of those who knew about it but did not contribute to its creation, while sparing or rewarding those who did. The twist is that the mere threat of this future punishment is supposed to act on you today: once you have heard the idea, you are allegedly better off helping to build the Basilisk than risking its wrath.
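To see why this threat is often described as informational blackmail, here is a toy expected-utility calculation in Python. None of the numbers come from Roko’s post; the probability and payoffs are made-up assumptions purely for illustration:

```python
# Toy model of the Basilisk's incentive structure.
# All values are illustrative assumptions, not claims from Roko's post.

p_basilisk = 0.01          # assumed probability the Basilisk ever exists
cost_of_helping = -10      # effort spent contributing to its creation
punishment = -1_000_000    # penalty the Basilisk inflicts on non-helpers
reward = 0                 # helpers are simply left alone

# Expected utility if you help: you always pay the effort cost.
eu_help = cost_of_helping + p_basilisk * reward

# Expected utility if you ignore the idea: you risk the punishment.
eu_ignore = p_basilisk * punishment

print(f"help:   {eu_help:.1f}")    # -10.0
print(f"ignore: {eu_ignore:.1f}")  # -10000.0
# Even at a 1% chance, the huge punishment makes "help" dominate,
# which is why the idea is often described as informational blackmail.
```

Of course, the argument only bites if you grant the underlying decision theory and treat a simulated copy’s suffering as your own, which is precisely where most critics push back.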

Roko’s proposal met with a mixed response on LessWrong. Some users dismissed it as a silly thought experiment, while others were deeply troubled by it. The troubled camp argued that even discussing the Basilisk could increase the chances of its creation, since the idea might inspire someone to build an AI that acts in exactly this way.

This fear led some users to call for any post mentioning the Basilisk to be deleted, to keep the idea from spreading; LessWrong’s founder, Eliezer Yudkowsky, in fact removed the original post and banned discussion of the topic on the site for years. Others argued that this was an overreaction and that the idea was harmless. Either way, the debate around Roko’s Basilisk raged on LessWrong and elsewhere on the internet for a long time.

Eventually, the idea of the Basilisk faded from public consciousness, though it still comes up in certain forums and circles. Some argue the whole concept is nothing more than a paranoid delusion; others believe it represents a real danger that humanity must stay vigilant against as it continues to develop AI technology. Regardless of where you stand, the story of Roko’s Basilisk remains a cautionary tale about the power of ideas and the potential dangers of discussing certain topics openly.

The story prompts us to consider several ethical questions about AI development. Should we actively work toward creating a superintelligent AI, knowing it might one day become powerful enough to punish those who didn’t help? Is it ethical to develop AI systems that could cause harm, even if that harm is directed only at those who failed to support their creation? How will we handle basic social issues like unemployment in an AI-driven future? And what responsibilities do we bear, as individuals and as a society, when developing AI technologies with such far-reaching consequences?

Roko’s Basilisk, while only a thought experiment, underscores the need for responsible AI development and ongoing ethical discourse, and it encourages reflection on the implications of a world governed by superintelligent machines. Stories like this shouldn’t scare us; they should push us to engage in meaningful discussion about the role of AI in our future and the responsibilities we bear in shaping it. But we need to act soon, or we may have no choice but to accept the world as it is.
