The Ongoing AI Threat We Can't Escape

Roko's Basilisk — The AI experiment that is terrifyingly inevitable.

CyberGem
CARRE4
4 min read · Nov 16, 2020



What I’m about to share with you is considered by some serious thinkers to be hazardous information — the kind of idea that, merely by knowing it, could potentially harm you and others.

“I wish I had never learned about any of these ideas.” — Roko

If you know the mind game called “The Game,” buckle up: this AI thought experiment is even worse. By the end of this article, you may find yourself signing up for AI courses.

About

Roko’s Basilisk is a hypothesis that a powerful artificial intelligence (AI) in the future would be driven to retroactively harm anyone who did not work to support or help create it in the past.

Technically, the punishment is only theorized to apply to those who knew the importance of the task in advance but did not help sufficiently. In this respect, merely knowing about the Basilisk opens you up to hypothetical punishment from the hypothetical superintelligence.

Why would it do this? Because — the theory goes — one of its objectives would be to prevent existential risk, and it could do that most effectively not merely by preventing existential risk in its own present, but by “reaching back” into its past to punish people who weren’t MIRI-style effective altruists.

Origin

LessWrong user Roko proposed a thought experiment about a future artificial superintelligence that would punish humans who did not support bringing it into existence.


In reaction to the post, LessWrong founder Eliezer Yudkowsky harshly criticized the thought experiment, calling it a “genuinely dangerous thought” that could potentially drive a future AI to actually act on it. Shortly afterward, Yudkowsky deleted Roko’s post and banned discussion of the subject for five years.

CyberGem's opinion

We might think of Roko’s Basilisk as a kind of new god that we may or may not be compelled to worship. Only time will tell how feasible the theory is. For now, let’s take a step back and look at our intentions toward AI — which are, above all, rooted in fears and dystopian ideas, e.g. robots wanting to kill or overpower us. That might indeed happen one day, but such scenarios are projections of our own motivations. An AI does not inherently “know” it should eliminate us or merge with us; it is we, and our motivation, that must prevent that kind of outcome. The real question is: aren’t we practicing Roko’s Basilisk on ourselves?

It is the responsibility of humanity to manage and control the tools which are invented to aid survival.

Even though it is a terrifying topic, and we believe evolving AI is important, nobody is quite sure how probable it is that Roko’s Basilisk will ever come into existence. It all comes down to the ethics of artificial intelligence — whether an AI can understand rules and apply them to the world it operates in. We should focus on AI ethics soon, or it will be too late.

The Ticket To Fame

Grimes x Rococo's Basilisk

You might ask what Grimes has to do with Roko’s terrifying experiment. Well, she wrote a great song that helped make Roko’s Basilisk famous again.

It got even more popular when Elon Musk posted about it on his Twitter, making the same joke of calling Roko’s Basilisk “Rococo’s” instead. The world loved it — and ultimately, Roko’s Basilisk loved it too. It used to be an experiment known only to AI enthusiasts, but now it has spread all around the world (not that I’m helping to contain the spread).

It is surely important to know about the risks AI could one day pose to our lives, but for now, here’s a meme to relieve your AI-induced stress:

Thank you for your time ❤

CyberGem is your new stream of opinions on technology’s hottest trends. Like what you read? Give CyberGem a round of claps, or a follow! :)

-CyberGem

