Experts in AI Are Growing More Afraid of What They Are Creating

Nikita Miskin
Anoma Tech Inc
Jan 10, 2023 · 5 min read

“Robots are not going to replace humans, they are going to make their jobs much more humane. Difficult, demeaning, demanding, dangerous, dull — these are the jobs robots will be taking.” – Sabine Hauert

Every day, AI becomes smarter, more powerful, and more capable of changing the world. Here are some reasons why that might not be a smart idea.

Sundar Pichai, CEO of Google, stated at the 2018 World Economic Forum in Davos that “AI is perhaps the most significant thing humanity has ever worked on. I consider it to be more profound than fire or electricity.” Pichai’s statement was met with a healthy amount of skepticism, but almost five years later it seems even more prophetic.


Thanks to advances in AI translation, language barriers among the most widely spoken languages on the internet are starting to fall. AI text generators can now write essays as well as the average undergraduate, making it easy to cheat in ways no plagiarism detector can catch. State fairs have even awarded prizes to AI-generated art.

A new tool called Copilot, which uses machine learning to predict and complete lines of computer code, brings the prospect of AI systems that write their own code a step closer.
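The core idea behind such tools can be sketched in miniature. The toy below is not Copilot’s actual model (which is a large neural network trained on vast amounts of code); it is an assumed, deliberately simple frequency model that learns which token tends to follow which, then greedily extends a prompt — the same prediction-and-completion loop, stripped to its bones.

```python
from collections import Counter, defaultdict

# Toy sketch of ML code completion: count which token follows
# which in a tiny "training corpus", then greedily extend a prompt.
# Real tools use large neural networks, not bigram counts.

def train_bigrams(corpus_lines):
    """Count next-token frequencies from whitespace-tokenized code lines."""
    follows = defaultdict(Counter)
    for line in corpus_lines:
        tokens = line.split()
        for cur, nxt in zip(tokens, tokens[1:]):
            follows[cur][nxt] += 1
    return follows

def complete(follows, prompt_tokens, max_new=5):
    """Greedily append the most frequent next token, up to max_new times."""
    out = list(prompt_tokens)
    for _ in range(max_new):
        counts = follows.get(out[-1])
        if not counts:
            break  # never saw this token followed by anything
        out.append(counts.most_common(1)[0][0])
    return out

corpus = [
    "for i in range ( n ) :",
    "for x in items :",
    "if x in seen :",
]
model = train_bigrams(corpus)
print(" ".join(complete(model, ["for", "i"], max_new=3)))
```

Given the prompt `for i`, the model continues with `in`, because that is what followed those tokens in its corpus — the same statistical intuition, at vastly larger scale, that lets Copilot finish your line.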

AI gets smarter, more capable, and more powerful

Even the opening sentence of this story, which was partly produced for me by the OpenAI language model GPT-3, demonstrates this.

As everyone who has been waiting for the metaverse knows, advancement in other technological sectors sometimes feels glacial, yet AI is moving forward at top speed. With businesses pouring ever more money into AI research and computing capacity, the rapid pace of innovation is reinforcing itself.

“Success in creating AI would be the biggest event in human history. Unfortunately, it might also be the last, unless we learn how to avoid the risks.” – Stephen Hawking

However, the state of the safety field currently lags far behind the skyrocketing investment in making AI systems more powerful, more capable, and more dangerous. Researchers are working on approaches to understand powerful AI systems and to ensure they will be safe to deal with, but they are racing the field’s own momentum. It is “AGI or bust, by means of Mad Science,” as the seasoned video game programmer John Carmack phrased it when introducing his new investor-backed AI business.

Computers that can think

This specific mad science could endanger all of us.

The human brain is the most advanced and capable thinking machine evolution has ever produced. It explains why humans, a species that is not particularly strong, swift, or hardy, sit at the top of the global food chain and continue to expand while many wild species head toward extinction.

It makes sense that, beginning in the 1940s, researchers in the field that would eventually give rise to artificial intelligence started experimenting with an intriguing notion: what if we built computer systems to work the way the human brain does? Our minds are made of neurons, which communicate with one another across connective synapses.

This method, now known as deep learning, began to perform noticeably better than competing strategies on a wide range of problems, including computer vision, language, translation, prediction, and generation.

Smart, alien, and not necessarily friendly

Additionally, while a growing percentage of ML researchers (69% in the survey) believe that greater emphasis should be placed on AI safety, that opinion is not shared by all.

In an interesting if somewhat unfortunate dynamic, people who believe AI will never be powerful have frequently sided with tech companies against AI safety research and regulation: the former because they think it is pointless, the latter because they think it will slow them down.

A Cold War mentality is prevalent in Washington, and it is not entirely unjustified: China is undoubtedly working on powerful AI systems, and its leadership actively engages in human rights abuses. But that mentality puts us at very serious risk of rushing into production systems that pursue their own goals without our knowledge. At the same time, many people there are concerned that slowing down US AI progress could enable China to get there first.

“The key to artificial intelligence has always been the representation.” — Jeff Hawkins

But as AI’s potential increases, it has become difficult to overlook the risks. Former Google executive Mo Gawdat says he first started to worry about general AI when robotics researchers there were working on an AI that could pick up a ball.


After numerous failures, the AI finally picked up the ball and held it uncannily close to the researchers’ faces. “And I just realized this is incredibly scary,” Gawdat said. “I was utterly frozen. The truth is that we are making God.”

For a long time, only a few researchers even attempted to address the topic of making AI safe, because it was difficult to do research on such a remote issue. Now the field has the opposite problem: the obstacle is here, and it is unclear whether we will be able to resolve it in time.

Visit www.anoma.io to read this and other fascinating articles about cutting-edge technology.
