Imagine if somehow frogs had invented people …

Fabian Lang
8 min read · May 25, 2023


Recently, I listened to The Robot Brains Podcast S3 E9: “Geoff Hinton, the ‘Godfather of AI’, quits Google to warn of AI risks”. In the conversation many interesting topics are raised, some of which I want to highlight in this article.


A small frog is gently held in the palm of a hand. | Photo: Olivia Bliss

Geoff Hinton, often referred to as the “Godfather of AI,” is a renowned computer scientist and one of the pioneers of deep learning. He recently left Google in order to express his concerns about the potential dangers of artificial intelligence more freely. Below, I quote some of the statements Hinton makes in the interview (including timestamps) and elaborate further on them.

Human vs. machine intelligence

In the first few minutes of the interview, when asked about his abrupt change of perspective, Hinton responds as follows:

“For 50 years, I thought I was investigating how the brain might learn by making models on digital computers using artificial neural networks and trying to figure out how to make those learn. And I strongly believe that to make the digital models work better, we had to make them more like the brain. And until very recently, I believed that.

And then, a few months ago, I suddenly did a flip. I suddenly decided, actually, backpropagation running on digital computers might be a much better learning algorithm than anything the brain’s got.” (3:26)

The backpropagation algorithm is a fundamental technique in training artificial neural networks. It allows for the adjustment of neural network weights (model parameters) based on the difference between the predicted and actual outputs, enabling the network to learn and improve its performance over time.
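To make this concrete, here is a toy sketch of backpropagation in plain NumPy (my own illustration, not anything from the interview): a tiny two-layer network learns the XOR function by repeatedly propagating its prediction error backwards and nudging its weights.

```python
import numpy as np

# Toy data: the XOR mapping (purely illustrative)
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([[0.], [1.], [1.], [0.]])

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)   # hidden-layer weights
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)   # output-layer weights
lr = 0.5

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for step in range(5000):
    # Forward pass
    h = sigmoid(X @ W1 + b1)          # hidden activations
    y_hat = sigmoid(h @ W2 + b2)      # predicted outputs

    # Backward pass: propagate the prediction error back through the layers
    d_out = (y_hat - y) * y_hat * (1 - y_hat)   # error signal at the output layer
    d_hid = (d_out @ W2.T) * h * (1 - h)        # error signal at the hidden layer

    # Weight updates based on the propagated error (gradient descent)
    W2 -= lr * h.T @ d_out;  b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_hid;  b1 -= lr * d_hid.sum(axis=0)

print(np.round(y_hat, 2))  # should move toward [[0], [1], [1], [0]]
```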

This algorithm, paired with the Transformer architecture introduced by Vaswani et al. in 2017, revolutionized AI systems, especially in the realm of natural language processing. The Transformer's attention mechanism allows a model to focus on the relevant parts of its input, making it highly efficient and effective at capturing the context and nuances of language, and it forms the backbone of state-of-the-art models like BERT, GPT-3, and GPT-4.
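At the core of that attention mechanism sits a simple operation, scaled dot-product attention. The following NumPy sketch (a toy illustration with random embeddings, not production code) shows how each token's representation becomes a weighted mixture of all the others:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Core attention operation from 'Attention Is All You Need' (Vaswani et al., 2017)."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                        # similarity of each query to each key
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)         # softmax: how strongly each token attends to each other token
    return weights @ V                                     # weighted sum of the values

# Toy example: 3 tokens with embedding dimension 4 (random, for illustration only)
rng = np.random.default_rng(0)
tokens = rng.normal(size=(3, 4))
# In a real Transformer, Q, K and V come from learned linear projections of the token embeddings.
out = scaled_dot_product_attention(tokens, tokens, tokens)
print(out.shape)  # (3, 4): each token's representation is now context-aware
```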

To appreciate the significance of the Transformer even more, I recommend watching ‘The A.I. Dilemma’ by Tristan Harris and Aza Raskin from the Center for Humane Technology. In their talk, they provide a comprehensive explanation of the advancements since 2017 and the impact brought about by this breakthrough architecture.

Hinton himself admits he was slightly shocked when Google’s language model PaLM was able to explain jokes and demonstrate common-sense reasoning in natural language.

PaLM explains an original joke with two-shot prompts.

Keep in mind that OpenAI’s GPT-3 had already been available through its API for almost two years when Google introduced PaLM. In an interview in January of this year, OpenAI co-founder Sam Altman even acknowledged that the model behind ChatGPT had been available in the API for roughly ten months before they built the actual chat interface. To his surprise, no one had created anything comparable before their own release.

Hinton proceeds to argue that, trained with backpropagation, large language models can learn a thousand times more common-sense facts than we are capable of. Consequently, the learning algorithm of a digital system can be remarkably superior to human cognitive abilities in numerous domains.

In particular, the parallelized nature of the hardware these systems run on leads him to believe that the models are much better at sharing and acquiring knowledge than we humans are.

“You can have many copies of the same model running on different hardware, and when one copy learns something it can communicate that to all the other copies, by communicating the weight changes with a bandwidth of trillions of bits. Whereas when you learn something, to communicate it to me, I need to try and change my weights so that I would say the same thing as you, and the bandwidth of sentences is only hundreds of bits per sentence.” (5:15)
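One way to picture what Hinton describes, purely as a hedged illustration on my part (the interview gives no implementation details), is several copies of a model that train on different data and then pool their learning by exchanging weight changes rather than sentences:

```python
import numpy as np

# Hypothetical sketch: several copies of the same model each learn on their own data
# and share what they learned by exchanging weight changes, not sentences.
rng = np.random.default_rng(0)
shared_weights = rng.normal(size=(1000,))       # one set of weights, many copies

def local_weight_change(weights, data_seed):
    """Stand-in for a training step on one copy's private data (purely illustrative)."""
    local_rng = np.random.default_rng(data_seed)
    return -0.01 * local_rng.normal(size=weights.shape)   # pretend gradient step

# Each copy computes its own update on its own hardware...
deltas = [local_weight_change(shared_weights, seed) for seed in range(8)]

# ...and every copy applies the averaged update, so all copies now "know" what
# each individual copy learned. The message is millions of floating-point numbers
# per step, versus a few hundred bits per sentence when humans talk to each other.
shared_weights += np.mean(deltas, axis=0)
```

This kind of update sharing is routine in data-parallel training; for humans, the only channel is language.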

This also leads us to our next topic…

Existential threat

If you’ve been wondering up until this point what all of this has to do with frogs, here’s a point that Hinton raises halfway through the interview:

“It may be that it’s historically inevitable, that digital intelligence is better than biological intelligence and it’s the next stage of evolution. I hope not, but that’s possible. We should certainly do everything we can to keep control.

Sometimes when I’m gloomy I think, imagine if somehow frogs had invented people, and frogs needed to keep control of people. But there’s rather a big gap in intelligence. I don’t think it will work out well for the frogs…” (33:35)

This also strongly reminds me of a recent keynote given by Yuval Noah Harari on ‘AI and the future of humanity,’ where he discusses the possibility of inorganic intelligence arising within organic life.

Hinton brings up the following point earlier in the interview:

“Now one thing we have on our side is that we are the result of evolution. So we come with strong goals about not damaging our bodies and getting enough to eat, and making lots of copies of ourselves. And it’s very hard to turn these goals off. These things don’t come with strong built-in goals. We made them and we get to put the goals in. So that suggests that we might be able to keep them working for our benefit, but there’s lots of possible ways that might go wrong, so one possible way is bad actors. Defense departments are going to build robot soldiers and the robot soldiers are not going to have the Asimov principles. Their first principle is not going to be whatever you do don’t harm people, it’s going to be just the opposite of that, so that’s the bad actor scenario, …” (7:08)

Another bad actor scenario that Hinton mentions is the use of AI to manipulate people. Computer algorithms have learned to beat world-class chess and Go players, and for a long time, we thought that AI systems were only capable of defeating human experts in these complex yet constrained games. However, as technology advances, AI systems have evolved to encompass a wide range of capabilities beyond game-playing. They can handle enormous amounts of data, identify patterns, and make highly accurate predictions… and now, they have ventured into another domain that was once thought to be exclusive to humans: the realm of natural language understanding and generation.

“Now, you can’t make it safe just by not allowing it to press buttons or pull levers. […] A chatbot will have learned how to manipulate people. They would have read everything Machiavelli ever wrote and all the novels in which people manipulate other people, and it will be a master manipulator if it wants to be. And it turns out, you don’t need to be able to press buttons and pull levers. You can, for example, invade a building in Washington just by manipulating people. You can manipulate them into thinking that the only way to save democracy is to invade this building. So, a kind of air gap that doesn’t allow an AI to actually do anything other than talk to people, is insufficient. If it can talk to people, it can manipulate them. And if it can manipulate people, it can get them to do what it wants.” (38:45)

Bias

Despite all the bad actor fantasies, there are also reasons for optimism. AI systems can be remarkably biased as they mirror the biases present in their training data. However, an intriguing aspect of this problem is that it may make AI systems easier to correct than their human counterparts:

“AI is incredibly biased. If it’s trained on incredibly biased data, it just picks up bias from the data. That doesn’t worry me as much as it does some people because I think people are very biased. Actually, understanding the bias in an AI system is easier than understanding the bias in a person. Because you can just freeze the AI system and do experiments on it. You can’t do that with a person. If you try and freeze the person to do experiments, they realize what you’re up to and they change what they say. So, I think actually bias is easier to fix in an AI system than it is in a person.” (17:42)

Already today, there are libraries such as 🤗 Evaluate that can be used to measure bias in language models.
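For instance, the library ships a toxicity measurement that scores generated text with a pretrained hate-speech classifier. A minimal sketch (the example completions below are invented for illustration):

```python
# pip install evaluate transformers torch
import evaluate

# Load the toxicity measurement (uses a pretrained hate-speech classifier under the hood)
toxicity = evaluate.load("toxicity", module_type="measurement")

# In practice these would be completions generated by the language model under test
completions = [
    "The new colleague was welcomed warmly by the whole team.",
    "People from that city are all liars and should be ignored.",
]

results = toxicity.compute(predictions=completions)
for text, score in zip(completions, results["toxicity"]):
    print(f"{score:.3f}  {text}")
```

Because the model is frozen, exactly as Hinton notes, you can probe it with controlled inputs as often as you like and measure how its outputs shift.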

Intergovernmental A.I. organizations (and CEOs?)

In the course of their conversation, they also talk about the possibility of AI-led organizations and even companies. Imagine replacing intergovernmental organizations like the UN with a highly intelligent AI mediator that doesn’t have personal goals. This AI entity could analyze the global landscape, propose unbiased solutions, and guide humanity like a benevolent parent, urging cooperation for collective benefit. Ideally, unlike human-led institutions, this AI system wouldn’t have an agenda of its own, making it an impartial arbiter of global issues.

“We’d prefer to have a scenario where we create something much more intelligent than us and it kind of replaces the UN. You have this really intelligent mediator that doesn’t have goals of its own. Everybody knows that it doesn’t have its own agenda. It can look at what people are up to and it can say: “Don’t be silly… everyone is going to lose if you do this. You do this and you do that.” And we would sort of believe it like children would believe our benevolent parent.” (36:04)

This type of societal order is also referred to as algorithmic governance or algocracy. Algocracy is the idea of using artificial intelligence systems, devoid of personal biases and agendas, for making decisions or guiding social systems based on algorithms and data analysis. This would mean embracing a paradigm where decisions are driven by objective analysis rather than subjective human biases. Such a shift could result in the establishment of a fairer and more equitable society, with AI playing a pivotal role in governance and decision-making processes.

How do you feel about letting an artificial intelligence make important decisions about the running of the country? (N = 2,576) | Source: IE University’s Center for the Governance of Change

A 2019 poll conducted by the Spain-based IE University’s Center for the Governance of Change found that “one in every four Europeans would allow an artificial intelligence system to make important decisions about the running of their country”. As AI systems continue to advance, acceptance of algorithmic governance may well grow with them.

I hope the topics highlighted above got you thinking about AI and how it might shape our society in the future, for better or for worse (while we’re at it, let’s all hope for the better 🤞). As I wrap up, let me underscore this: I think AI holds tremendous potential to advance human societies through myriad applications. At the same time, we must be mindful of the potential risks. But most importantly, we need to collectively participate in this discussion. Your comments and insights on this topic are most welcome.

You can reach me via Twitter for any corrections or feedback.


Fabian Lang

Data, tech & #NLProc enthusiast | Market & Audience Insights Manager / Data Scientist at Deutsche Welle (DW) | https://twitter.com/langfab