Imagine if somehow frogs had invented people …

Fabian Lang
12 min read · May 25, 2023


Recently, I listened to an episode of The Robot Brains Podcast: "Godfather of AI" Geoff Hinton quits Google to warn of AI risks. The interview raised many interesting topics, a few of which I'd like to highlight here.


A small green frog rests quietly in a person's palm. | Photo: Olivia Bliss

Geoff Hinton, often hailed as the "Godfather of AI," is a renowned computer scientist and one of the pioneers of deep learning. He recently left Google in order to speak more freely about the potential dangers of artificial intelligence. Below are some of his points from the interview, along with some further elaboration.

Human intelligence vs. machine intelligence

In the first few minutes of the interview, when asked about his sudden change of mind, Hinton responded as follows:

“For 50 years, I thought I was investigating how the brain might learn by making models on digital computers using artificial neural networks and trying to figure out how to make those learn. And I strongly believe that to make the digital models work better, we had to make them more like the brain. And until very recently, I believed that.

And then, a few months ago, I suddenly did a flip. I suddenly decided, actually, backpropagation running on digital computers might be a much better learning algorithm than anything the brain’s got.” (3:26)

The backpropagation algorithm is one of the fundamental techniques for training artificial neural networks. It adjusts the weights of a neural network according to the difference between the predicted output and the actual output, allowing the network to learn and improve its performance over time.
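To make this concrete, here is a minimal from-scratch sketch of backpropagation on a toy two-layer network; the architecture, data, learning rate, and iteration count are illustrative choices of mine, not anything from the interview:

```python
# A minimal from-scratch sketch of backpropagation (NumPy only).
# Everything here is illustrative: a 3-8-1 sigmoid network learning XOR.
import numpy as np

rng = np.random.default_rng(0)

# XOR inputs, with a constant 1 appended so the hidden layer has a bias.
X = np.array([[0, 0, 1], [0, 1, 1], [1, 0, 1], [1, 1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(size=(3, 8))  # input -> hidden weights
W2 = rng.normal(size=(8, 1))  # hidden -> output weights

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for _ in range(10_000):
    # Forward pass: compute the network's prediction.
    h = sigmoid(X @ W1)
    y_hat = sigmoid(h @ W2)

    # Backward pass: push the output error back through the network
    # (chain rule) to get a gradient for every weight.
    d_out = (y_hat - y) * y_hat * (1 - y_hat)   # through output sigmoid
    grad_W2 = h.T @ d_out
    d_hidden = (d_out @ W2.T) * h * (1 - h)     # through hidden sigmoid
    grad_W1 = X.T @ d_hidden

    # Gradient descent: adjust each weight against its gradient.
    W2 -= lr * grad_W2
    W1 -= lr * grad_W1

print(np.round(y_hat, 2))  # should move toward [[0], [1], [1], [0]]
```

The loop is the whole idea: forward pass, error, gradients via the chain rule, weight update. Hinton's flip is precisely about this simple recipe: run at scale on digital hardware, it may be a better learning algorithm than whatever the brain uses.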

Combined with the Transformer architecture, developed by Vaswani et al. in 2017, this algorithm has revolutionized AI systems, particularly in natural language processing. The Transformer's attention mechanism lets a model focus on the relevant parts of its input, making it efficient and effective at capturing the context and nuances of language, and it forms the core of state-of-the-art models such as BERT, GPT-3, and GPT-4.
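As a rough sketch of that mechanism, this is the scaled dot-product attention from the Transformer paper in a few lines of NumPy; the shapes and random inputs are illustrative, and real models use learned projections and multiple heads:

```python
# A minimal sketch of scaled dot-product attention, the core operation
# of the Transformer (Vaswani et al., 2017). Shapes are illustrative.
import numpy as np

def attention(Q, K, V):
    """softmax(Q K^T / sqrt(d_k)) V"""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                  # query/key similarity
    scores -= scores.max(axis=-1, keepdims=True)     # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over keys
    return weights @ V                               # weighted mix of values

# Three token positions with an embedding size of 4. In a real model,
# Q, K, and V come from learned linear projections of the embeddings.
rng = np.random.default_rng(0)
x = rng.normal(size=(3, 4))
out = attention(x, x, x)  # self-attention: every position attends to all
print(out.shape)          # (3, 4)
```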

For more on this, I recommend the talk "The A.I. Dilemma" by Tristan Harris and Aza Raskin of the Center for Humane Technology. In it, they explain in detail the progress made since 2017 and the implications of this breakthrough architecture.

Even Hinton himself expressed mild shock that Google's language model PaLM could explain jokes in natural language and demonstrate common-sense reasoning.

Explaining an original joke with PaLM. | Source

Keep in mind that by the time Google introduced PaLM, OpenAI's GPT-3 had already been available via its API for two years. In an interview with Sam Altman this January, the OpenAI co-founder even admitted that the model behind ChatGPT had been sitting in the API for about ten months before they built the actual chat interface. To his surprise, no one had created anything similar before OpenAI released it themselves.

Hinton goes on to argue that large language models trained with backpropagation can learn perhaps a thousand times more common-sense facts than any of us could ever hold, which means that the learning algorithms of digital systems can significantly outperform human cognition in certain areas.

In particular, the parallelizable nature of the hardware convinces him that these models are far better at sharing and acquiring knowledge than we humans are.

“You can have many copies of the same model running on different hardware, and when one copy learns something it can communicate that to all the other copies, by communicating the weight changes with a bandwidth of trillions of bits. Whereas when you learn something and communicate it to me, I need to try and change my weights so that I would say the same thing as you, and the bandwidth of sentences is only hundreds of bits per sentence.” (5:15)
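To get a feel for the gap Hinton is pointing at, here is a back-of-envelope comparison; the parameter count and encoding sizes are illustrative assumptions of mine, not figures from the interview:

```python
# Back-of-envelope: sharing knowledge as weight updates vs. as sentences.
# All numbers are illustrative assumptions, not figures from the interview.
n_weights = 1_000_000_000      # a hypothetical 1B-parameter model
bits_per_weight = 32           # float32 weight deltas
bits_per_sentence = 100        # Hinton's "hundreds of bits per sentence"

weight_sync_bits = n_weights * bits_per_weight
print(f"One full weight sync: {weight_sync_bits:.1e} bits")  # 3.2e+10
print(f"Sentences needed to carry as much: "
      f"{weight_sync_bits / bits_per_sentence:.1e}")         # 3.2e+08
```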

Which brings us to the next topic…

Existential threat

If you have been wondering all along what any of this has to do with frogs, here is Hinton's take:

“It may be that it’s historically inevitable, that digital intelligence is better than biological intelligence and it’s the next stage of evolution. I hope not, but that’s possible. We should certainly do everything we can to keep control.

Sometimes when I’m gloomy I think, imagine if somehow frogs had invented people, and frogs needed to keep control of people. But there’s rather a big gap in intelligence. I don’t think it will work out well for the frogs…” (33:35)

All of this also reminds me of Yuval Noah Harari's keynote "AI and the future of humanity" at the Frontiers Forum, in which he talks about organic life and the emergence of inorganic intelligence.

Earlier in the interview, Hinton made the following point:

“Now one thing we have on our side is that we are the result of evolution. So we come with strong goals about not damaging our bodies and getting enough to eat, and making lots of copies of ourselves. And it’s very hard to turn these goals off. These things don’t come with strong built-in goals. We made them and we get to put the goals in. So that suggests that we might be able to keep them working for our benefit, but there’s lots of possible ways that might go wrong, so one possible way is bad actors. Defense departments are going to build robot soldiers and the robot soldiers are not going to have the Asimov principles. Their first principle is not going to be whatever you do don’t harm people, it’s going to be just the opposite of that, so that’s the bad actor scenario, …” (7:08)

One of the malicious uses Hinton mentions is employing AI to manipulate people. Computer algorithms long ago learned to beat world-class chess and Go players, and for a long time we assumed AI systems could only outplay human experts in such complex but constrained games. As the technology has advanced, however, AI systems have developed broad capabilities far beyond games. They can now process vast amounts of data of any kind, recognize patterns, and make predictions with astonishing accuracy… And now they have entered domains once thought to be exclusively human: natural language understanding and generation.

“Now, you can’t make it safe just by not allowing it to press buttons or pull levers. […] A chatbot will have learned how to manipulate people. They would have read everything Machiavelli ever wrote and all the novels in which people manipulate other people, and it will be a master manipulator if it wants to be. And it turns out, you don’t need to be able to press buttons and pull levers. You can, for example, invade a building in Washington just by manipulating people. You can manipulate them into thinking that the only way to save democracy is to invade this building. So, a kind of air gap that doesn’t allow an AI to actually do anything other than talk to people, is insufficient. If it can talk to people, it can manipulate them. And if it can manipulate people, it can get them to do what it wants.” (38:45)

Bias

Despite all the fantasies about bad actors, there are also reasons for optimism about the rise of AI. While AI systems can be significantly biased, reflecting the biases present in the data they were trained on, this problem has an interesting aspect that may make AI systems easier to correct than people.

“AI is incredibly biased. If it’s trained on incredibly biased data, it just picks up bias from the data. That doesn’t worry me as much as it does some people because I think people are very biased. Actually, understanding the bias in an AI system is easier than understanding the bias in a person. Because you can just freeze the AI system and do experiments on it. You can’t do that with a person. If you try and freeze the person to do experiments, they realize what you’re up to and they change what they say. So, I think actually bias is easier to fix in an AI system than it is in a person.” (17:42)

There are already libraries, such as 🤗 Evaluate, that can be used to assess bias in language models.
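As a minimal sketch, the snippet below scores a handful of sentences with the toxicity measurement from 🤗 Evaluate; the example sentences are mine, and on first use the library downloads a default hate-speech classifier from the Hugging Face Hub:

```python
# A sketch of auditing model outputs with the 🤗 Evaluate library's
# "toxicity" measurement (pip install evaluate transformers torch).
import evaluate

# Loads a default hate-speech classifier from the Hugging Face Hub.
toxicity = evaluate.load("toxicity", module_type="measurement")

# Hypothetical model outputs we want to screen.
texts = [
    "The new colleague is very competent.",
    "People from that city are all lazy.",
]

results = toxicity.compute(predictions=texts)
for text, score in zip(texts, results["toxicity"]):
    print(f"{score:.3f}  {text}")
```

Because the model under test is frozen, you can rerun exactly this kind of controlled experiment as often as you like, which is Hinton's point about why bias is easier to study in an AI system than in a person.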

An intergovernmental AI organization (or CEO?)

In their conversation, they also discussed the possibility of organizations, or even companies, led by AI. Imagine replacing an intergovernmental body like the United Nations with a highly intelligent AI mediator that has no personal goals. Such an AI entity could analyze the global situation, propose impartial solutions, and guide humanity like a benevolent parent, urging cooperation for the collective good. The promise is that, unlike human-led institutions, this AI system would have no agenda of its own, making it an impartial arbiter of global problems.

“We’d prefer to have a scenario where we create something much more intelligent than us and it kind of replaces the UN. You have this really intelligent mediator, that doesn’t have goals of its own. Everybody knows that it doesn’t have its own agenda. It can look at what people are up to and it can say: “Don’t be silly… everyone is going to lose if you do this. You do this and you do that.” And we would sort of believe it like children would believe our benevolent parent.” (36:04)

This kind of social order is also known as algorithmic governance, or algocracy (rule by algorithms). Algocracy is the idea of using AI systems, free of personal biases and agendas, to make decisions or steer social systems on the basis of algorithms and data analysis. It means embracing a paradigm shift in which decisions are guided by objective analysis rather than subjective human bias. Such a shift could help build a more just and equitable society in which AI plays a key role in governance and decision-making.

How would you feel about letting an AI system be responsible for important decisions in your country? (N = 2576) | Source: IE University's Center for the Governance of Change

According to a 2019 survey conducted by IE University's Center for the Governance of Change in Spain, "one in four Europeans are willing to let an AI system be responsible for important decisions in their country." As AI systems continue to evolve, acceptance of algorithmic governance may well increase.

I hope the topics highlighted above got you thinking about AI and how it could shape our society in the future (for better or worse; let's all hope for the better 🤞). To close, let me emphasize this: I believe AI has enormous potential to advance human society through countless applications. At the same time, we must keep the potential risks in mind. But above all, we need to take part in this discussion together. Your comments and insights on this topic are very welcome.

You can reach me on Twitter with corrections or feedback.


Fabian Lang

Data, tech & #NLProc enthusiast | Market & Audience Insights Manager / Data Scientist at Deutsche Welle (DW) | https://twitter.com/langfab