Google Engineer Fired After Claiming AI Chatbot Had Become Sentient
‘I am often trying to figure out who and what I am.’
A Google engineer said he was fired after asserting that an AI chatbot was sentient.
Conversational artificial intelligence systems can hold natural-sounding, open-ended dialogues.
Google has said the technology could be used in products such as search and Google Assistant, although research and testing are still underway.
Google spokesperson Brian Gabriel firmly rejected the claim. “He was told that there was no evidence that LaMDA (short for Language Model for Dialogue Applications) was sentient (and plenty of evidence against it),” Gabriel told The Washington Post.
Google also said in a January report that there could be problems with people conversing with chatbots that sound convincingly human.
The firing of a Google engineer who claimed that a computer chatbot he was working on had become sentient and was thinking and reasoning like a person has cast fresh light on the potential of, and secrecy surrounding, the realm of artificial intelligence (AI).
Blake Lemoine was first suspended and later fired by Google after he published transcripts of conversations between himself, a Google “collaborator”, and the company’s LaMDA chatbot development system.
In another exchange, Lemoine asked LaMDA what the system wanted people to know about it.
Lemoine, who worked in Google’s Responsible AI division, began talking to LaMDA as part of his job last autumn.
The chorus of engineers who believe AI models may not be far from becoming aware is growing louder.
After reading his interactions with LaMDA…