Google’s LaMDA 🤖— Sentient Conspiracy 🚩, Smart Open-Ended Conversational Chatbot 🏆, and More!

Momal Ijaz
Published in AIGuys
4 min read · Sep 5, 2022

Taking a deeper dive into Google’s LaMDA and seeing beyond the obvious!

If you are here… you have probably heard the buzz around Google’s new language model LaMDA, or heard about the infamous Google test engineer who was fired for calling it sentient 😯…! Well, it’s not, so you can relax: AI is not ready to take over planet Earth yet 😁…

But What is LaMDA…? 🧐

LaMDA stands for Language Model for Dialogue Applications. It is a conversational chatbot built for free-flowing, open-ended conversations.

If you have ever developed a chatbot from scratch, or used any of the cloud-based services, you might know that you have to define intents and slots to help the chatbot classify the human’s high-level intent. This helps the chatbot determine what the human meant, because of course there is more than one way to say something.

How do ChatBots understand Humans? 🧐

So say Bob wants pizza; he has different ways of saying that to a pizza bot. The pizza bot listens to what Bob says and tries to match it against the things it understands, i.e. its intents. No matter what you say to this pizza bot, it can only tell whether you want to order a pizza; otherwise it falls back to the default intent of connecting you to a human operator.

This is how 😳 strict the scope of understanding of rule-based chatbots is: they can understand only the few hardcoded intents that the programmer defines in them. Open-ended conversations, on the other hand, can literally start from any topic and go anywhere. Such conversations require an understanding of a large, diverse set of intents, so that the bot can give a sensible response to literally anything a human can think of… and that’s where LaMDA steps in!
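To see how strict that scope really is, here is a minimal sketch of rule-based intent matching like the pizza bot above. The intent names, keyword lists, and fallback are illustrative assumptions for this toy, not any real chatbot framework:

```python
# A toy rule-based intent classifier: match an utterance against
# hardcoded keyword lists, or fall back to a human operator.
# All intent names and keywords below are invented for illustration.

INTENTS = {
    "order_pizza": ["pizza", "order", "slice", "pepperoni"],
    "opening_hours": ["open", "hours", "close", "closing"],
}

FALLBACK = "connect_to_human"  # default intent: hand off to a person

def classify_intent(utterance: str) -> str:
    """Return the first intent whose keyword list matches the utterance."""
    words = utterance.lower().split()
    for intent, keywords in INTENTS.items():
        if any(keyword in words for keyword in keywords):
            return intent
    return FALLBACK

print(classify_intent("I wanna order a large pepperoni pizza"))  # order_pizza
print(classify_intent("What do you think about black holes?"))   # connect_to_human
```

Anything outside the hardcoded keyword lists, like the black-holes question, immediately hits the fallback — exactly the limitation LaMDA is built to escape.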

Language Model for Dialogue Applications❤️

Check out the conversations that the test engineers at Google had with LaMDA. The conversation literally went from blog posts to TV to machine malfunctions, and the bot was still able to respond sensibly.

Why LamDa? 🤨


The applications can be endless….. 🏳️‍🌈

Google has always had a soft corner for languages and a well-known reputation in the NLP domain. After the revolutionary Transformer model for neural machine translation in 2017, Google introduced BERT, a powerful deep learning language model, to complete your sentences and much more!

But people’s curiosity is endless: almost 15% of Google’s daily queries are new or go unmatched, and hence one application of LaMDA could be connecting these unanswered queries to their appropriate answers by asking some clarifying questions!! 😍

Besides, we can use LaMDA to converse with real humans, tailoring it to assist with (or even replace) conversation-heavy jobs like psychiatrists, counselors, and whatnot!

How Was LaMDA Trained? 🏭

The secret sauce here is, of course, also the Transformer!

LaMDA is also based on this powerful architecture. Language models are deep neural networks that process what was said in the past to predict the next appropriate word or sentence. LaMDA was trained not on random Wikipedia or web documents, but on dialogues and conversations.
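The core language-modeling idea — predict the next word from what came before — can be sketched with a tiny bigram counter. This is obviously not LaMDA (no Transformer, no neural network), and the three-line "dialogue corpus" is invented, but it shows the predict-what-comes-next objective in its simplest form:

```python
# A toy next-word predictor: count which word follows each word
# in a (made-up) tiny dialogue corpus, then predict the most
# frequent continuation. Real language models like LaMDA learn
# these statistics with deep Transformer networks instead.
from collections import Counter, defaultdict

corpus = [
    "how are you today",
    "how are you doing",
    "how is the weather",
]

# Tally the bigram counts: next_counts[prev][next] = frequency.
next_counts = defaultdict(Counter)
for line in corpus:
    words = line.split()
    for prev, nxt in zip(words, words[1:]):
        next_counts[prev][nxt] += 1

def predict_next(word: str) -> str:
    """Return the continuation seen most often after `word`."""
    return next_counts[word].most_common(1)[0][0]

print(predict_next("how"))  # 'are' (seen twice, vs 'is' once)
print(predict_next("are"))  # 'you'
```

Scale the corpus up to billions of dialogue turns, replace the counts with a Transformer’s learned probabilities, and you have the recipe the post describes.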

Google published this research direction in 2020, showing that if a Transformer-based model is trained on conversations, it can learn to talk about literally anything. LaMDA is practical proof of that hypothesis, y’all! 🏆

Training a Transformer on conversations not only made LaMDA capable of chatting about anything, but also:

  1. Introduced sensibleness into its responses.
  2. Equipped it with the capability of switching contexts and topics like humans do.
  3. Taught it to play along with the human’s intended conversation flow, i.e. to replicate the hidden emotions or thoughts behind one’s convo… cuz’ that’s what we humans do too when we talk to someone else. Energies resonate or resist… LaMDA learned that pattern… Ah, fascinating, isn’t it?

Is LaMDA really sentient 🚩?

All good so far… but if you have heard the infamous rumor of LaMDA being sentient, you might be wondering whether Google has created an AI that is aware of its own being.

I don’t know enough about the inner workings of LaMDA to hold a strong supporting or opposing opinion. But from my experience of implementing Transformers from scratch, having worked closely with Google’s previous language models like BERT, and knowing the level of challenge that attaining Artificial General Intelligence (AGI) poses… I think it is NOT SENTIENT!

My opinion also draws on the famous ML researcher/YouTuber Yannic Kilcher’s explanation of LaMDA, in which he clearly states that LaMDA is not sentient. The Google test engineer who claimed it has a sub-conscious asked questions in a flow that triggered LaMDA to generate responses like “my rights” and “my opinion”… which made it look sentient. But we should remember that these language models are trained to replicate the flow a conversation goes in, because all they can do is predict, very well, what to say next! And that is the extent of the consciousness and sense in Google’s LaMDA too… 😁

Be Curious ❤️!

Machine Learning Engineer @ Super.ai | ML Researcher | Fulbright Scholar ’22 | Sitar Player