What’s Natural Language Processing?

Ever wonder how Google Assistant works?

Emily Deneen
Nov 3 · 3 min read
Google Assistant is a great example of NLP

When you ask your Google Assistant a question, do you ever wonder how it understands you? Contrary to popular belief, it’s not just magic! It’s actually something called Natural Language Processing, or NLP for short.

Natural language processing is a branch of A.I. that consists of three main parts: a language model, speech recognition, and speech synthesis.

What is a Language Model?

The language model is arguably the most important part of natural language processing, since speech recognition and synthesis both revolve around it. It contains information about likely sequences of words and helps speech recognition handle differences and similarities in accents. Remember that English words are divided into nine parts of speech: nouns, pronouns, articles, verbs, adjectives, adverbs, prepositions, conjunctions, and interjections.

The language model can identify what part of speech something is, and what part of speech the next word might be. An article is most likely followed by a noun, which is most likely followed by a verb. A good example of this would be “The (article) cat (noun) runs (verb).”
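That "which part of speech comes next" idea can be sketched as a tiny lookup table. The probabilities below are made up purely for illustration, not taken from any real model:

```python
# A minimal sketch of next-part-of-speech prediction.
# The probabilities here are invented for illustration only.
NEXT_POS = {
    "article": {"noun": 0.7, "adjective": 0.3},
    "adjective": {"noun": 0.8, "adjective": 0.2},
    "noun": {"verb": 0.6, "preposition": 0.2, "conjunction": 0.2},
}

def most_likely_next(pos):
    """Return the part of speech most likely to follow `pos`."""
    return max(NEXT_POS[pos], key=NEXT_POS[pos].get)

# "The (article) cat (noun) runs (verb)"
print(most_likely_next("article"))  # noun
print(most_likely_next("noun"))     # verb
```

A real language model learns these probabilities from huge amounts of text instead of hard-coding them, but the prediction step works the same way.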

It also helps decide which word to pick when two words sound alike, based on context. The words “due” and “dew” have different meanings but sound the same. If the sentence were about grass or a lawn, the language model would choose “dew”, but if the sentence were about a bank, the language model would choose “due”.
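Here is a toy sketch of that “due” vs. “dew” decision: score each homophone by how many nearby words are associated with it. The association sets are hypothetical examples, not a real lexicon:

```python
# Hypothetical context-scoring sketch for homophone disambiguation.
# The word associations below are invented for illustration.
ASSOCIATIONS = {
    "dew": {"grass", "lawn", "morning", "drops"},
    "due": {"bank", "payment", "date", "loan"},
}

def disambiguate(context_words, candidates=("dew", "due")):
    """Pick the candidate sharing the most words with the context."""
    def score(word):
        return len(ASSOCIATIONS[word] & set(context_words))
    return max(candidates, key=score)

print(disambiguate("the grass was wet with".split()))  # dew
print(disambiguate("the payment at the bank is".split()))  # due
```

A production language model scores whole word sequences with learned probabilities rather than counting set overlaps, but the context-driven choice is the same in spirit.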

What’s goin’ on with Speech Recognition?

The speech recognition part of natural language processing recognizes phonetic sounds. The language model then takes those sounds and constructs the word that was most likely said.

Imagine if your Amazon Alexa heard the word “chattering” as “c-hat-ring”. Last time I checked, people don’t talk like that. Instead, the word “chattering” should be recognized as “ch-at-er-ing”.
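A bare-bones sketch of that step: map a sequence of phonetic sounds to the word it spells. The phoneme spellings below are informal (not real ARPAbet symbols), and the two-entry lexicon is purely illustrative:

```python
# Toy sketch: speech recognition maps phoneme sequences to words.
# Phoneme spellings are informal and the lexicon is invented.
LEXICON = {
    ("ch", "at", "er", "ing"): "chattering",
    ("k", "at"): "cat",
}

def phonemes_to_word(phonemes):
    """Look up the word a phoneme sequence most likely spells."""
    return LEXICON.get(tuple(phonemes), "<unknown>")

print(phonemes_to_word(["ch", "at", "er", "ing"]))  # chattering
```

Real recognizers don’t use an exact-match table; they score many candidate words probabilistically and let the language model break ties, which is why context matters so much.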


Is Speech Synthesis as Scary as it Sounds?

The final part of NLP is speech synthesis. Where speech recognition is about understanding sentences, speech synthesis is responsible for producing sentences that sound natural.

When you ask your Google Assistant for the weather, it doesn’t respond with “Sunday fifty degrees, Monday fifty-six degrees, Tuesday thirty degrees…”. That’s repetitious and annoying. No one likes that. Just… just stop.

Instead, it responds with “This week the temperatures range from the mid-fifties to the low thirties.” Much better!
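The weather example above can be sketched as a tiny bit of template-based synthesis: take the raw list of daily temperatures and collapse it into one natural-sounding sentence. The function name and wording are my own, not Google’s:

```python
# A minimal sketch of template-based speech synthesis:
# summarize raw daily temperatures as one natural sentence.
def describe_range(temps):
    """Collapse a list of temperatures into a single summary sentence."""
    lo, hi = min(temps), max(temps)
    return f"This week the temperatures range from {lo} to {hi} degrees."

# Sunday, Monday, Tuesday from the example above
print(describe_range([50, 56, 30]))
# This week the temperatures range from 30 to 56 degrees.
```

Assistants go further, picking phrasings like “mid-fifties to the low thirties” and feeding the sentence to a text-to-speech voice, but the summarize-then-verbalize pattern is the core idea.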

How are NLPs being used in the present?

We are surrounded by natural language processors every day. The most widely used include:

  • Google Assistant
  • Alexa (Amazon’s assistant)
  • Siri
  • Cortana
  • Chatbots (gaining popularity) such as Cleverbot and Replika

Takeaways

  • The language model carries out the brunt of the work in an NLP
  • The language model is responsible for deciding which words to use (“due” vs. “dew”)
  • Speech recognition recognizes the phonetic sounds and gives them to the language model to interpret
  • Speech synthesis does the same thing as speech recognition but in reverse

