Word Embedding explained in one slide

Francesco Gadaleta
Oct 30, 2016

Word embedding is one of the most powerful concepts of deep learning applied to Natural Language Processing. Each word of a dictionary (the set of words recognized for the specific task) is transformed into a numeric vector with a fixed number of dimensions. Everything downstream, classification, semantic analysis, and so on, then operates on those vectors.
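To make this concrete, here is a minimal sketch in Python (with NumPy) of what a trained embedding boils down to: a lookup table that maps each word of the dictionary to a dense vector. The toy vocabulary, the dimensionality, and the random vectors below are purely illustrative; a real model such as word2vec or GloVe learns the vectors from a corpus so that related words end up close together.

```python
import numpy as np

# Hypothetical toy vocabulary; in practice it is built from the training corpus.
vocabulary = {"king": 0, "queen": 1, "man": 2, "woman": 3}

# Embedding matrix: one row per word, each row a dense numeric vector.
# The 5-dimensional vectors here are random placeholders; a trained model
# would learn them from data.
rng = np.random.default_rng(42)
embedding_dim = 5
E = rng.normal(size=(len(vocabulary), embedding_dim))

def embed(word):
    """Look up the dense vector for a word."""
    return E[vocabulary[word]]

def cosine(u, v):
    """Cosine similarity, a common way to compare embedding vectors."""
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

print(embed("king"))                          # a 5-dimensional vector
print(cosine(embed("king"), embed("queen")))  # similarity between two words
```

Classification, semantic analysis, and the rest all take vectors like these as input rather than the raw words themselves.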

Here is a slide that explains the idea with a bit of algebra and some user-friendly text. Download it and feel free to share.


Before you go

If you enjoyed this post, you will love the newsletter at datascienceathome.com. It's my FREE digest of the best content in Artificial Intelligence, data science, predictive analytics and computer science.
