Word Embedding explained in one slide

Word embeddings are one of the most powerful concepts of deep learning applied to Natural Language Processing. Every word in a dictionary (the vocabulary, that is, the set of words recognized for the specific task) is transformed into a numeric vector with a fixed number of dimensions. Everything else, classification, semantic analysis, etc., is done from those vectors onward.
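To make the idea concrete, here is a minimal sketch in Python with NumPy. The toy vocabulary, the random embedding matrix, and the helper functions are all illustrative assumptions: in a real model the vectors are learned during training rather than sampled at random.

```python
import numpy as np

# Hypothetical toy vocabulary: each word maps to an integer index.
vocab = {"king": 0, "queen": 1, "man": 2, "woman": 3}

# Embedding matrix: one row per word, each row a dense vector.
# In a trained model these values are learned; here they are random.
embedding_dim = 5
rng = np.random.default_rng(42)
embeddings = rng.normal(size=(len(vocab), embedding_dim))

def embed(word: str) -> np.ndarray:
    """Look up the dense vector for a word in the vocabulary."""
    return embeddings[vocab[word]]

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Rough measure of semantic closeness between two word vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# All downstream tasks (classification, semantic analysis, ...)
# operate on these vectors rather than on the raw words.
print(embed("king"))
print(cosine_similarity(embed("king"), embed("queen")))
```

Once the embeddings are trained, words used in similar contexts end up with similar vectors, which is what makes the downstream tasks work.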

Here is a slide that explains this with a bit of algebra and some user-friendly text. Download it and feel free to share it.


Before you go

If you enjoyed this post, you will love the newsletter at datascienceathome.com. It's my FREE digest of the best content in Artificial Intelligence, data science, predictive analytics, and computer science.