BitPost: Are all word embeddings the same in NLP?

The distributional hypothesis is captured by the famous line "You shall know a word by the company it keeps" (Firth, J. R., 1957: 11).

Akansha Jain
Learn with Akansha
1 min read · Nov 30, 2019


GloVe and Word2Vec are word embeddings built on this distributional hypothesis principle. Sadly, they struggle with ambiguous words, misspelled words, and words that appear rarely in your training data, because each word gets a single fixed vector no matter where it occurs. For example, "an apple falling on Newton's head" and "an Apple product falling from your hand" mean very different things, so the embeddings capturing those sentences' representations should differ too. 😁 Thanks to recent and rapid research in #NLP, we now have ELMo, ULMFiT, Flair, and BERT-based embeddings, which compute a word's vector from its surrounding context and so address these shortcomings. Go ahead and try them all! The sketch below illustrates the difference.
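Here is a minimal sketch, assuming the Hugging Face `transformers` library and the `bert-base-uncased` weights are available: a static Word2Vec/GloVe lookup would return the identical vector for "apple" in both sentences, whereas BERT produces two different contextual vectors.

```python
import torch
from transformers import AutoModel, AutoTokenizer

# Load a pretrained BERT encoder and its matching tokenizer.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")
model.eval()

sentences = [
    "an apple falling on Newton's head",
    "an apple product falling from your hand",
]

apple_vectors = []
for text in sentences:
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        # Contextual token embeddings, shape (seq_len, 768).
        hidden = model(**inputs).last_hidden_state[0]
    # Find the position of the "apple" wordpiece and keep its vector.
    tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0].tolist())
    apple_vectors.append(hidden[tokens.index("apple")])

# A similarity noticeably below 1.0 shows the two "apple" vectors differ;
# a static embedding would give exactly 1.0 here.
cos = torch.nn.functional.cosine_similarity(
    apple_vectors[0], apple_vectors[1], dim=0
)
print(f"cosine similarity of the two 'apple' vectors: {cos.item():.3f}")
```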

Follow #LearnWithAkansha on LinkedIn for more updates like this. You can follow me on Twitter, check out my work on GitHub, and connect with me professionally on LinkedIn.

Akansha Jain
Senior Data Scientist, Builder.ai | Master's in Data Analytics, Indian Institute of Information Technology and Management, Kerala.