NLP Landscape 1950–2022

Ashish
2 min read · Sep 26, 2022


In the 1940s, after World War II, there was an immense need for translation between languages, and people thought of building a machine to automate this task. In 1957, researchers identified many issues in developing such a machine; the major problem was that every language has its own grammar and rules, which a machine cannot easily understand.

At that time, researchers used heuristic approaches to solve different NLP problems. The main techniques were regular expressions, WordNet, and Open Mind Common Sense. The problem with these approaches is that they cannot solve open-ended problems, because a human cannot define a rule for every case. To address this, Machine Learning came into the picture in the 1990s. Its introduction revolutionized NLP: we could now solve open-ended problems without humans writing every rule. The major algorithms used were Naive Bayes, Logistic Regression, SVMs, LDA, and hidden Markov models.
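For illustration only (this sketch is mine, not from the post, and assumes scikit-learn is available), here is how one of those algorithms, Naive Bayes, is typically applied to text classification:

```python
# Minimal sketch of the classic ML approach: a Naive Bayes classifier
# over bag-of-words counts, using scikit-learn (illustrative example).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Tiny made-up dataset; real systems train on thousands of examples.
texts = ["great movie, loved it", "terrible plot, boring",
         "wonderful acting", "awful and dull"]
labels = ["pos", "neg", "pos", "neg"]

model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(texts, labels)

print(model.predict(["loved the acting"]))  # expected: ['pos']
```

Notice that the learned rules come from the data itself; no human wrote a pattern for each phrase.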

Even though ML solved many problems in NLP, it still had shortcomings: the order of words in a sentence means nothing to a classical ML algorithm. Even if we jumble the words of a sentence, it gives the same result; whether I write "I live in a house" or "in a house live I", the algorithm treats both as the same, and this causes problems in many cases.
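A minimal sketch (my example, again assuming scikit-learn) of why this happens: a bag-of-words representation counts words but discards their order, so the two sentences above map to exactly the same vector.

```python
# Bag-of-words ignores word order: both sentences get the same vector.
from sklearn.feature_extraction.text import CountVectorizer

sentences = ["I live in a house", "in a house live I"]
# Override the default token pattern so one-letter words like "I" and "a"
# are kept (scikit-learn drops them by default).
vectorizer = CountVectorizer(token_pattern=r"\b\w+\b")
vectors = vectorizer.fit_transform(sentences).toarray()

print(vectorizer.get_feature_names_out())
print(vectors[0])                        # counts for sentence 1
print(vectors[1])                        # counts for sentence 2
print((vectors[0] == vectors[1]).all())  # True: identical representations
```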

To solve this, deep learning was introduced around 2010. It made software like speech recognition systems and chatbots possible, and deep learning is now used in many ways across the industry.
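As a hedged sketch of the difference (mine, assuming PyTorch; the tiny vocabulary and random weights are purely illustrative), a recurrent network reads words in sequence, so reordering the same words changes its representation:

```python
# Sequence models are order-sensitive: the two word orders from the
# earlier example produce different hidden states (toy illustration).
import torch
import torch.nn as nn

torch.manual_seed(0)
embed = nn.Embedding(num_embeddings=10, embedding_dim=4)
lstm = nn.LSTM(input_size=4, hidden_size=8, batch_first=True)

vocab = {"i": 0, "live": 1, "in": 2, "a": 3, "house": 4}
s1 = torch.tensor([[vocab[w] for w in "i live in a house".split()]])
s2 = torch.tensor([[vocab[w] for w in "in a house live i".split()]])

_, (h1, _) = lstm(embed(s1))
_, (h2, _) = lstm(embed(s2))
print(torch.allclose(h1, h2))  # False: order changes the representation
```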
