LLM Architectures Explained: NLP Fundamentals (Part 1)
A deep dive into the architectures behind NLP models, from RNNs to Transformers, and into building real-world applications with them.
Posts in this Series
- NLP Fundamentals (this post)
- Word Embeddings
- RNNs, LSTMs & GRUs
- Encoder-Decoder Architecture
- Attention Mechanism
- Transformers
- BERT
- GPT
- LLaMA
- Mistral
Table of Contents
· 1. What is Natural Language Processing (NLP)
· 2. Applications of NLP
· 3. NLP Terms
∘ 3.1 Document
∘ 3.2 Corpus (Corpora)
∘ 3.3 Feature
· 4. How NLP Works
∘ 4.1 Data Pre-processing
∘ 4.1.1 Tokenization
∘ 4.1.2 Stemming
∘ 4.1.3 Lemmatization
∘ 4.1.4 Normalization
∘ 4.1.5 Part of Speech (POS) Tagging
∘ 4.2 Feature Extraction
∘ 4.2.1 Bag-of-Words (BoW)
∘ 4.2.2 Term Frequency-Inverse Document Frequency (TF-IDF)
∘ 4.2.3 N-grams
∘ 4.2.4 Word Embeddings
∘ 4.2.5 Contextual Word Embeddings
∘ 4.3 Modeling
∘ 4.3.1 Named Entity Recognition (NER)
∘ 4.3.2 Language Model
∘ 4.3.3 Traditional ML NLP Techniques
∘ 4.3.4 Deep Learning NLP Techniques
∘ 4.3.5 Attention Mechanism
∘ 4.3.6 Sequence-to-Sequence (Seq2Seq) Model
∘ 4.3.7 Transfer Learning
∘ 4.3.8 Fine-Tuning
∘ 4.3.9 Zero-Shot Learning
∘ 4.3.10 Few-Shot Learning
· 5. Comparative Analysis of NLP Models and…