Sentiment Analysis 101

Kadam Parikh
Simform Engineering
10 min read · Oct 27, 2023

Unlocking the Power of Sentiment Analysis: Techniques, Tools, and Insights.

Sentiment analysis is the task of classifying the sentiment (positive, negative, or neutral) expressed by a person in some textual content, typically a sentence. It involves determining how the person felt about a particular topic when they wrote about it.

However, this definition of sentiment analysis is quite limited. One should also be able to identify sentiments from a person’s voice (audio data), facial expressions (visual data), and brain signals (signal data). In essence, sentiment analysis is not confined to text; it can be performed on various types of data.

Disclaimer: The current article focuses solely on identifying sentiment from text data.

How to perform Sentiment Analysis?

One word — NLP (Natural Language Processing). Yes, we can get this task done using NLP. But how exactly?

You might suggest training a Machine Learning or Deep Learning model on the relevant data. However, this raises further questions: What type of models are suitable for this task? What kind of data is needed? Would a Random Forest model be appropriate? And how do you effectively feed text data into a Random Forest model? These questions can be readily addressed when we approach them thoughtfully.

The key to developing ML solutions lies in our ability to think about how to achieve the goal in a non-ML way.

Approach 1: Lexicon-based methods

Assume you had to develop a sentiment analysis model from scratch. What would the basic system look like? The most basic one would be a dictionary of words with assigned sentiment scores. For example:

sentiment_dict = {
    "good": 1,
    "bad": -1,
    "okay": 0
}

sentence = "the food was good"  # example input

score = 0
for word in sentence.split(" "):
    score += sentiment_dict.get(word, 0)

if score > 0:
    print("Positive")
elif score == 0:
    print("Neutral")
else:
    print("Negative")

This would work, right? We could add more words to the sentiment_dict to improve the system.

Question: How would you convert this non-ML task to an ML one?

Is there a way to automate the dictionary building process? Assume you have a huge amount of labeled data. You can now identify important positive, negative, and neutral words by applying stopword removal combined with a custom word frequency-based logic. Sounds like NLP, doesn’t it?
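
As a minimal sketch of that idea (the labeled data below is made up for illustration), we can count how often each non-stopword occurs in positive versus negative sentences and turn the difference into a score:

from collections import Counter
import nltk
from nltk.corpus import stopwords

nltk.download('stopwords')

# Hypothetical labeled data: (sentence, label) pairs
data = [
    ("the movie was good", "positive"),
    ("the food was bad", "negative"),
    ("good acting and a good plot", "positive"),
    ("bad service and bad food", "negative"),
]

stop_words = set(stopwords.words('english'))
pos_counts, neg_counts = Counter(), Counter()

for sentence, label in data:
    words = [w for w in sentence.lower().split() if w not in stop_words]
    (pos_counts if label == "positive" else neg_counts).update(words)

# Score each word by how much more often it appears in one class
vocabulary = set(pos_counts) | set(neg_counts)
sentiment_dict = {w: pos_counts[w] - neg_counts[w] for w in vocabulary}
print(sentiment_dict)  # e.g. {'good': 3, 'bad': -3, 'movie': 1, ...}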

It’s not always about achieving the goal using AI. It’s about utilising AI to ease the process and get accurate results.

Here, building the dictionary could be referred to as model training. Looks like somebody has already done this!

SentiWordNet

SentiWordNet is a lexicon-based approach to sentiment analysis that assigns sentiment scores to synsets in WordNet. WordNet is a lexical database of English words grouped into sets of cognitive synonyms called synsets. SentiWordNet contains sentiment scores (positivity, negativity, and objectivity) for each synset.

The algorithm works as follows:

  • Tokenization: The input text is tokenized into individual words.
  • Sentiment Score Calculation: For each word in the text, SentiWordNet retrieves its associated synsets and their sentiment scores (positive, negative, and objective). The sentiment score of each word is calculated as the difference between the positive score and the negative score.
  • Aggregation: The sentiment scores of all words in the text are summed up to obtain the overall sentiment score of the text.

import nltk
from nltk.corpus import sentiwordnet as swn

# Download the required NLTK resources
nltk.download('sentiwordnet')
nltk.download('wordnet')
nltk.download('punkt')  # needed by word_tokenize

def get_sentiment_score(sentence):
    sentiment_score = 0.0
    tokens = nltk.word_tokenize(sentence)
    for token in tokens:
        # A word may belong to several synsets; accumulate their scores
        synsets = swn.senti_synsets(token)
        for synset in synsets:
            sentiment_score += synset.pos_score() - synset.neg_score()
    return sentiment_score

sentence = "I love this product. It's amazing!"
sentiment_score = get_sentiment_score(sentence)
print("Sentiment Score:", sentiment_score)

Let’s think of the limitations of this approach now.

  1. “The cycle is not that bad” — ‘not’ and ‘bad’ would lead to a negative score, while in reality, the sentiment here should be positive.

Limitations: Lexicon-based approaches like SentiWordNet may struggle with handling negations, sarcasm, and context-specific sentiments, as they rely solely on the sentiment scores of individual words.

To tackle the above limitation, you can build the dictionary based on n-grams, specifically, bi-grams or tri-grams. So, instead of assigning a sentiment score to an individual word, you assign a sentiment score to a pair of words. For example, <The, cycle>, <cycle, is>, <is, not>, <not, that>, <that, bad>. This allows the algorithm to predict sentiment score based on the context of a certain length.
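
A minimal sketch of a bigram-based scorer (the bigram scores below are assumed for illustration):

# Illustrative bigram scores (assumed values, not from a real lexicon)
bigram_scores = {
    ("not", "bad"): 1,     # negation captured as a pair
    ("not", "good"): -1,
    ("is", "good"): 1,
}

def score_sentence(sentence):
    words = sentence.lower().split()
    bigrams = zip(words, words[1:])  # consecutive word pairs
    return sum(bigram_scores.get(b, 0) for b in bigrams)

print(score_sentence("The movie is not bad"))   # 1 -> positive
print(score_sentence("The movie is not good"))  # -1 -> negative

Note that in a phrase like “not that bad”, the pair <not, bad> never appears adjacently, which is exactly where tri-grams help.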

Expert Insight: You can further improve the above logic by doing some POS-tagging. Synsets assign separate scores to a word depending on whether it is used as a noun or a verb. Consider doing POS-tagging and then using the appropriate scores.
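
Here is a hedged sketch of that idea, mapping NLTK’s Penn Treebank tags to WordNet POS tags and querying only the matching synsets (it assumes the downloads from the earlier snippet):

import nltk
from nltk.corpus import sentiwordnet as swn

nltk.download('averaged_perceptron_tagger')

# Map Penn Treebank tag prefixes to WordNet POS tags
def to_wordnet_pos(tag):
    return {'N': 'n', 'V': 'v', 'J': 'a', 'R': 'r'}.get(tag[0])

def pos_aware_score(sentence):
    score = 0.0
    for token, tag in nltk.pos_tag(nltk.word_tokenize(sentence)):
        wn_pos = to_wordnet_pos(tag)
        if wn_pos is None:
            continue
        # Only consider synsets matching the word's part of speech
        for synset in swn.senti_synsets(token, pos=wn_pos):
            score += synset.pos_score() - synset.neg_score()
    return score

print(pos_aware_score("I love this product"))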

Approach 2: Machine Learning-Based Sentiment Analysis

As lexicon-based methods rely on predefined sentiment dictionaries to determine the sentiment of words or phrases in a text, they have certain limitations. Intuitively, one cannot be certain whether the sentiment is positive or negative just by looking at a word or a pair of words, which means the score cannot be exactly 1 or -1. There is always some uncertainty in such tasks, and hence the idea of probability comes in.

For example, we cannot assign a sentiment score of 1 to the word “Terrific”. Terrific has two meanings:

  1. of great size, amount, or intensity
  2. causing terror

In short, every word should be assigned two probability values — one for positive and the other for negative — instead of a single sentiment score. And we already know that the task of learning probabilities is called Machine Learning.

Using Naive Bayes for sentiment analysis

Naive Bayes is one of the simplest ML algorithms for learning and inferring probabilities.

Naive Bayes is a probabilistic classifier that relies on Bayes’ theorem. In the context of sentiment analysis, here’s a simplified explanation:

Imagine you have a collection of movie reviews, and you want to classify them as either positive or negative.

  1. Data Preparation: First, you preprocess the text data, tokenizing it into words (or n-grams) and creating a vocabulary.
  2. Training: You feed the algorithm with labeled data — movie reviews labeled as either positive or negative. Naive Bayes learns from these reviews and calculates probabilities.
  3. Probabilities: For each word in the vocabulary, Naive Bayes calculates two probabilities:
    - P(Word | Positive): The probability that a given word appears in positive reviews.
    - P(Word | Negative): The probability that a given word appears in negative reviews.
  4. Classification: When you have a new, unlabeled review, Naive Bayes calculates the probability that it belongs to the positive class and the probability that it belongs to the negative class.

Example:

Let’s say you have the sentence “I loved the movie.”

  1. Naive Bayes looks up P(“I” | Positive), P(“loved” | Positive), P(“the” | Positive), P(“movie” | Positive).
  2. It also looks up P(“I” | Negative), P(“loved” | Negative), P(“the” | Negative), P(“movie” | Negative).
  3. Then, for each class, it multiplies these probabilities together with the class prior, P(Positive) or P(Negative).
  4. Whichever class (positive or negative) ends up with the higher result is the classification.

In our example, the word “loved” is strongly associated with positive reviews. Therefore, the positive probability would likely be higher, classifying the sentence as positive.
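
A minimal version of this pipeline can be put together with scikit-learn’s CountVectorizer and MultinomialNB (the four reviews below are made up for illustration):

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# Tiny illustrative training set (assumed data)
reviews = [
    "I loved the movie",
    "What a great film",
    "I hated it",
    "Terrible acting and a boring plot",
]
labels = [1, 1, 0, 0]  # 1 = positive, 0 = negative

# Bag-of-words counts; ngram_range=(1, 2) adds bigrams for extra context
vectorizer = CountVectorizer(ngram_range=(1, 2))
X = vectorizer.fit_transform(reviews)

clf = MultinomialNB()
clf.fit(X, labels)

test = vectorizer.transform(["I loved the acting"])
print(clf.predict(test))        # predicted class
print(clf.predict_proba(test))  # [P(negative), P(positive)]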

Summary

While lexicon-based sentiment analysis has its merits, Machine Learning based approaches like Naive Bayes can adapt to diverse linguistic patterns, handle context, and provide more accurate sentiment classifications for a wide range of text data.

Example Code using TextBlob

We will use the popular TextBlob Python library here. Its default analyzer is pattern (lexicon) based, and it also ships a pretrained Naive Bayes model for sentiment analysis.

from textblob import TextBlob

sentence = "I am feeling great!"
blob = TextBlob(sentence)
sentiment_score = blob.sentiment.polarity
print("Sentiment Score:", sentiment_score)

Output: 1
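
Note that blob.sentiment.polarity above comes from TextBlob’s default pattern-based analyzer. To use the pretrained Naive Bayes model instead, pass it explicitly (first use may require downloading TextBlob’s corpora with python -m textblob.download_corpora):

from textblob import TextBlob
from textblob.sentiments import NaiveBayesAnalyzer

blob = TextBlob("I am feeling great!", analyzer=NaiveBayesAnalyzer())
print(blob.sentiment)  # e.g. Sentiment(classification='pos', p_pos=..., p_neg=...)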

Limitations

The default model available in the TextBlob library is trained on unigrams (1-grams). Hence, it might not capture enough context at times. However, training our own n-gram-based classifier with TextBlob is also possible.

Expert insights

If you look closely at the code, you might notice the word “polarity”. By default, TextBlob works purely on the basis of polarity. It also provides another metric that might be useful, called “subjectivity”.

Polarity: A score in the range of -1 to +1, where -1 indicates negative sentiment, 0 means neutral, and +1 implies positive.

Subjectivity: Subjectivity quantifies the amount of personal opinion versus factual information contained in the text, on a scale from 0 to 1. Higher subjectivity means the text contains personal opinion rather than factual information. TextBlob calculates this based on hard-coded patterns and rules in its lexicon.

Example: “I like iPhone.” is a subjective statement. Though the sentiment (polarity) is positive, it merely depicts my personal choice. Subjective expressions come in many forms, e.g., opinions, allegations, desires, beliefs, suspicions, and speculations.
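
For instance, TextBlob reports both metrics for this sentence:

from textblob import TextBlob

blob = TextBlob("I like iPhone.")
print("Polarity:", blob.sentiment.polarity)          # > 0 -> positive sentiment
print("Subjectivity:", blob.sentiment.subjectivity)  # higher values -> more opinionated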

Approach 3: Deep Learning-Based Sentiment Analysis

This cutting-edge approach to sentiment analysis leverages the power of neural networks to uncover the emotional undercurrents in text, offering insights and understanding far beyond traditional methods.

At its core, Deep Learning based sentiment analysis involves training a neural network to recognize and understand sentiments in text data. Here’s a simplified breakdown of how it works:

  1. Text Preprocessing: Just like any other NLP task, the text data is preprocessed. This includes tokenization (breaking text into words or subword units), converting words to numerical representations (word embeddings), and handling any other necessary transformations.
  2. Architecture: Deep Learning models for sentiment analysis often employ recurrent neural networks (RNNs), Long Short-Term Memory networks (LSTMs), or, more recently, Transformer-based models (like BERT or GPT). These models can capture the sequential dependencies in text and learn intricate patterns.
  3. Training: The neural network is trained on a labeled dataset of text samples with corresponding sentiment labels (positive, negative, neutral). During training, the model learns to predict the sentiment label of a given text based on its input features.
  4. Word Embeddings: Word embeddings are a crucial component. They convert words into dense numerical vectors, capturing semantic relationships between words. This allows the model to understand the context in which words are used.

Sample training code

import tensorflow as tf
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Embedding, LSTM, Dense
import numpy as np

# Tiny illustrative dataset
sentences = ["I love this movie!", "This book is terrible!"]
labels = [1, 0]  # 1 for positive sentiment, 0 for negative sentiment

# Build the vocabulary and convert sentences to integer sequences
tokenizer = Tokenizer()
tokenizer.fit_on_texts(sentences)
sequences = tokenizer.texts_to_sequences(sentences)

vocab_size = len(tokenizer.word_index) + 1  # +1 for the padding index 0
max_len = max([len(sequence) for sequence in sequences])
padded_sequences = pad_sequences(sequences, maxlen=max_len)
labels = np.array(labels)

# Embedding -> LSTM -> sigmoid output for binary sentiment
model = Sequential()
model.add(Embedding(vocab_size, 16, input_length=max_len))
model.add(LSTM(32))
model.add(Dense(1, activation='sigmoid'))

model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
model.fit(padded_sequences, labels, epochs=10, batch_size=1)
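
Once trained, the model is queried with the same tokenizer and padding length; a quick inference sketch (continuing the code above):

# Inference on a new sentence (reusing the fitted tokenizer and max_len)
new_sequences = tokenizer.texts_to_sequences(["I love this book!"])
new_padded = pad_sequences(new_sequences, maxlen=max_len)
prob = model.predict(new_padded)[0][0]
print("Positive" if prob > 0.5 else "Negative", prob)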

Benefits of Deep Learning-Based Sentiment Analysis

Contextual Understanding: Deep Learning models excel at capturing the context in which words are used. This is crucial for understanding sentiment, as the same word can have different meanings in different contexts.

Example: “She’s a killer.”

In this sentence, the word “killer” is used. Depending on the context, this word can have contrasting sentiments:

  1. Negative Sentiment Context:
    If the context of the sentence is related to a crime or a dangerous person, the sentiment is negative. For instance, “She’s a killer who has committed multiple crimes.” Here, “killer” refers to someone who has taken lives unlawfully, and the sentiment is strongly negative.
  2. Positive Sentiment Context:
    However, in a different context, “killer” can be used informally and positively. For example, “She’s a killer on the dance floor.” In this case, “killer” is a slang term used to compliment someone’s exceptional dancing skills. Here, the sentiment is positive, expressing admiration for the person’s talent. This example demonstrates how the same word, “killer,” can carry opposite sentiments depending on the surrounding context.

Nuanced Analysis: These models can discern nuanced sentiments, making them suitable for tasks that require fine-grained sentiment analysis. For example, they can distinguish between “happy” and “ecstatic.”

Handling Long Texts: Deep Learning models can handle longer text sequences, which is often challenging for traditional methods.

Transfer Learning: Pre-trained models like BERT and GPT have brought Transfer Learning to NLP. Fine-tuning these models on specific sentiment analysis tasks can yield impressive results even with limited training data.

Example: Using a BERT model fine-tuned to predict stock market sentiment based on a news headline or summary.

from transformers import BertTokenizer, BertForSequenceClassification
from transformers import pipeline

# Load a BERT model already fine-tuned for financial sentiment (3 labels)
model = BertForSequenceClassification.from_pretrained("ahmedrachid/FinancialBERT-Sentiment-Analysis", num_labels=3)
tokenizer = BertTokenizer.from_pretrained("ahmedrachid/FinancialBERT-Sentiment-Analysis")

nlp = pipeline("sentiment-analysis", model=model, tokenizer=tokenizer)

sentences = [
    "Operating profit rose to EUR 13.1 mn from EUR 8.7 mn in the corresponding period in 2007 representing 7.7 % of net sales.",
    "Bids or offers include at least 1,000 shares and the value of the shares must correspond to at least EUR 4,000.",
    "Raute reported a loss per share of EUR 0.86 for the first half of 2009 , against EPS of EUR 0.74 in the corresponding period of 2008.",
]
results = nlp(sentences)
print(results)

Bonus — Aspect-Based Sentiment Analysis

Aspect-Based Sentiment Analysis (ABSA) is a specialized form of sentiment analysis that goes beyond determining the overall sentiment of a piece of text. Instead, it focuses on extracting and analyzing sentiment at a more granular level, specifically with respect to different aspects or entities mentioned within the text.

ABSA breaks down the text into different aspects or entities that are being discussed. These aspects can be anything that the analysis is interested in, such as features of a product, specific topics in a review, or entities in a conversation.

For each aspect or entity, ABSA aims to extract and analyze the sentiment separately. This means that within a single piece of text, there can be multiple sentiment scores corresponding to different aspects or entities.

Example: Product Reviews

Text Data:

  1. “The screen resolution is impressive, but it could be brighter.”
  2. “Brightness levels need improvement, but the display quality is top-notch.”

Aspects per review:

  1. Review 1 — “screen resolution,” “brightness”
  2. Review 2 — “brightness levels,” “display quality”

Sentiment Labels:

  1. Aspect: Screen Resolution
    Sentiment: Positive (expressed as “impressive”)
  2. Aspect: Brightness
    Sentiment: Negative (expressed as “could be brighter”)
  3. Aspect: Brightness Levels
    Sentiment: Negative (expressed as “need improvement”)
  4. Aspect: Display Quality
    Sentiment: Positive (expressed as “top-notch”)

How does it work?
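
In practice, ABSA runs in two steps: extract candidate aspects from the text, then classify the sentiment of each aspect in the context of the sentence. As a hedged sketch of the second step, a text-pair classifier from Hugging Face can be used; the checkpoint name below (yangheng/deberta-v3-base-absa-v1.1, from the PyABSA author) is an assumption, and any model trained on (sentence, aspect) pairs would work similarly:

from transformers import pipeline

# Assumed ABSA checkpoint; it classifies a (sentence, aspect) pair
classifier = pipeline("text-classification", model="yangheng/deberta-v3-base-absa-v1.1")

sentence = "The screen resolution is impressive, but it could be brighter."
for aspect in ["screen resolution", "brightness"]:
    # The text-classification pipeline accepts text pairs as a dict
    result = classifier({"text": sentence, "text_pair": aspect})
    print(aspect, "->", result)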

Conclusion

Sentiment analysis empowers us to uncover hidden sentiments within text data, providing valuable insights in various domains. We’ve journeyed through the history of sentiment analysis, delving into lexicon-based methods and the need for more advanced machine-learning techniques. We’ve seen how machine learning models, like Naive Bayes, can learn the nuances of sentiment in complex text. We also saw the advantages of deep learning for sentiment analysis, enabling us to capture the intricate emotional nuances often hidden in language.

As we continue to advance in the field of sentiment analysis, it’s essential to consider the context and nuances of sentiment expressions. Metrics like precision, recall, and F1-score help us evaluate our models, while aspect-based sentiment analysis takes us a step further by dissecting sentiment in a granular manner. There’s no one-size-fits-all solution, and choosing the right technique depends on the specific task and data at hand.

References

  1. https://www.nltk.org/howto/wordnet.html
  2. https://www.analyticsvidhya.com/blog/2022/03/building-naive-bayes-classifier-from-scratch-to-perform-sentiment-analysis/
  3. https://www.kaggle.com/code/prakharrathi25/sentiment-analysis-using-bert
  4. https://github.com/sloria/TextBlob
  5. https://github.com/yangheng95/PyABSA
  6. https://huggingface.co/spaces/yangheng/PyABSA
  7. https://huggingface.co/spaces/yangheng/PyABSA-APC

To stay up-to-date with the latest trends in the development ecosystem, follow Simform Engineering.
