Can Context Extraction replace Sentiment Analysis?

Manas Ranjan Kar
Published in NLP Wave
Oct 15, 2015

Sentiment analysis is hard. Most systems on the market will clock in at around 55–65% accuracy on unseen data, even though they might be 85%+ accurate in their cross-validations.

A couple of reasons why creating a generic sentiment analyser is tough:

- There is too much variation in text across domains, so the same words can carry different meanings

- Identifying sarcasm and combinations of phrases is difficult: ‘not bad’ is not equal to ‘not’ AND ‘bad’

At this juncture, it’s important to realize that sentiment analysis is critical for any system monitoring customer reviews or social media posts. Hardly had the business world caught up with sentence-level sentiment analysis when we started moving to aspect-level sentiment analysis, which is more directed and granular, adding to the complexity. The question is this: can we do something to augment our sentiment analysis?

For the past few months, I have been using context and relationship extraction to augment sentiment analysis. I treat them as important meta-information, used as learning features and/or as augmented information for my customers.

I use 4 important ‘contexts’ to identify a target sentence:

- Entities, such as a location, name, or person

- Keyphrases

- Relationships

- Topic/Concept

To extract this information, I have created and modified my own generic lexical parser, relationship extractor and topic model. I use the DBpedia API to extract entities.
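For the entity step, a minimal sketch looks like the snippet below. It assumes the public DBpedia Spotlight annotate endpoint (the hosted URL and parameters shown here reflect the current public service, which may differ from the version I actually wrap), and the function name is just illustrative.

```python
# Minimal sketch: entity extraction via DBpedia Spotlight's public endpoint.
# The URL, confidence threshold and function name are assumptions for illustration.
import requests

SPOTLIGHT_URL = "https://api.dbpedia-spotlight.org/en/annotate"

def dbpedia_entities(text, confidence=0.5):
    resp = requests.get(
        SPOTLIGHT_URL,
        params={"text": text, "confidence": confidence},
        headers={"Accept": "application/json"},
        timeout=10,
    )
    resp.raise_for_status()
    # "Resources" is absent from the JSON when no entities are found
    return [r["@surfaceForm"] for r in resp.json().get("Resources", [])]

sentence = ("Pakistan's army Chief General Raheel Sharif has said his troops "
            "are ready to tackle any long or short misadventure by the enemy")
print(dbpedia_entities(sentence))
```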

Let me demonstrate with an example:

Sentence:

Pakistan’s army Chief General Raheel Sharif has said his troops are ready to tackle any long or short misadventure by the “enemy”.

Entities:

[‘Chief General’, ‘Raheel Sharif’, ‘Pakistan’]

Keyphrases:

[‘pakistan’, ‘chief general raheel sharif’, ‘troops’, ‘short misadventure’, ‘enemy’]

Relationships:

(Sharif, said), (army, ready), (army, tackle)

Topic/Concept:

Unrest & War
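My parser, relationship extractor and topic model are custom, but a rough approximation of the keyphrase and relationship steps can be sketched with spaCy’s off-the-shelf parser, as below. Treat it as an illustration of the idea, not my exact pipeline.

```python
# Rough sketch of keyphrase and relationship extraction with spaCy's parser.
# This is an approximation of the approach, not the custom extractor described above.
# Requires: python -m spacy download en_core_web_sm
import spacy

nlp = spacy.load("en_core_web_sm")

sentence = ("Pakistan's army Chief General Raheel Sharif has said his troops "
            "are ready to tackle any long or short misadventure by the enemy")
doc = nlp(sentence)

# Keyphrase candidates: noun chunks, lower-cased
keyphrases = [chunk.text.lower() for chunk in doc.noun_chunks]

# Relationships: (subject, governing verb) pairs from the dependency parse
relationships = [(tok.text, tok.head.text)
                 for tok in doc
                 if tok.dep_ in ("nsubj", "nsubjpass")]

print(keyphrases)
print(relationships)
```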

Does context before sentiment make sense?

Intuitively, the above sentence denotes a negative sentiment. While some APIs might identify it correctly, others may still end up tagging it neutral or positive. However, if you create a model that takes your context as input, or at the very least surface it as additional information to the user (as tooltips or exports), it can augment sentiment analysis in a big way (see the sketch after the list below).

- Some topics will almost always have a negative sentiment.

- Troubled entities will generate negative news.

- Keyphrases do a good job of pinpointing intent when combined with relationships.
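One simple way to feed context into a sentiment model is to encode the extracted topic and entities as extra pseudo-tokens alongside the text, so any bag-of-words classifier can learn from them. The sketch below is only an illustration of that idea; the data, labels and helper name are hypothetical, not from my projects.

```python
# Hypothetical sketch: augment documents with context pseudo-tokens
# (topic, entities) before training a standard sentiment classifier.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

docs = ["troops are ready to tackle any misadventure by the enemy",
        "the new phone camera is stunning and the battery lasts all day"]
contexts = [{"topic": "unrest_war", "entities": ["Pakistan", "Raheel Sharif"]},
            {"topic": "product_review", "entities": []}]
labels = [0, 1]  # 0 = negative, 1 = positive (toy labels for illustration)

def augment(doc, ctx):
    # Encode context as pseudo-tokens, e.g. TOPIC_unrest_war, ENT_Pakistan
    extra = ["TOPIC_" + ctx["topic"]]
    extra += ["ENT_" + e.replace(" ", "_") for e in ctx["entities"]]
    return doc + " " + " ".join(extra)

augmented = [augment(d, c) for d, c in zip(docs, contexts)]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(augmented, labels)
print(model.predict([augment(docs[0], contexts[0])]))
```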

With the ferocity at which new data is generated every day, it is often either not useful to rely on standard datasets (like IMDb or the polarity datasets) for training sentiment models, or too expensive to create your own training sets.

For some, these assumptions may seem naive, but they have worked more than once across multiple NLP projects I have worked on, either increasing model accuracy or acting as a validation layer for the analysis. Intuitively, adding contextual information to your corpus makes sense.

I would be very interested to know your thoughts on my assumptions and overall process flow. Context Analysis can augment, if not replace, sentiment analysis.

Your thoughts?
