Attention-Based LSTM for Target-Dependent Sentiment Classification

by Vahid Naghashi, Ege Berkay Gülcan, Oğuzhan Çalıkkasap


An attention-based bidirectional recurrent neural network is presented to improve target-dependent sentiment classification.

This method learns the alignment between the target entities and the most distinguishing features. Based on the experimental results, this model achieves state-of-the-art performance.


Recurrent neural networks such as LSTMs have shown strong performance in capturing sequential relationships. A bi-directional LSTM captures both forward and backward relationships among the words of a sequence. In this paper, the authors use a bi-directional LSTM network to capture long-term dependencies among the words of a sequence. The encoded outputs are then fed to another neural network that computes a weight for each word in the sequence. These weights indicate the attention given to each word during classification.
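The attention step above (scoring each encoder output, normalizing with softmax, and forming a weighted sum) can be sketched as follows. This is a minimal NumPy illustration, not the paper's exact architecture; the parameter names `W` and `v` and the scoring function are our assumptions for a standard additive attention layer.

```python
import numpy as np

rng = np.random.default_rng(0)

T, d = 6, 8                    # sequence length, (bi-)LSTM hidden size
H = rng.normal(size=(T, d))    # encoder outputs, one row per word

# Hypothetical attention parameters (names are ours, not from the paper)
W = rng.normal(size=(d, d))
v = rng.normal(size=(d,))

scores = np.tanh(H @ W) @ v             # one unnormalized score per word
alpha = np.exp(scores - scores.max())   # softmax, shifted for stability
alpha /= alpha.sum()                    # attention weight per word, sums to 1
context = alpha @ H                     # weighted sum of encoder outputs

print(alpha.shape, context.shape)       # (6,) (8,)
```

The resulting `context` vector is what a downstream classifier would consume in place of a single final hidden state.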

The weights are adaptive to the content of each time step, which makes it possible to assign large weights to the "distinguishing" words. On the other hand, unlike models that use only the last hidden state (or mean pooling), the attention-based model has no difficulty capturing long sequences, since it considers different word positions in a relatively even manner. This allows the model to cope with situations where the input sentence is long and the target string is far from the most distinguishing features.
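The contrast with last-hidden-state and mean pooling can be made concrete with a toy example. The hand-set weights below are purely illustrative: attention can put most of its mass on a distinguishing word near the start of the sequence, which the last hidden state alone would under-represent.

```python
import numpy as np

H = np.arange(12, dtype=float).reshape(4, 3)  # toy encoder outputs, 4 words

last = H[-1]            # last-hidden-state pooling: only the final word
mean = H.mean(axis=0)   # mean pooling: every word weighted equally

# Attention pooling: weights adapt to content; here a hand-set example
# that puts most mass on word 1, far from the sequence end
alpha = np.array([0.05, 0.80, 0.10, 0.05])
attn = alpha @ H

print(last)   # [ 9. 10. 11.]
print(mean)   # [4.5 5.5 6.5]
print(attn)   # [3.45 4.45 5.45]
```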


The authors use a Twitter conversation dataset containing 6,248 tweets as the training set and 692 tweets as the test set. The data is preprocessed by removing non-alphabetic characters, numbers, pronouns, punctuation, and stop words from the text. They set the hidden layer size to 500 units and compare their results with other state-of-the-art algorithms; the attention-based LSTM gives the best results in classifying the sentiments.
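A preprocessing pass of this kind can be sketched as below. The stop-word list here is a small illustrative set, not the list used in the paper, and the tokenization rule (keep alphabetic runs only) is our assumption consistent with removing non-alphabet characters, numbers, and punctuation.

```python
import re

# Illustrative stop-word/pronoun list; the paper's actual list is not given
STOP_WORDS = {"the", "is", "at", "a", "an", "to", "it", "he", "she", "they"}

def preprocess(tweet):
    """Lowercase, keep alphabetic tokens only, drop stop words and pronouns."""
    tokens = re.findall(r"[a-z]+", tweet.lower())   # strips numbers/punctuation
    return [t for t in tokens if t not in STOP_WORDS]

print(preprocess("He said the movie is great!!! 10/10"))
# → ['said', 'movie', 'great']
```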