More than Just Words: Unfolding Presidential Rhetoric

Adi Sidapara
7 min read · Oct 13, 2016


Natural Language Processing Sentiment Analysis of Rhetoric from the Second Presidential Debate

[Cartoon credit: Joe Heller]

Preface

American politics has come to a crossroads of embittering polarization and sardonic hypocrisy. With today’s easy access to technology, our presidential candidates’ Twitter-itchy fingers carelessly post crude and lewd tweets, rocking the foundations of political courtesy and decency that much of the world once looked to as model election behavior. This fierce and (unfortunately) closely contested election has seen both sides utter irrevocable rhetoric, with Trump referring to Mexicans as “rapists” and “criminals” and Hillary referring to Trump’s supporters as “irredeemable deplorables”. Certainly, no side is right; this culture of blatant invective has trickled down to everyday Americans. And with increased discourse on the community-sensitive topic of race relations, this election’s harsh words have only deepened the divisions among us at a time when they need to be mended. That is why it is of the utmost importance to evaluate this election’s rhetoric: it has already set a precedent for future generations.

Introduction

In an increasingly data-driven world, machine learning has risen as a primary method of understanding and using data for human good. Natural language processing (NLP) is a subfield of machine learning, and Algorithmia defines it as “a way for computers to analyze, understand, and derive meaning from human language in a smart and useful way.” By parsing sentences into their atomic elements, NLP allows us to perform numerous tasks, one of which is sentiment analysis: computationally categorizing opinions and determining whether they are positive, negative, or neutral toward a topic.

Moreover, American politics has long been branded as “negative politics”. Gallup found that most voters are discontented with negative campaigning. To determine just how negative this year’s election was, I performed naive statistical sentiment analysis on the public campaign rhetoric of both candidates to compare them and determine which candidate did more to create a negative political scene. I used the second presidential debate as the foundation for my evaluation because it involved both candidates and engaged a larger audience than most individual campaign events.

Methods

To data-mine, I built datasets for both Clinton and Trump based on their respective portions of a debate transcript made publicly available by Fortune. These were tokenized into sentences, and one interesting observation was that Trump produced significantly more sentences than Hillary: 1.9x more, to be exact.

This largely reflects Trump’s rather short sentence construction (his sentences were oftentimes incomplete).
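
For reference, here is a minimal sketch of how the two datasets could be built and compared, assuming each candidate’s portions of the transcript were saved to plain-text files (clinton.txt and trump.txt are hypothetical names, as are the variables below):

library(syuzhet)

# Hypothetical files holding each candidate's lines, extracted
# by hand from the Fortune transcript.
clinton.raw <- paste(readLines("clinton.txt"), collapse = " ")
trump.raw   <- paste(readLines("trump.txt"), collapse = " ")

# Tokenize each corpus into sentences.
clinton.sentences <- get_sentences(clinton.raw)
trump.sentences   <- get_sentences(trump.raw)

# Trump's sentence count came out roughly 1.9x Clinton's.
length(trump.sentences) / length(clinton.sentences)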

The sentiment analysis library I used is Syuzhet, a package for R. According to its author, Matthew Jockers, the library parses sentences to detect “emotional shifts” in sentiment rather than shifts in topic (a popular alternative method). The package itself uses four sentiment dictionaries, which it consults for keywords and their respective scores.

“The Syuzhet package attempts to reveal the latent structure of narrative by means of sentiment analysis. Instead of detecting shifts in the topic or subject matter of the narrative (as Ben Schmidt has done), the Syuzhet package reveals the emotional shifts that serve as proxies for the narrative movement between conflict and conflict resolution.”

Each sentence is consequently assigned a score, with the sign indicating positive or negative sentiment and the number signifying a “magnitude”.

> library(syuzhet)
> test.string <- "I love pie."
> # get_sentences() parses a larger string and returns
> # a character vector of sentences
> tokenized <- get_sentences(test.string)
> test.sent <- get_sentiment(tokenized)
> test.sent

The test code above resulted in the following sentiment value.

[1] 0.75

For a negative sentence construct, the method returns a negative value.

> library(syuzhet)
> test.string <- "I hate pie."
> tokenized <- get_sentences(test.string)
> test.sent <- get_sentiment(tokenized)
> test.sent
[1] -0.75

And to test the method on a more complex sentence, I ran one more example.

> library(syuzhet)
> test.string <- "I love myself like Kanye loves Kanye"
> tokenized <- get_sentences(test.string)
> test.sent <- get_sentiment(tokenized)
> test.sent
[1] 2.25

Seems legit, right? Well, no. The shortcomings of the method arise when evaluating double negations.

> library(syuzhet)
> test.string <- "I do not hate pie."
> tokenized <- get_sentences(test.string)
> test.sent <- get_sentiment(tokenized)
> test.sent
[1] -0.75

This hurdle did not affect the results much, since people rarely use double negations in speech.
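
With the method sanity-checked, the same two calls can be applied to the debate data; a sketch, continuing the hypothetical names from above:

# One sentiment score per sentence for each candidate.
clinton.sent <- get_sentiment(clinton.sentences)
trump.sent   <- get_sentiment(trump.sentences)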

Results

[Figure: density distribution of Hillary Clinton’s sentiment scores]
[Figure: density distribution of Donald Trump’s sentiment scores]
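
Density plots like the ones above can be reproduced in base R along these lines (a sketch, not the exact plotting code behind the figures):

# Kernel density of each candidate's per-sentence scores.
plot(density(clinton.sent), xlab = "Sentiment score",
     main = "Density of Clinton's sentiment scores")
plot(density(trump.sent), xlab = "Sentiment score",
     main = "Density of Trump's sentiment scores")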

Treating anything below 0 as negative, anything above as positive, and exactly 0 as neutral, the sentiment score distributions for Trump and Clinton were revealing. Donald Trump had a median of 0, but after accounting for his outliers on the extreme negative end, nearly 63% of his sentences scored negative. Hillary Clinton had a median of 0.25, and accounting for her negative outliers, around 47% of her sentences scored negative. By this measure, based solely on rhetoric, Trump was 1.34x more negative than Clinton in the debate.
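
These summary figures follow directly from the score vectors; a sketch of the computation (the proportions here are simple shares below zero, before any outlier adjustment):

median(clinton.sent)    # 0.25 in the debate data
median(trump.sent)      # 0

# Share of sentences scoring below zero.
mean(clinton.sent < 0)  # ~0.47 reported above
mean(trump.sent < 0)    # ~0.63 reported above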

[Figure: box plot of the sentiment score distributions]

The box plot offers a comparative display of each candidate’s sentiment score distribution. Trump had 26 outliers and Clinton had 16, indicating that Trump’s rhetoric swung to both extremes more than Clinton’s. Of Clinton’s outliers, 9 were extreme positives and 7 were extreme negatives; of Trump’s, 12 were extreme positives and 14 were extreme negatives. In general, Trump’s more extreme rhetoric tended to be negative, whereas Clinton’s tended to be positive.
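
One way to extract those outliers is base R’s boxplot.stats(), which flags points beyond 1.5x the interquartile range from the quartiles; a sketch, assuming the same score vectors:

# Points beyond the whiskers (1.5x IQR) count as outliers.
clinton.out <- boxplot.stats(clinton.sent)$out
trump.out   <- boxplot.stats(trump.sent)$out

length(clinton.out)   # 16 outliers
sum(clinton.out > 0)  # 9 extreme positives
sum(clinton.out < 0)  # 7 extreme negatives
length(trump.out)     # 26 outliers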

I further explored what the sentiment scores mean by looking at the exact sentences that scored highest and lowest for each candidate.
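
Pulling those sentences out takes one line per extreme; a sketch, continuing the hypothetical names above:

# Highest- and lowest-scoring sentence for each candidate.
clinton.sentences[which.max(clinton.sent)]  # most positive
clinton.sentences[which.min(clinton.sent)]  # most negative
trump.sentences[which.max(trump.sent)]      # most positive
trump.sentences[which.min(trump.sent)]      # most negative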

Clinton’s most positive sentence has a score of 3.40.

“Obviously, Medicare, which is a single-payer system, which takes care of our elderly and does a great job doing it, by the way, and then all of the people who were employed, but people who were working but didn’t have the money to afford insurance and didn’t have anybody, an employer or anybody else, to help them.”

Trump’s most positive sentence has a score of 2.90.

“But I want to do things that haven’t been done, including fixing and making our inner cities better for the African-American citizens that are so great, and for the Latinos, Hispanics, and I look forward to doing it.”

Clinton’s most negative sentence has a score of -2.75.

“There are children suffering in this catastrophic war, largely, I believe, because of Russian aggression.”

Trump’s most negative sentence has a score of -2.50.

“You know, when we have a world where you have ISIS chopping off heads, where you have — and, frankly, drowning people in steel cages, where you have wars and horrible, horrible sights all over, where you have so many bad things happening, this is like medieval times.”

Interestingly, the most positive sentences were generally optimistic outlooks on policies and on the candidates’ constituents, whereas the most negative sentences concerned foreign hostilities.

[Figure: scatterplot of sentiment scores; blue is Clinton, red is Trump]
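
A scatterplot along these lines can be drawn in base R; again a sketch, not the exact code behind the figure:

# Per-sentence scores in debate order; Trump is plotted first
# because his longer vector sets the x-axis range.
plot(trump.sent, col = "red", pch = 16,
     xlab = "Sentence index", ylab = "Sentiment score")
points(clinton.sent, col = "blue", pch = 16)
abline(h = 0, lty = 2)  # neutral line
legend("topright", legend = c("Clinton", "Trump"),
       col = c("blue", "red"), pch = 16)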

Conclusion

Natural language processing and sentiment analysis are just one more approach to evaluating the candidates, based on the attitudes they hold toward America, the world, and life in general. For those like me who like to quantify and distill observations into whole truths, sentiment analysis offers metrics for comparing attitudes and evaluating rhetoric with precision. Based on this analysis of the second presidential debate, three conclusions can be drawn:

  1. Trump used significantly more sentences than Clinton, and his were oftentimes incomplete, missing either a subject or a predicate.
  2. Trump was more negative than Clinton throughout the debate. However, Clinton said the most negative sentence in the debate.
  3. The candidates used extremely positive rhetoric when discussing policies and plans for the American people, but they used extremely negative rhetoric when discussing war and foreign conflict.

The method I used was rather naive, evaluating individual sentences rather than contextualizing them. To fix problems like double negation within sentences, and in larger in-context cases, another method I will explore is parsing sentences into large binary trees. I can assign a score to each leaf and node of these trees individually and use the handy power of multiplication between two nodes or leaves to eliminate the double-negation problem, as sketched below.
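
To make the idea concrete, here is a toy sketch of sign propagation on such a tree. The scores, weights, and tree structure are invented for illustration; a real implementation would get its trees from a syntactic parser.

# Score a tree node: leaves carry word scores; internal nodes
# multiply a modifier weight into the sum of their children,
# so a negator with weight -1 flips the sign of its subtree.
leaf <- function(score) list(left = NULL, right = NULL, score = score)

score_node <- function(node) {
  if (is.null(node$left)) return(node$score)  # leaf case
  node$weight * (score_node(node$left) + score_node(node$right))
}

# "I do not hate pie": "not" (weight -1) negates "hate" (-0.75).
tree <- list(weight = -1, left = leaf(-0.75), right = leaf(0))
score_node(tree)  # 0.75: the negation now flips the score correctly

Two stacked negation nodes would multiply back to a positive weight, which is exactly the double-negation behavior the flat dictionary lookup misses.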

Credits to Yash Pershad for editing this mess and giving it a piercing title.
