The most Trumpian and Clintonesque moments in the debate (according to a computer)

Let’s teach a computer to guess who-said-what in the first US presidential debate between Hillary Clinton and Donald Trump. (Methodology at the bottom.) This is a way of finding out which moments the candidates were most like themselves — as well as when they were most like Bernie Sanders or Ted Cruz.

Our machine learning model doesn’t know about shoulder-shimmies but it does know all of the words Trump and Clinton used in the debates with Sanders, Cruz, O’Malley, Rubio, etc. Based on those patterns, it takes a look at the presidential debate and says this is the most Clintonian:

and i would also do everything possible to take out their leadership. i was involved in a number of efforts to take out al qaida leadership when i was secretary of state, including, of course, taking out bin laden. and i think we need to go after baghdadi, as well, make that one of our organizing principles. because we’ve got to defeat isis, and we’ve got to do everything we can to disrupt their propaganda efforts online.

And here’s what the model is most convinced is the most Trumpesque language:

as far as the cyber, i agree to parts of what secretary clinton said. we should be better than anybody else, and perhaps we’re not. i don’t think anybody knows it was russia that broke into the dnc. she’s saying russia, russia, russia, but i don’t-maybe it was. i mean, it could be russia, but it could also be china. it could also be lots of other people. it also could be somebody sitting on their bed that weighs 400 pounds, ok?

The model only knows about words and short phrases. So in the Clinton excerpt, the model is persuaded by phrases that Clinton used often in the debates with Sanders and O’Malley and that none of the other candidates used nearly as much:

  • we’ve got (Clinton 58 total uses in all debates, Trump 3)
  • i think (Clinton 313, Trump 144)
  • go after (Clinton 30, Trump 0)
  • need to (Clinton 104, Trump 7)
  • we can (Clinton 85, Trump 19)
  • efforts (Clinton 21, Trump 0)
  • do everything (Clinton 24, Trump 2)

Clinton’s key phrases often involve very specific issues (comprehensive immigration reform, the affordable care act, clean energy). But even though there are concrete nouns like bin Laden and Baghdadi in this excerpt, they don’t occur all that often (bin Laden was only mentioned nine times total in the Democratic/Republican debates and Baghdadi never was).

The phrases that convince the model that the Trump excerpt is his are:

  • don’t (Trump 312 total uses across all debates, Clinton 129)
  • china (Trump 74, Clinton 16)
  • as far as (Trump 34, Clinton 4)
  • i mean (Trump 44, Clinton 6)
  • ok? (Trump 26, Clinton 0)
  • better than (Trump 20, Clinton 0)
  • clinton (Trump 39, Clinton 4)

While you and I know that Trump routinely insults people, the model doesn’t know anything about it — Trump didn’t mention 400-pound people in previous debates (although he does mention sitting a surprising amount).

Nevertheless, the model picks up on some of Trump’s key themes and ways of phrasing things. Note that his passage has a lot of russia in it. The model isn’t particularly influenced by that — it is a signal of Trump but a fairly weak one (china is about 12 times stronger).

The second and third most Clintonesque passages:

right now, that’s not the case in a lot of our neighborhoods. so i have, ever since the first day of my campaign, called for criminal justice reform. i’ve laid out a platform that i think would begin to remedy some of the problems we have in the criminal justice system.

And:

the gun epidemic is the leading cause of death of young african-american men, more than the next nine causes put together. so we have to do two things, as i said. we have to restore trust. we have to work with the police. we have to make sure they respect the communities and the communities respect them. and we have to tackle the plague of gun violence, which is a big contributor to a lot of the problems that we’re seeing today.

The second and third most Trumpian segments of the debate:

thank you, lester. our jobs are fleeing the country. they’re going to mexico. they’re going to many other countries. you look at what china is doing to our country in terms of making our product. they’re devaluing their currency, and there’s nobody in our government to fight them. and we have a very good fight. and we have a winning fight. because they’re using our country as a piggy bank to rebuild china, and many other countries are doing the same thing.

And:

well, i told you, i will release them as soon as the audit. look, i’ve been under audit almost for 15 years. i know a lot of wealthy people that have never been audited. i said, do you get audited? i get audited almost every year.

Of course, models get things wrong. Here’s the passage that Clinton actually said but that the model was most convinced came from Trump. Note that it’s about his business, so it’s as if she’s taking the words out of his mouth:

we have an architect in the audience who designed one of your clubhouses at one of your golf courses. it’s a beautiful facility. it immediately was put to use. and you wouldn’t pay what the man needed to be paid, what he was charging you to do…

And here’s the passage that was really Trump but that the model was convinced was Clinton (among other things, she does tend to use the word important a lot):

because i want to get on to defeating isis, because i want to get on to creating jobs, because i want to get on to having a strong border, because i want to get on to things that are very important to me and that are very important to the country.

The model was trained not just to identify Clinton and Trump but to also identify their major debate partners. So we can also ask, “What was the most Bernie Sanders moment?” And “Who channeled Ted Cruz when?”

The model is convinced this was Sanders (it was Clinton):

nine million people-nine million people lost their jobs. five million people lost their homes. and $13 trillion in family wealth was wiped out.

And here’s the most Cruzian moment (it was Clinton):

well, i think you’ve seen another example of bait-and-switch here. for 40 years, everyone running for president has released their tax returns. you can go and see nearly, i think, 39, 40 years of our tax returns, but everyone has done it. we know the irs has made clear there is no prohibition on releasing it when you’re under audit.

The overall top words for the candidates

While I like picking out the moments in the debate where they were most/least themselves, we may want to know their overall favorite phrases and topics. Here, I’m restricting myself to one-, two-, and three-word phrases. Note that Donald Trump vastly prefers monosyllables, so you’ll see a lot more of those in his list.

Methodology

I grabbed all of the debate transcripts from The American Presidency Project and built a training set out of all the Republican and Democratic primary debates. For training, I kept only the people with the greatest contributions: among the Republicans, that meant Bush, Carson, Christie, Cruz, Kasich, and Rubio in addition to Trump; Sanders, O’Malley, and Clinton were the Democrats included. The moderators Blitzer, Cooper, and Tapper had enough turns/words to also be put in the model. So this isn’t actually a two-way division, it’s a 13-way division. You know, to keep things spicy.

The test set was the first debate between Clinton and Trump (I didn’t include Lester Holt’s speech in the training nor did I evaluate it in the test).

I tried a variety of algorithms and a few different definitions of ngram features. The best turned out to be a multinomial Naive Bayes model with bigrams. The Presidency Project arranges its transcriptions into paragraphs, so a given speaker’s “turn” can be broken into multiple pieces. Overall the model was trying to get 167 Clinton speech events and 241 Trump speech events right. I’m going to keep calling these “turns” even though that isn’t quite right.
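The winning setup can be sketched with scikit-learn: a `CountVectorizer` restricted to bigrams feeding a `MultinomialNB` classifier. The training lines below are invented stand-ins for the real transcripts, not the actual data:

```python
# Minimal sketch of a multinomial Naive Bayes model over bigram counts,
# the kind of model described above. Training texts are toy examples.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

train_texts = [
    "we've got to do everything we can",   # stand-in Clinton turn
    "it could also be china, ok?",         # stand-in Trump turn
    "we need a political revolution",      # stand-in Sanders turn
]
train_speakers = ["CLINTON", "TRUMP", "SANDERS"]

# ngram_range=(2, 2) keeps only bigrams, matching the best model above.
model = make_pipeline(
    CountVectorizer(ngram_range=(2, 2), lowercase=True),
    MultinomialNB(),
)
model.fit(train_texts, train_speakers)

print(model.predict(["we've got to do everything"]))  # prints ['CLINTON']
```

In the real task the vectorizer would be fit on all 13 speakers’ turns from the primary debates and then applied to the general-election transcript.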

As you can tell above, I let the model predict all 13 major speakers from the previous debates because I wanted to find moments where the candidates seemed to talk like others. It’d be silly to think anyone always talks like themselves. We are polyphonous.

But in the 13-way task, precision is 0.80 and recall is 0.57. If the model says it’s Clinton or Trump, it probably is, but with so many other people to choose from, it sometimes classifies our two candidates as other speakers.

If we said, “just predict Clinton or Trump”, then precision jumps up a few points for both candidates and recall goes up a lot: 0.75 for Clinton, 0.88 for Trump.
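One way to do that restriction without retraining is to take the multi-way class probabilities and argmax over only the Clinton and Trump columns. The sketch below uses four stand-in speakers and invented probability rows rather than the real 13-way output:

```python
# Illustrative sketch: restrict a multi-way classifier's prediction to
# two labels by keeping only those columns of the probability matrix.
# Speakers and probabilities here are invented for illustration.
import numpy as np

speakers = ["CLINTON", "TRUMP", "SANDERS", "CRUZ"]  # stand-in for the 13

# Fake probability rows for three test turns.
proba = np.array([
    [0.30, 0.25, 0.35, 0.10],   # 13-way argmax would be SANDERS
    [0.10, 0.60, 0.20, 0.10],   # 13-way argmax would be TRUMP
    [0.40, 0.35, 0.15, 0.10],   # 13-way argmax would be CLINTON
])

keep = [speakers.index("CLINTON"), speakers.index("TRUMP")]
two_way = [("CLINTON", "TRUMP")[np.argmax(row[keep])] for row in proba]
print(two_way)  # the first turn flips from SANDERS to CLINTON
```

Turns the 13-way model handed to Sanders, Cruz, or a moderator get reassigned to whichever of the two candidates is more probable, which is why recall rises so sharply.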

It’s also the case that the longer the text, the better the model does. Clinton’s average turn was 38 words, Trump’s was 35. The 136 turns with 20 words or fewer are classified correctly only about 70% of the time, while the 272 turns in the 21–117-word range are correctly classified 90% of the time.

For the infographic, I compared Monroe et al.’s log-odds approach to characterizing two different discourses with the predictions from the 13-way classification task. The relevancy scores reported are really absolute values of z-scores from the Monroe et al. technique (see also slide 63 here), but I only kept items that were also correctly predicted by the 13-way classifier. This has the sad effect of dropping my favorite Trumpesque phrase, believe me: the 13-way classifier basically had Trump and Bernie Sanders tie if you gave it just that phrase. That’s not because Bernie uses believe me, but because he uses believe and me a lot separately, and this particular model isn’t smart enough to weight the bigram more heavily than the two unigrams.