How We Used Neural Networks to Understand Congress

Anastassia · Published in FiscalNoteworthy · 14 min read · Jul 9, 2018

If you’re a political TV junkie, you may have come across an episode (or several) of The West Wing or House of Cards where the protagonists scramble to get enough votes for a key piece of legislation. They use a whiteboard to count Yes and No votes and meet discreetly with legislators to arrange backroom deals. Determining how Congress will vote (also known as constructing a whip list) may make for entertaining drama, but it is in fact a routine task for many real-world political advocates and journalists.

While researchers typically construct whip lists via phone calls to congressional offices and the analysis of experienced pundits, political and computer scientists have been interested in modeling voting behavior through statistical analysis since the 1960s. These data-driven approaches to vote prediction are appealing for several reasons: they can be used to dissect individual votes faster, to prioritize controversial bills, and to observe broader patterns in legislator behavior.

In FiscalNote’s new paper, Party Matters: An Embedding Model of Roll Call Votes, we combine traditional analysis of bill text with information about the party of the bill sponsors to predict U.S. Congress vote counts. Our technique allows us to predict individual votes more accurately than existing methods and will be presented later this month at ACL 2018, a premier Natural Language Processing conference.

Electronic Voting System in Congress

To explain our state-of-the-art approach, we first dive into some background from political and computer science. Afterward, we will show how our method can be used to assign legislators ideology ratings, and how the topic and sponsor composition of a bill affect the outcome of a vote.

Background: Behind the Vote

Individual votes are often affected by hard-to-quantify variables, such as pressure from lobbyists or quid-pro-quo negotiations (i.e., “you vote Yes on my bill, and I will support yours”). Still, political scientists have been able to identify several concrete factors that influence how a legislator will vote, two of the most notable being the ideology of the legislator(s) sponsoring the bill and the actual subject matter being legislated.

Political ideology describes a general set of beliefs and values and is typically presented as a scale ranging from “liberal” to “conservative.” Legislators usually sponsor bills that propose changes consistent with their ideology, and others will usually vote in favor of bills that are aligned with their own views. In other words, we can assume that a legislator will vote Yes if their ideological position on the topic in question is close on the spectrum to the ideology of the proposed legislation. In the earliest studies on voting data, political scientists quantified the positions of legislators on this spectrum by comparing their votes on different bills.

Ideal Points Line

This catch-all linear analysis is appealing for its simplicity, but it fails to adequately capture the nuance and complexity that (we would hope) characterizes our legislators’ views. A superior approach envisions several different ideological spectrums for every legislator, one for each policy area on which they may vote or legislate. One prominent example of the usefulness of this model emerged during the 2016 Democratic presidential primaries, when Bernie Sanders appeared very liberal compared to Hillary Clinton on various economic issues (e.g., student loans and taxes) but more moderate on gun control. If we were to give Sanders, currently a senator representing Vermont, only a single static ideology rating, we would be conflating his economic and gun control positions, thereby risking inaccurate predictions on student loan forgiveness or background check legislation that may land on the Senate floor.

Once we’ve identified the major factors that affect whether a legislator will vote Yes or No on a given bill, we turn not to phone calls or qualitative research but to computationally driven analysis, using natural language processing and machine learning to predict their behavior automatically.

Background: Natural Language Processing

Natural language processing (NLP) is a subfield of Artificial Intelligence with a focus on computational approaches to processing human language. One application of NLP techniques is to model the subject matter of any text by identifying linguistic and semantic patterns. For example, a bill regulating energy use may mention coal or natural resources. Computers can identify trends like this one by observing a large number of documents and can use those patterns to assign topics to any new bill. Your email provider uses similar techniques to differentiate spam from legitimate messages, and NLP is common in a variety of other industries as well.

But language can do more than suggest relevant subject classification; word choice is often a reflection of ideology itself. Obamacare vs. Affordable Care Act is a now-famous example of how Republicans seek to highlight President Barack Obama’s footprint while Democrats emphasize the end goal of cutting costs. Computers can successfully identify ideology based on patterns like this one when they are present on Twitter or in books, but they struggle when it comes to legislation because the language patterns tend to be far more subtle. For example, the difference between a liberal and a conservative proposal may be as simple as increasing or decreasing funding for a certain program.

Background: Sessions of Congress

The U.S. Congress operates in two-year sessions, each followed by an election cycle for the House and for one-third of the Senate. Previously, computer scientists hypothesized that the text alone would be sufficient to capture both signals: the subject matter and the ideology of a given bill. While their initial results were promising, their analyses were typically conducted separately for each session of Congress. We discovered that this seemingly straightforward decision had major implications for the robustness of the resulting models.

Consider the aforementioned spam filter example: if you looked at 1,000 spam and non-spam emails from April, you could probably identify a number of patterns that distinguish the malicious messages. If you then looked at another 1,000 emails from May, the patterns you found would probably not change significantly. Both sets would likely include random CAPS, dollar signs ($$), and curious attachments.

The same cannot be said about the patterns within congressional bills, however, for one key reason: the party in power changes between sessions. While we may aspire to bipartisan ideals, in reality, about 95% of bills that receive a floor vote have at least one sponsor from the party in power. The primary sponsors of these bills are more evenly split between the two parties, which shows that working with the party in power is essential if you want your bill to see the light of the floor.

In 2011, when Republicans controlled the House, prior models could expect that most bills making it to the floor had a conservative lean, a change from the year before, when the Democrats were in power. So, if the model saw Lucy Lawmaker vote Yes on several bills about coal mining in 2011, it would infer that she has conservative views on this issue.

At FiscalNote, our goal is to predict votes across sessions; otherwise, we would need to revamp our model and start from scratch every two years. Thus, to adequately identify ideology, we had to look for clues beyond bill text. It turned out that a simple signal would suffice: the party affiliation of the bill sponsor(s). While claiming that you are a Republican or Democrat does not fully capture the spectrum of liberal and conservative ideology, it is a useful proxy. By considering the parties of the sponsors together, we can assign bills an ideology: a bill with three Democratic and three Republican sponsors would fall in the middle, while one with four Democratic sponsors and one Republican sponsor would lean more liberal, and so on. With this approach in mind, our objective became clear: to create a model combining both bill text and bill sponsorship. To make that happen, we employed machine learning.
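As a concrete illustration of the sponsor-mix arithmetic, here is a minimal sketch; the sponsor lists below are hypothetical examples, not real bill data:

```python
# A minimal sketch of the sponsor-party proxy; party labels are
# hypothetical examples rather than real sponsor data.
def sponsor_mix(sponsor_parties):
    """Return the fractions of Democratic and Republican sponsors."""
    n = len(sponsor_parties)
    return (sponsor_parties.count("D") / n,
            sponsor_parties.count("R") / n)

print(sponsor_mix(["D", "D", "D", "R", "R", "R"]))  # (0.5, 0.5): falls in the middle
print(sponsor_mix(["D", "D", "D", "D", "R"]))       # (0.8, 0.2): leans more liberal
```

These two proportions are the only extra signal our model needs beyond the bill text itself.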

Background: Machine Learning

Machine learning is the field of computers “learning” patterns from existing data and applying those patterns to new data. Data sets consist of pairs of inputs and outputs. For example, an input might be the words in a bill, and the output might be its topic. By looking at a large number of bills, the computer can learn that the word oil is associated with energy-related bills, school is associated with education bills, and Medicaid is not directly related to either topic. In computer-speak, the patterns are expressed as mathematical formulas.

In a simple algorithm designed to identify education-related bills, the computer may assign a score to every possible word based on available data. The score for each word represents the impact that word has on the likelihood of the bill being related to education.

Table of Education Scores

Then, the machine can decide whether a new bill is about education according to the following procedure:

First, it will count each word in the bill.

Then, the machine can multiply the number of occurrences of each word in the bill by the assigned “education” score for that word.

Finally, the computer will classify the bill as falling under the “education” topic if the score is above a certain threshold (e.g., 100).
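To make the procedure concrete, here is a minimal sketch in Python; the word scores and the 100-point threshold are illustrative stand-ins, not values from a trained model:

```python
from collections import Counter

# Hypothetical per-word "education" scores a model might have learned;
# these numbers are illustrative only.
education_scores = {
    "school": 10.0,
    "student": 8.0,
    "teacher": 7.5,
    "oil": -5.0,
    "medicaid": 0.0,
}

def is_education_bill(bill_text, threshold=100.0):
    # Step 1: count each word in the bill.
    counts = Counter(bill_text.lower().split())
    # Step 2: multiply each word's count by its "education" score.
    score = sum(count * education_scores.get(word, 0.0)
                for word, count in counts.items())
    # Step 3: classify as education-related if the total clears the threshold.
    return score > threshold

bill = "A bill to fund every public school and support each student and teacher " * 5
print(is_education_bill(bill))  # True for this toy example
```

In practice, of course, the per-word scores are learned from a large collection of labeled bills rather than written by hand.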

To create a vote projection for a given legislator on a given bill, we can use a more sophisticated version of the principles outlined above. Our voting data set consists of the following inputs: the words of the bill, the percentages of Democratic and Republican sponsors, and the ideology of the voting legislator. The output, meanwhile, is simple: a Yes or No vote. Unfortunately, assigning each input a score and adding them up as in the above example is no longer sufficient given our disparate inputs and the complexity of linguistic patterns. But using Neural Networks, a type of machine learning algorithm, we can model the inputs and the relationships between them using multi-dimensional representations and complex non-linear formulas. This stands in contrast to the algorithm described above, which was limited to addition and multiplication of simple numbers.

Instead of tagging each input word with a simple frequency count, here the computer will use vectors (i.e., lists of numbers) to capture subtle patterns in language that a human reader might have missed. Below is a visual representation of potential vectors:

Word Vectors

Similarly, we represent the party affiliations of the sponsors and the ideology of the voting legislator as vectors. The component values of these vectors are learned from the data, just as the word scores were before. Once these input vectors are created, the neural network can combine them through a variety of complex operations to produce a final score.

Approach: Putting It All Together

Our method works as follows:

First, we create a bill text representation by averaging the vectors of each word contained in the bill.

Then, we combine one copy of the text vector with the Republican sponsor vector and a second copy with the Democratic sponsor vector to represent each party’s influence. Our reasoning here comes from the observation that the text vector (pictured above) will capture the essence of gambling but won’t capture whether the associated regulations are strengthened or weakened (i.e., which “side” of the issue the bill supports). However, parties usually have a fixed attitude on the issue of gambling (or any other subject), and the sponsor vector accounts, at least indirectly, for that influence.

We have now created separate Republican and Democratic vectors representing the bill, but the two copies should not carry equal weight. We therefore combine them in proportion to the number of sponsors from each party: if 30% of the sponsors are Republicans and 70% are Democrats, we multiply each vector by those percentages and add them together, creating a final bill vector that captures both ideology and topic. Most importantly, we can then compare the final bill vector to the voting legislator’s vector to determine whether the two are similar enough to warrant a Yes vote.
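The final setup can be sketched in a few lines of Python. This is a minimal illustration rather than our production model: the vectors below are randomly initialized stand-ins for embeddings the real model learns from voting data, and simple vector addition stands in for the richer non-linear operations the network actually uses:

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 50  # toy embedding size

# Randomly initialized stand-ins for vectors the real model learns from data.
word_vecs = {w: rng.normal(size=dim) for w in ["gambling", "casino", "tax"]}
party_vecs = {"R": rng.normal(size=dim), "D": rng.normal(size=dim)}
legislator_vec = rng.normal(size=dim)

def vote_probability(bill_words, r_share, d_share):
    # 1. Bill text representation: the average of its word vectors.
    text_vec = np.mean([word_vecs[w] for w in bill_words], axis=0)
    # 2. Combine one copy of the text with each party's sponsor vector.
    rep_copy = text_vec + party_vecs["R"]
    dem_copy = text_vec + party_vecs["D"]
    # 3. Mix the copies in proportion to each party's share of sponsors.
    bill_vec = r_share * rep_copy + d_share * dem_copy
    # 4. Compare the bill to the legislator; a high similarity score
    #    is read as a likely Yes vote.
    score = legislator_vec @ bill_vec
    return 1.0 / (1.0 + np.exp(-score))  # squash to a probability

print(vote_probability(["gambling", "casino"], r_share=0.3, d_share=0.7))
```

Scores like this one, computed for every legislator, amount to an automatically generated whip list for the bill.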

Our new technique for representing bills allows us to predict votes more accurately than previous approaches. Because legislation is often written in so-called “legalese” (i.e., a formal and archaic manner that requires subject matter or lawmaking expertise), many linguistic patterns will remain hidden in plain sight unless revealed by computer analysis.

Results: Vote Prediction

To train this model, we first collected all votes cast in the U.S. Congress between 2005 and 2012. Then, we excluded all unanimous votes, because they typically occur only on uncontroversial, non-ideological legislation like memorials or resolutions. The resulting dataset consisted of around 2,000 bills and 600,000 individual votes. The model was tested on a held-out set of bills from the same period that was unseen during training, as well as bills from 2013–2016; the latter set was added to study how well our model generalizes to more recent legislative sessions.
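The unanimous-vote filter is straightforward to express. Here is a rough sketch, assuming a hypothetical table with one row per (bill, legislator) vote; the column names and rows are illustrative only:

```python
import pandas as pd

# Hypothetical roll-call table: one row per (bill, legislator) vote.
votes = pd.DataFrame({
    "bill_id":    ["HR1", "HR1", "HR2", "HR2"],
    "legislator": ["A", "B", "A", "B"],
    "vote":       ["Yes", "Yes", "Yes", "No"],
})

# Drop bills where every recorded vote went the same way (unanimous),
# since those are typically uncontroversial memorials or resolutions.
contested = votes.groupby("bill_id")["vote"].transform("nunique") > 1
votes = votes[contested]
print(votes)  # only the HR2 rows survive
```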

Our model was able to predict 86% of votes correctly in the 2005–2012 test set. For comparison, approximately 68% of votes cast between 2005 and 2012 were Yes votes, meaning that a simple model that always predicted Yes would achieve only 68% accuracy. Similarly, we predicted 84% and 72% of votes correctly in the 2013–2014 and 2015–2016 sessions, against respective baselines of 66% and 61%. These accuracy scores are several percentage points better than previous state-of-the-art machine learning approaches.

The most recent session is a curious outlier: our model’s performance relative to more basic approaches was far less impressive than in previous years. This difference can most likely be attributed to the rise of the Freedom Caucus, a conservative voting bloc that gained influence during this time and eventually assumed party leadership. Our model, relying on its knowledge of Congress through 2012, was not able to anticipate this novel development. This underscores both a limitation of machine learning (i.e., we train our models on a sample of historical data) and the general importance of staying up to date with political dynamics even when relying on more objective, data-driven approaches to legislative analysis. In practice, we can strengthen our models by retraining them as new data becomes available.

Analysis: Legislator Ideology

In addition to being more accurate than prior data-driven vote prediction attempts, our model yields interesting insights regarding the relative ideological positions of prominent legislators. The legislator vectors from our model, for example, can be used as a proxy for legislator ideology. When we reduce these vectors to a single dimension, the positions of a few key legislators appear as follows:

Some of the positions on this chart reflect common-sense intuition: for example, both Murkowski and Collins are known to take centrist positions. On the other hand, Rand Paul appears less conservative than even the average Republican; it turns out his brand of conservatism is better captured in two-dimensional space. While ideal points are able to capture the general positions of most legislators, they may miss nuances of certain individual positions, especially for people with atypical political perspectives. The full chart of ideal points can be found here. These scores reflect only positions from 2005–2012, so more recent developments may be absent.
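The reduction to a single dimension can be done with standard tools. Below is a minimal sketch using PCA, one common way to project embeddings onto a single axis; the vectors here are random stand-ins for the learned legislator embeddings:

```python
import numpy as np
from sklearn.decomposition import PCA

# Random stand-ins for learned legislator vectors (one row per legislator).
rng = np.random.default_rng(0)
legislator_vecs = rng.normal(size=(5, 50))

# Project the 50-dimensional embeddings onto their first principal
# component to get a single left-right score per legislator.
scores = PCA(n_components=1).fit_transform(legislator_vecs).ravel()
print(scores)
```

Sorting legislators by this single score produces a left-to-right ordering like the one shown in the chart.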

Analysis: The Party Effect

Another offshoot of our model is the ability to envision how a given bill would fare with different hypothetical proportions of Republican or Democratic sponsors. To illustrate, we take two bills from the current session and use their summary texts to predict how legislators would vote given different sponsor compositions (thus ignoring the actual sponsors). We test two scenarios: one where all of the sponsors are Democrats and one where they are all Republicans. Note: while we test current-session bills, we use the 2005–2012 legislators, so more recent additions will be absent; we will also miss the effects of the recent Freedom Caucus.
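In terms of the toy model sketched earlier, this experiment amounts to scoring the same bill text twice with the sponsor shares flipped; the words below are placeholders rather than the bills’ actual summary texts:

```python
# Reuses vote_probability() from the earlier sketch; the bill words are
# placeholders, not the actual summary text of either bill.
bill_words = ["gambling", "casino"]
all_dem = vote_probability(bill_words, r_share=0.0, d_share=1.0)
all_rep = vote_probability(bill_words, r_share=1.0, d_share=0.0)
print(f"all-Democratic sponsors: {all_dem:.2f}")
print(f"all-Republican sponsors: {all_rep:.2f}")
```

Computed with each legislator’s own vector, these two scores give the coordinates plotted in the charts below.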

Our first bill is the Healthcare Market Certainty and Mandate Relief Act of 2017 (HR 4200). This is the now-notorious bill that reversed parts of the Affordable Care Act, including suspending the individual mandate. We present the positions of the legislators in our model in the chart below:

The farther up you are, the more likely you are to support the Democrat-sponsored bill; the farther to the right you are, the more likely you are to support the Republican-sponsored bill. This chart shows that most Democrats would likely support the bill sponsored by their own party and have negative opinions of a Republican version. The opposite is true for the Republicans. This observation alone may not be surprising, but it is important to consider how far apart the two parties are. The distribution suggests strong polarization, as is expected on an issue like this.

To better understand the model, we look at some of the policy terms in this bill: abortions, insurers, market, and mandate. These are associated with controversial issues. One can imagine that a liberal version of the bill would have extended, rather than suspended, measures related to these policy terms.

For comparison, we consider a second healthcare-related bill: the Ensuring Patient Access to Healthcare Records Act of 2017 (HR 4613).

This chart presents a far less polarized distribution. While legislators prefer to vote with their own party, they have more positive views of the opposing party’s bill. The terms associated with this bill are transparency, clinical, information, and technology. We hypothesize that this is a less divisive issue, so the differences between the parties are less pronounced.

The two charts demonstrate two effects. First, the fact that legislators take different values on the two axes in each chart suggests that the party composition of the sponsors affects their positions. Second, the difference in polarization between the two charts shows that the policy matter at hand also affects their positions. Our approach improves on previous algorithms because it is able to account for both of these factors.

FiscalNote’s foray into legislator vote prediction began with a common sense assumption: that legislators’ voting preferences are guided by their own ideology, as well as both the subject matter and ideological tilt of the bill in question. Using NLP and Neural Networks, however, we have been able to structure, automate, and test an advanced approach to vote prediction that both yields accurate results and lays the groundwork for a range of other interesting research projects surrounding legislator ideology, sponsor selection, and bill passage.

To those accustomed to phone calls to congressional offices and networking on the Hill, relying on computer algorithms and data scientists to predict the outcome of legislation is an understandably uncomfortable proposition. And capturing the intricacies of lawmaking (the under-the-table deals, newly formed caucuses, and private bargaining) will surely remain a major challenge for those seeking to use computers to model the workings of our governments. Yet computer and political scientists can achieve the most robust results only by working together: the best models have relied, and will continue to rely, on an in-depth understanding of political behavior and psychology, while political analysts, too, can become more efficient by allowing computer analysis to handle the routine questions and focusing instead on the more subtle human factors that shape national policy.
