Using AI to Automate Decision Making of Elected Officials

Brian Gourd
Computers and Society @ Bucknell
Apr 28, 2020 · 8 min read

By Brian Gourd & Katelyn Heuer

Introduction

More often than not, people say that democracy in the United States doesn’t reflect the ideals of true democracy. Many feel that politicians say what the people want to hear in order to get elected, but once in office they don’t keep those promises and instead do what benefits themselves and their donors rather than the people who voted for them. In this paper, we discuss the ethics and questions surrounding AI systems used to make decisions in place of elected officials, in the hope of mitigating the power these officials hold. We are not suggesting a total replacement of elected officials with AI, but rather the creation of AI that would guide the decisions made by these officials, with the use of these intelligent systems required in order to mitigate the biases in politicians’ decisions. In theory, such an AI could be created with the best interests of the citizens in mind, and corruption, lobbying, and other selfish factors that may influence a politician’s decision would be reduced or removed completely.

Like most things, there are best and worst case scenarios for incorporating AI into politics, especially when it comes to ethics. For this proposal, the best case is a scenario in which the AI completely removes bias from government decision making. In a perfect world, the AI would also make decisions that reflect the people’s desires rather than the politician’s, and if that were the case, the politician would no longer really be needed. Unfortunately, this is not a perfect world, and an entirely unbiased AI is not a viable option at the moment. On the other hand, the worst case scenario is that the powerful AI systems created to guide politicians’ decision making are hijacked by the politicians themselves to further their own agendas. This could take the form of adjusting the weights of factors within the AI to create an algorithm that mimics the biases of a certain politician. That would run directly against the professional responsibilities listed in the ACM Code of Ethics, which states that computing and communication resources should be accessed only when authorized or when compelled by the public good [6]. If this were to happen, one way to mitigate the damage would be to re-evaluate all of the decisions the AI has made, or to have the AI remake those decisions without the politicians’ influence. Better still, it could be prevented entirely if politicians had no way to access the AI in the first place.

Case Study: Rep. Scott Perry

To connect this idea to a level local to Lewisburg, we look to Rep. Scott Perry (R) of Pennsylvania. Rep. Perry is the congressman for Pennsylvania’s 10th congressional district, which encompasses Lewisburg. Lobbying is an unfortunate reality in our government. While many would deem it bribery in any other profession, lobbying has existed in our government for hundreds of years. Generally, lobbying is when a person or, more often, a group or organization seeks to influence a politician’s or public official’s decision on a particular matter [1]. This influence often takes the form of a direct monetary donation to the politician’s campaign. Lobbying has grown more complicated over time with the creation of Political Action Committees (PACs). PACs are organizations composed of business leaders, politicians, and other citizens who raise money to get a politician reelected, and they usually represent specific ideological or business interests [2]. The fact that PACs represent particular interests is the real problem, because a politician often feels inclined to vote on bills that favor the interests of the PACs supporting them rather than the interests of the general public. In the case of Rep. Scott Perry, the two largest contributors to his campaign in the previous cycle were the House Freedom Fund and Club for Growth, both of which exist to make politicians vote along party lines (in this case, the Republican party) [3]. Organizations like these damage our democracy because they prevent moderate politicians from raising enough funding for their campaigns. It becomes unclear what a politician’s personal views on certain matters are, since the only way they can receive the funding they need for reelection is to echo their party’s views, which may not always match their own beliefs. Additionally, these PACs stop politicians from listening to the voice of the American people and instead push them to listen to the desires of the businesses and wealthy individuals that make up the PACs supporting them. As a result, we lose the sense of this country being a true democracy, since many votes cast by political leaders don’t reflect the views of their constituents. In the case of Rep. Perry, we will discuss the ratification of the Equal Rights Amendment.

The Equal Rights Amendment (ERA) is a proposed amendment to guarantee equal legal rights for all Americans regardless of sex. You may be confused as to why it is still a “proposed amendment,” since many people believe it passed decades ago. While the ERA was passed by Congress in 1972, it needed approval by three-fourths of the states (38 out of 50) in order to be ratified [4]. The deadline for ratification has been extended each time it was reached without the 38-state minimum being met. Finally, in January 2020, Virginia approved the amendment, becoming the 38th state to do so. Although the 38-state minimum had been reached, the amendment still had not been ratified by February 12, 2020, when the vote to extend the deadline came up again [5]. The vote to extend the deadline seems obvious, at least to the states that have approved the proposal, since they support adding the amendment. However, the vote ended up going almost perfectly along party lines, with Rep. Perry, whose state of Pennsylvania accepted the proposal in 1972, voting against extending the deadline [5]. This is a prime example of party politics getting in the way of the general public’s voice being heard, since almost everyone supports equal rights. In this case, PACs such as the House Freedom Fund, which donate generously to politicians’ campaigns to have them vote along party lines, caused what should have been a simple vote to be opposed by nearly half of Congress.

The ERA vote is only one example among many. More often than not, senators, representatives, and other political figures vote in a way that doesn’t truly represent the voice of their constituents. This is where an AI could come into play. If artificial intelligence is allowed to do what it does best and build a representation of the voices of the people these officials represent, it could be used by government officials to guide their decision making based on what the people want, not what their PACs want. Unfortunately, implementation would not be so simple, since nothing about using this AI aids politicians personally in any way. To address this issue, we propose a few different implementations of this hypothetical AI below.

Possible Implementations

Firstly, we could require that some sort of AI, built from data collected from the American people, be cited as evidence for the decisions politicians make. By requiring evidence that a vote is truly representative of a politician’s constituents, the AI could help eliminate the biases that come from the politician’s personal beliefs or from the groups they feel inclined to aid with their votes. A rough sketch of what such evidence might look like follows below.
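As a rough illustration of this first option, the sketch below (in Python, with a hypothetical data format and function names; it is not tied to any real polling system) aggregates constituent survey responses on a bill into the kind of evidence summary a representative could be required to cite alongside their vote:

```python
from collections import Counter
from dataclasses import dataclass

@dataclass
class SurveyResponse:
    district: str  # e.g. "PA-10"
    bill_id: str   # e.g. "H.J.Res.79"
    stance: str    # "support" or "oppose"

def constituent_evidence(responses, district, bill_id):
    """Summarize how a district's surveyed constituents feel about a bill.

    Returns the share of respondents who support the bill, which a
    representative could be required to cite alongside their vote.
    """
    counts = Counter(
        r.stance for r in responses
        if r.district == district and r.bill_id == bill_id
    )
    total = sum(counts.values())
    if total == 0:
        return None  # no survey data collected for this district and bill
    return counts["support"] / total

# Hypothetical usage with made-up survey data:
sample = [
    SurveyResponse("PA-10", "H.J.Res.79", "support"),
    SurveyResponse("PA-10", "H.J.Res.79", "support"),
    SurveyResponse("PA-10", "H.J.Res.79", "oppose"),
]
print(constituent_evidence(sample, "PA-10", "H.J.Res.79"))  # 0.666...
```

Any real version would of course need far more careful sampling, respondent verification, and protection against manipulation than this toy aggregation suggests.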

Secondly, we could implement an AI that weighs the factors influencing a politician’s decision and gives their vote a specific weight in a decision, such as the passing of a bill, based on how accurately they represent the views of their constituents. Politicians who are more in touch with the views of the people they represent are given more of a voice and more power, while those influenced by outside sources such as lobbying are given less power in voting. This idea helps solve the problem of the AI not benefiting politicians, since it would encourage them to do a better job of representing their constituents. Not only would they gain more power for doing so, but it would also likely improve their chances of reelection, since the people they represent will feel better represented if the official speaking for them carries more weight in a vote. A sketch of such a weighting scheme appears below.
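To make this second option more concrete, here is a minimal sketch of how such a weighting scheme might work, assuming an alignment score derived from comparing each of a politician’s past votes with district polling. The function names, weight bounds, and data shapes are illustrative assumptions, not part of any existing system:

```python
def vote_weight(alignment_history, min_weight=0.25, max_weight=1.0):
    """Map a politician's record of agreeing with constituent polling
    onto a voting weight between min_weight and max_weight.

    alignment_history: list of booleans, one per past vote, True when
    the vote matched the surveyed majority view of the district.
    The bounds here are arbitrary illustration values.
    """
    if not alignment_history:
        return min_weight  # no track record yet, so start at the floor
    alignment_rate = sum(alignment_history) / len(alignment_history)
    return min_weight + (max_weight - min_weight) * alignment_rate

def weighted_tally(votes):
    """Tally a bill using per-politician weights.

    votes: list of (weight, is_yea) pairs.
    Returns True if the weighted yeas exceed the weighted nays.
    """
    yea = sum(w for w, is_yea in votes if is_yea)
    nay = sum(w for w, is_yea in votes if not is_yea)
    return yea > nay

# Hypothetical example: a representative who followed district polling on
# 8 of their last 10 votes outweighs one who followed it on only 3 of 10.
attentive = vote_weight([True] * 8 + [False] * 2)    # 0.25 + 0.75*0.8 = 0.85
inattentive = vote_weight([True] * 3 + [False] * 7)  # 0.25 + 0.75*0.3 = 0.475
print(weighted_tally([(attentive, True), (inattentive, False)]))  # True
```

One design choice worth noting: the weight never drops to zero, so even a poorly aligned representative retains some voice. Where to set that floor would itself be a political decision.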

Finally, the most extreme option is to give voting power directly to the AI. If the AI were a truly unbiased, fair representation of the mindset of the general public, then giving it the voting power should produce completely fair results. The issue is that an unbiased AI is likely impossible to create, since it would be built from data that reflects the biases already present in our society.

Conclusion

As stated in the ACM Code of Ethics, everyone is a stakeholder in computing, and since politics governs how people live and act, everyone would be affected if AI were used to aid in this high-level decision making [6]. United States citizens would be affected because the AI would represent their voices and is intended to benefit them. However, politicians would arguably be impacted the most. This implementation would require a complete restructuring of how political representation works in this country, would add far more accountability for politicians’ decisions, and would likely remove the campaign funding they receive through lobbying.

While implementing this AI could significantly help many people, there are also ethical challenges that must be factored into its design. One of these challenges is that the AI cannot discern right from wrong or decide what is best entirely on its own. The way the AI “thinks” depends on its programmers, and biases built into the AI may not be detected until it is already operating and making important decisions. This clearly conflicts with the general principles of the ACM Code, which state that computing professionals must be fair and take action not to discriminate [6]. Another ethical concern is that it is unclear who would be held accountable for mistakes and poor decisions made by the AI; with no one held accountable, the problem can perpetuate itself. Since the purpose of using the AI is to make important policy decisions, there is an obligation to address these challenges before the AI is put to use.

Bibliography

[1] The Editors of Encyclopaedia Britannica. “Lobbying.” Encyclopædia Britannica, Encyclopædia Britannica, Inc., 28 Feb. 2020, www.britannica.com/topic/lobbying.

[2] “What Is a PAC?” OpenSecrets.org, www.opensecrets.org/pacs/pacfaq.php.

[3] “Rep. Scott Perry — Campaign Finance Summary.” OpenSecrets.org, www.opensecrets.org/members-of-congress/summary?cid=N00034120&cycle=2018.

[4] “Ratification By State.” Equal Rights Amendment, www.equalrightsamendment.org/era-ratification-map.

[5] Perry, Scott. “Legislation.” Vote Record | U.S. Congressman Scott Perry, https://perry.house.gov/voterecord/.

[6] “ACM Code of Ethics.” Code of Ethics, www.acm.org/code-of-ethics.
