Artificial Intelligence in Policy Decision Making (Washington, D.C.)

Desmond Dinkins
Computers and Society @ Bucknell
9 min read · May 1, 2020

By Galaan Abdissa and Desmond Dinkins

Artificial intelligence (AI) is rapidly developing and changing the way we make decisions. These computational models mine large, complex datasets in order to predict human behavior. The private sector has embraced this shift and now lets AI drive bold business decisions, but where does government fall on the spectrum? One of the oldest traditions of civilization is ordered government, in which policies are crafted by leaders in power. In Washington, D.C., the politicians people vote into office are responsible for making rational decisions that benefit their constituents and the nation’s overall interests. Ideally, these officials examine accurate datasets, weigh their options, and decide accordingly; in practice, political pressure, corporate lobbying, and personal interests can push them toward irrational decisions. The main consequences are a lack of legislation being passed and a government too inefficient to make decisions that help its citizens. What if a hypothetical artificial system could objectively and efficiently examine all the datasets and factors that go into policy making and propose a solution? We will investigate the policy-making process in Washington, D.C., highlight some of the ethical challenges of using AI to craft government legislation, and propose ways in which government and AI can work together to produce rational, data-driven policy efficiently and without undue political influence.

Before proposing any artificial intelligence model for a process, the first priority should be understanding the natural process itself. The policy-making process can be broken into four main stages that describe how legislation happens in Washington, D.C.: problem identification, streams, policy windows and entrepreneurs, and post-policy implementation and evaluation (Perry et al. 5). Humans will always be present in all of these stages, but introducing artificial intelligence models as policy entrepreneurs and as evaluators could make for more rational and intelligent policies.

The first step in the policy-making process is identifying an issue and determining how a policy for that issue would reach the government’s agenda. This is viewed through three different “streams” in which influence for a specific policy resides (Perry et al. 5). The problem stream is how a specific topic is framed for the government to take up; this is where big data can help by surfacing patterns that associate problems with certain agents (Perry et al. 6). Effective problem identification produces nonpartisan outcomes and policies that do not skew toward one political party or another. Legislation becomes difficult to pass because controversial topics polarize the government, so focusing on reliable sources can drive interest past that problem. The politics stream consists essentially of the national perspective and “mood” surrounding a specific topic; campaigns by nonprofit organizations and media coverage of trending social issues are factors that influence whether the government takes on an issue (Perry et al. 7). The policy stream is the set of ideas generated for potential legislation by policymakers, the stakeholders who are trying to satisfy their local voters. It consists of policy windows and entrepreneurs who weigh all the options, and the voices of the larger constituency, to decide on a proposed policy (Perry et al. 8).

Businesses similarly have stakeholders responsible for generating profits, and AI models have successfully analyzed customer behavior to provide insight to businesses (e.g., sentiment analysis). In a case study with Twitter, sentiment analysis is used so that brands can understand how certain business decisions affect their customers, since “71% of the internet has been used through social media by the consumers” (Rasool et al. 1). Applied abstractly to government, sentiment analysis could be run after a policy is adopted to understand voters’ opinions of it. This is a potential agent in the post-implementation and adoption stage, where policies are iteratively modified and monitored in public (Perry et al. 11–13). Targeting those who are directly affected by policies and improving those policies would gradually chip away at societal problems. Problems are complicated and “nonlinear” (Perry et al. 11), which makes behavior difficult to track from a human perspective alone; if AI models can read large amounts of user data efficiently, policies could become more objective and rational in a shorter time frame.

Figure: Twitter’s sentiment analysis model
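To make the idea concrete, here is a minimal sketch of the kind of sentiment scoring described above. It uses NLTK’s off-the-shelf VADER analyzer rather than the specific model Rasool et al. study, and the example posts and the policy they mention are invented for illustration.

```python
# Minimal sketch: gauging public reaction to a (hypothetical) policy from short posts.
# Assumes the VADER lexicon has been downloaded via nltk.download("vader_lexicon").
from nltk.sentiment.vader import SentimentIntensityAnalyzer

posts = [
    "The new transit policy finally makes my commute bearable!",
    "Another tax hike disguised as 'transit reform'. No thanks.",
    "Not sure yet how the transit policy will affect my neighborhood.",
]

analyzer = SentimentIntensityAnalyzer()
# The compound score ranges from -1 (most negative) to +1 (most positive).
scores = [analyzer.polarity_scores(post)["compound"] for post in posts]

print(f"Average sentiment toward the policy: {sum(scores) / len(scores):+.2f}")
for post, score in zip(posts, scores):
    print(f"  {score:+.2f}  {post}")
```

In a real deployment the posts would come from a platform API and be filtered by topic and geography, but the aggregation step would look the same.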

Considering some of the benefits that AI in policy decision making would bring to the political realm, it is worth examining the various influences that shape the current policy-making process. Public policy is a multifaceted and complicated procedure involving interaction among several parties, the first of which is the public. Citizen gatherings and protests, electoral politics, and other modes of action that influence government decision making are some of the ways the people affect public policy. The state of the economy also weighs into policy decisions, because it determines operating and policy conditions for businesses. Advances in technology affect the business environment and thus indirectly affect public policy, especially when new technology fosters renewable energy; energy efficiency helps mitigate environmental harm, which has become an increasing public concern in recent years. Additionally, business and interest associations influence public policy by collaborating with government officials to push policies that align with their business affairs (Gittell et al., 3.1). Taking all of these factors into account, it is largely up to government officials to prioritize these elements in their decision making according to their own agendas and alliances, so there is inevitably a gray area of personal interest and subjectivity in the policies they promote. In the best case, with more data-driven legislation, artificial intelligence in the policy-making process would lessen the uncertainty and personal prejudice around legislation. By utilizing big data and analyzing the societal and economic effects of decisions, government policy becomes more objectively driven rather than politically influenced (Gittell et al., 3.1). This would help ensure that public interest and environmental sustainability are not overlooked because of political partisanship. Moreover, artificial intelligence would allow for faster implementation of policies, simply because of the speed of AI relative to human decision making in politics; as a result, politicians would also be able to evaluate the ramifications of those policies sooner.
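As a purely illustrative sketch of what such “data-driven” weighing of a proposal might look like, the snippet below combines a few normalized indicators (public opinion, economic impact, environmental impact) into one composite score. The indicator values and weights are invented placeholders, not drawn from any real policy model.

```python
# Illustrative only: combine invented, normalized indicators (0 = worst, 1 = best)
# into a single weighted score for a hypothetical policy proposal.
indicators = {"public_opinion": 0.62, "economic_impact": 0.40, "environmental_impact": 0.75}
weights = {"public_opinion": 0.40, "economic_impact": 0.35, "environmental_impact": 0.25}

score = sum(weights[name] * value for name, value in indicators.items())
print(f"Composite score for the proposed policy: {score:.2f}")
```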

Although AI in policy decision making has its benefits, a number of ethical risks and growing public concerns accompany its implementation in politics and elsewhere. AI systems work with massive amounts of data in order to make accurate classifications and decisions, so they must take in large volumes of personal information from the public, ideally with consent. However, that same information can be used to infringe on people’s privacy and exploited for ulterior motives by government and big business. One avenue is the re-identification and de-anonymization of individuals through their information. Typically, personal information is anonymized when it is used in datasets, but AI systems can use the data provided to de-anonymize personal information and identify the individuals with whom it is associated (PrivacyInternational.org). This raises concerns about tracking and surveillance of those individuals, as well as other ways their information might be misused. In the worst case, identification and decision making by AI systems can produce biased and discriminatory results for certain people; misclassification or misidentification of individuals can lead to disproportionate repercussions for particular groups (PrivacyInternational.org). AI technology is also extremely complex and relatively new to the general public, so its functionality and application can be hard for most people to understand. That makes it even more difficult to challenge, or even question, results that seem unfair, and it is hard to imagine the general public agreeing to the use of AI in political decisions that affect them so broadly if they have little or no idea how these systems actually work.
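A toy example of the re-identification risk described above: even with names removed, a handful of quasi-identifiers (ZIP code, birth year, gender) can be joined against a public dataset that still carries names. All records below are fabricated; real linkage attacks work the same way, just at far larger scale.

```python
# Fabricated example of a linkage attack: join "anonymized" records to a
# public auxiliary dataset on shared quasi-identifiers to recover identities.
anonymized_records = [
    {"zip": "20001", "birth_year": 1985, "gender": "F", "diagnosis": "asthma"},
    {"zip": "20009", "birth_year": 1972, "gender": "M", "diagnosis": "diabetes"},
]

public_roll = [  # e.g., a voter roll that still lists names
    {"name": "A. Jones", "zip": "20001", "birth_year": 1985, "gender": "F"},
    {"name": "B. Smith", "zip": "20009", "birth_year": 1972, "gender": "M"},
]

for record in anonymized_records:
    matches = [person["name"] for person in public_roll
               if (person["zip"], person["birth_year"], person["gender"])
               == (record["zip"], record["birth_year"], record["gender"])]
    if len(matches) == 1:  # a unique match re-identifies the individual
        print(f"Re-identified {matches[0]}: {record['diagnosis']}")
```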

To reduce data privacy risks in government ahead of time, politicians could check the results of an artificial intelligence model with a dedicated AI policy committee, one that tracks the effects of policy derived from artificial intelligence input. Ideally, the committee would be responsible for measuring bias in the model or consulting respected computer scientists to improve it, and if a proposed model caused a crisis, politicians could roll back the resulting legislation. Claiming that an artificial intelligence system could fully replace human input in policy decision making would be naive, given the problems inherent in AI to begin with. Technologists are making strides to reduce implicit biases in datasets and models at a fundamental level before they can be deployed at scale in a place like Washington, D.C., where decisions affect citizens directly. For AI and politicians to work together ethically, however, politicians must become fluent enough in artificial intelligence to understand how these systems can improve the way they look at large datasets and constituents’ behavior. One such project might be Washington, D.C. using sentiment analysis to track constituents’ reactions in order to evaluate a policy after it is implemented (stage 4). AI would then not shape legislation directly; it would give politicians immediate feedback on the policies they pass and prompt timely revisions. This arrangement is ethically sound because humans and technology coexist in the policy decision-making process, with neither AI nor humans controlling it completely. Mutual collaboration between technologists and politicians can be effective, and Washington, D.C. is in the early steps of this technological advancement.
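One concrete check such a committee might run, sketched below with invented data, is a disparate impact audit: compare the rate of favorable model decisions across demographic groups and flag the model when the ratio drops below the commonly cited four-fifths threshold.

```python
# Hypothetical disparate impact audit over fabricated (group, decision) pairs;
# a decision of 1 means the model's outcome favored the person.
from collections import defaultdict

decisions = [
    ("group_a", 1), ("group_a", 1), ("group_a", 0), ("group_a", 1),
    ("group_b", 1), ("group_b", 0), ("group_b", 0), ("group_b", 0),
]

totals, favorable = defaultdict(int), defaultdict(int)
for group, outcome in decisions:
    totals[group] += 1
    favorable[group] += outcome

rates = {group: favorable[group] / totals[group] for group in totals}
ratio = min(rates.values()) / max(rates.values())

print("Favorable-outcome rates:", rates)
print(f"Disparate impact ratio: {ratio:.2f}",
      "-> flag for committee review" if ratio < 0.8 else "-> within threshold")
```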

Therefore, for AI in policy making to be implemented in a way that accounts for these ethical concerns, the data professionals who build it must treat those concerns as their utmost priority within their ethical obligations to the public. The morals they should uphold in developing and evaluating such a system are outlined in the ACM Code of Ethics. Its general ethical principles state that a computing professional should contribute to society and to human well-being, acknowledging that all people are stakeholders in computing; avoid harm; be honest and trustworthy; be fair and take action not to discriminate; respect the work required to produce new ideas, inventions, creative works, and computing artifacts; respect privacy; and honor confidentiality (ACM Code of Ethics, 1). By closely adhering to these principles, data professionals can ease anxiety about the ethics of AI in politics. Any disparate impact would stem from the data itself if it were collected disproportionately, leaving a certain group underrepresented or overrepresented.
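That sampling issue can be checked directly: compare each group’s share of the training data against its share of the population the data is supposed to represent. The counts and population shares below are placeholders, not real figures.

```python
# Placeholder figures: flag groups whose share of the sample drifts more than
# five percentage points from their share of the population.
sample_counts = {"group_a": 7200, "group_b": 1800, "group_c": 1000}
population_share = {"group_a": 0.55, "group_b": 0.30, "group_c": 0.15}

total = sum(sample_counts.values())
for group, count in sample_counts.items():
    sample_share = count / total
    gap = sample_share - population_share[group]
    if gap > 0.05:
        label = "over-represented"
    elif gap < -0.05:
        label = "under-represented"
    else:
        label = "roughly proportional"
    print(f"{group}: sample {sample_share:.0%} vs population {population_share[group]:.0%} ({label})")
```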

References

[1] “Factors That Influence Public Policy.” saylordotorg.github.io/text_the-sustainable-business-case-book/s07-01-factors-that-influence-public-.html.

[2] Perry, Brandon, and Risto Uuk. “AI Governance and the Policymaking Process: Key Considerations for Reducing AI Risk.” Big Data and Cognitive Computing 3.2 (2019): 26. Crossref. Web.

[3] Rasool, Abdur, et al. “Twitter Sentiment Analysis: A Case Study for Apparel Brands.” Journal of Physics: Conference Series, vol. 1176, 2019, p. 022015.

[4] “Artificial Intelligence: Privacy International.” Artificial Intelligence | Privacy International, privacyinternational.org/learn/artificial-intelligence.

[5] “Code of Ethics.” ACM Ethics, 23 Sept. 2019, ethics.acm.org/code-of-ethics/.

Galaan Abdissa and Desmond Dinkins are both undergraduates at Bucknell University studying Computer Science. We decided to take on the topic of artificial intelligence and policy decision making because Desmond is from D.C. and because the intersection of technology and government is a growing field with the power to impact everyday citizens.
