Op-Ed: The Risks and Opportunities of Artificial Intelligence

Published in UNA-NCA Snapshots · Nov 29, 2021 · 7 min read

By Pragya Jain, UNA-NCA Advocacy Fellow

Technology dictates the majority of our interpersonal interactions: social media platforms guide our social and political discourse, and access to the internet has led to an outpouring of information that inundates everyday people. Beyond the digital world, emerging technology like artificial intelligence (AI) is quickly integrating with day-to-day tools; our phones use facial recognition technology, advertisements on social media are targeted using machine learning algorithms, and voice assistants like Siri are powered by AI. While these technologies have shaped our daily lives by automating everyday tasks and making information far more accessible, they also present a number of ethical and political concerns. To analyze the potential impact of their unregulated use, however, it is first necessary to understand how AI works.

The Fundamentals of AI and Machine Learning

The fundamental concept that powers artificial intelligence is the notion that a computer can learn from new information. Designed in a loosely similar fashion to the human brain, some forms of artificial intelligence rely on constructing an artificial neural network to build connections that replicate the complex level of processing characteristic of humans. Once an algorithm is built for a machine, it is trained through one of several forms of learning, such as supervised learning, self-supervised learning, or reinforcement learning, each of which develops and strengthens its neural network. After the computer understands its intended function, it works independently to self-correct through a series of trials and errors, a process that can ultimately lead to complex decision-making.
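To make this concrete, the sketch below shows supervised learning at its smallest scale: a single artificial "neuron" that learns the logical AND function by repeatedly correcting its own errors. The toy dataset, learning rate, and variable names are illustrative assumptions, not details of any particular AI system.

```python
# A minimal sketch of supervised learning: one artificial "neuron"
# learns the logical AND function through trial and error.

# Toy training data: pairs of binary inputs and the label we want predicted.
examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

weights = [0.0, 0.0]   # connection strengths, adjusted as the neuron learns
bias = 0.0
learning_rate = 0.1

for epoch in range(20):             # repeated passes over the data ("trials")
    for (x1, x2), label in examples:
        # Weighted sum of the inputs, thresholded to a 0/1 prediction
        prediction = 1 if weights[0] * x1 + weights[1] * x2 + bias > 0 else 0
        error = label - prediction  # the "error" in trial and error
        # Self-correction: nudge each connection toward reducing the error
        weights[0] += learning_rate * error * x1
        weights[1] += learning_rate * error * x2
        bias += learning_rate * error

print(weights, bias)  # after training, the neuron reproduces AND correctly
```

Real systems differ mainly in scale: instead of two weights and four examples, they tune millions of connections over enormous datasets, but the learn-predict-correct loop is the same.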

The logic behind deep learning and neural networks is simple, but it allows computers to continuously self-learn and improve their predictive capabilities if given enough data. One form of machine learning, often referred to as hierarchical learning, uses a large set of layered images or data points that the machine pieces together from the bottom up until it deeply understands a concept. For example, if given input in the form of a matrix of pixels, the computer will begin by classifying individual pixels, then form clusters of pixels, before finally identifying the full visual. This straightforward process is repeated tens of thousands of times, each time with new data, resulting in a web of nodes that strengthens the computer's predictive decision-making.
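The sketch below illustrates that bottom-up, layered structure: raw pixel values pass through successive layers, each combining the previous layer's outputs into higher-level features. The layer sizes are arbitrary and the weights are random rather than learned, so this is only a picture of the architecture, not a working classifier.

```python
# Illustrative sketch of layered, bottom-up processing (weights are random
# here; in a real network they would be learned from many examples).
import numpy as np

rng = np.random.default_rng(0)

image = rng.random((8, 8))   # input: an 8x8 matrix of pixel intensities
x = image.flatten()          # level 0: 64 individual pixel values

# Each layer combines the outputs of the layer below it.
layer1 = rng.standard_normal((32, 64))  # pixels -> small local patterns
layer2 = rng.standard_normal((16, 32))  # patterns -> larger clusters
layer3 = rng.standard_normal((3, 16))   # clusters -> final class scores

def relu(v):
    return np.maximum(v, 0.0)  # keep only positive activations

h1 = relu(layer1 @ x)    # first level of abstraction
h2 = relu(layer2 @ h1)   # second level, built on the first
scores = layer3 @ h2     # one score per candidate label ("the full visual")

print(scores.argmax())   # the label the network would pick for this input
```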

An Ethics Conundrum: Social Biases, Discrimination, and AI

The rapid growth and inevitable ubiquity of AI raise a wide range of issues, and there are numerous examples of AI inadvertently amplifying biased decision-making in sensitive areas, from criminal justice to healthcare. In the criminal justice system, COMPAS, a machine learning algorithm used to predict the likelihood of defendant recidivism, has been found to incorrectly flag Black defendants in Broward County, Florida as "high-risk" to reoffend at twice the rate of their white counterparts. Despite this research, 46 U.S. states currently use tools like COMPAS in courtrooms during pretrial detention, on the claim that these algorithms make sentencing more objective. It is clear, however, that using AI to aid highly sensitive decisions such as courtroom cases can exacerbate social biases. Programs like COMPAS feed into existing inequitable policing structures and perpetuate the notion that communities of color are more likely to commit a crime, leading to a reality in which they are incarcerated at disproportionate rates. This discrimination in AI goes beyond structural inequities, though, and shapes broad cultural norms, given how easily technology can disseminate discriminatory information to the general public.

Artificial intelligence is powered by data, and these tidbits of information are being created at rapid speeds due to the Internet of Things, the phenomenon in which an increasing number of everyday appliances have WiFi capability and can "communicate" with one another through the personal data they collect and store. Because that data comes from human interactions and previous human decisions across the various digital devices we use, the data collected is inherently biased. From smart fridges and home assistants like Alexa to our personal devices, private data is constantly collected, stored, and shared at astonishing speeds. Over the course of 2020, 64.2 zettabytes of data were created, and by 2025, it is projected that 180 zettabytes will have been captured. To put the pace of data generation and consumption in context: during each minute of 2020, Facebook users shared 150,000 messages, Instagram business profile ads were clicked 138,889 times, and WhatsApp users sent 41,666,667 messages. Much of this data is produced through social media interactions, whether a message sent on WhatsApp or a post shared on Instagram; billions of people around the world are active social media users, sharing private information across an average of eight social media accounts.

Herein lies a major ethical issue raised by AI: the use of inherently biased data to power tools like search algorithms, recommendation engines, and adtech networks. These tools compile and share user information with U.S. manufacturers, which creates security concerns; a data breach would expose users' private information, and already marginalized populations face heightened targeting online. While this method of data collection is convenient and accessible for companies building AI algorithms, the way the data is sourced is often discriminatory, since people from higher socioeconomic backgrounds have more access to online platforms and are therefore overrepresented. As with the implementation of COMPAS, social media platforms can exacerbate discriminatory practices toward minority groups by reinforcing biased beliefs, and social media giants like Facebook now face condemnation for their role in sowing division on the internet. This, combined with the fact that those working in the field of AI do not typically come from diverse backgrounds, leads AI applications to make discriminatory choices and to cater predominantly to a narrow demographic.

Automation and the Future of Work

AI and autonomous machines are replacing workers across all sectors at a rapid rate. The rise of automation in the workplace has already displaced nearly a third of jobs across all sectors, with positions in the service and manufacturing industries suffering the largest losses. Moreover, the World Economic Forum projects that 50% of workplace tasks will be performed by machines within the next five years and that approximately 85 million jobs will be displaced as a result. Notably, the same report also predicts that AI will improve economic efficiency and usher in a new era of worker productivity, creating 97 million jobs across 26 countries. The problem with this kind of job creation is that the gains and losses will not be evenly shared: the millions of people in the U.S. alone who face displacement may never learn the skills needed to be rehired. AI will have a striking effect on the service-industry and white-collar jobs that exist everywhere, but rural workers depend on them more because such jobs make up a larger share of the rural economy. Communities that already face high levels of job insecurity will come under additional strain as these jobs are automated. Some studies estimate that the rise of automation will displace 132,000 Black workers in America. The pandemic has only accelerated workplace automation, with robots taking over service jobs in airports, malls, and restaurants. On the other hand, automation will complement roles in high-growth fields like healthcare, where there is no substitute for highly skilled practitioners, and professionals in those roles will reap large benefits from the transition to partially automated workplaces.

To curb the wealth gap widened by automation, the workforce must be reskilled and upskilled. Yet businesses are investing less in their workers: the rate of employer-sponsored job training fell by 7 percent between 2003 and 2013, even as the need for high-skilled workers rises. Additionally, only 17 percent of companies are investing in AI-specific reskilling programs, which puts both employees and employers at a disadvantage in remaining competitive. To make matters worse, there is a severe digital gap between rural and urban areas: 24% of rural adults report problems with high-speed internet access, compared with only 13% of urban adults. Substantial portions of rural America still lack the infrastructure required for high-speed internet, which makes it exceedingly difficult to bring high-growth jobs to these areas. And with little migration between cities, labor market fluidity remains low, leaving displaced workers with few places to go.

Currently, there is a massive gap between the known consequences of artificial intelligence for social biases and job loss and direct policy action; as with any novel technology, legislation lags behind innovation. It is nevertheless imperative that direct action be taken to address the social and economic ramifications of AI.

Policy Suggestions and Further Questions to Pose

Research and development into AI technologies should be encouraged so that current "blind spots" are reduced, a step that will also help protect privacy and reduce discriminatory decisions made by AI. It is essential that workplaces developing AI not only bring in a diverse group of engineers to build these algorithms but also allow experts from other fields to weigh in on important decisions about AI tools built for particular industries. These changes will bring in new perspectives, address the specific flaws that come with AI implementation, and help mitigate the rise in unemployment due to automation.

Finally, it is imperative that the U.S. build trust in AI across all demographics by increasing transparency about how AI technology works and assuring the public that it is regulated under a framework that protects consumer rights. The EU is drafting legislation on the regulation of AI that underscores the importance of trustworthy, well-developed AI tools. Previous EU technology legislation, such as the General Data Protection Regulation (GDPR), the world's strictest set of rules governing how organizations like private tech companies handle users' personal data, has already shaped global consumer privacy standards, so it is likely that this white paper on AI will have a similarly important effect on AI regulation in the U.S.
