AI Bill of Rights: Taking Accountability or Stifling Progress?

By Melis Jensen

Introduction

On Tuesday, October 4, 2022, the Biden administration unveiled a new set of guidelines, or 'Bill of Rights,' designed to regulate and supervise artificial intelligence (AI) technology in the United States. As technology has advanced exponentially over the past few decades, so has the development and use of artificial intelligence systems. Before discussing its effect on the global community and consumer market, it's important to define what artificial intelligence actually is. Artificial intelligence is the ability of a computer program to learn and reason without relying on human intelligence. Very quickly, we have seen AI systems take the place of humans in jobs involving operations and customer service. Automated and AI technologies have proven extremely beneficial in both cost and efficiency for a myriad of reasons. The use of AI technology has greatly diminished the margin of human error in data collection, substantially reduced wait times for customer service support, and allowed companies to reach a far wider audience. Despite the clear benefits, when left completely unrestrained, AI can have detrimental effects on society. In response to possible transgressions by big tech companies, the Biden administration has released its new set of guidelines to act as "bumpers" for tech companies and ensure fair use of AI technology.

This article will explore not only the various components of the new AI Bill of Rights, but also highlight the global conversation surrounding the new guidelines.

The Basics

In the initial summary of the AI Bill of Rights, the White House clarified that the intent of the blueprint is to "help guide the design, use, and deployment of automated systems to protect the American public." As the administration notes, many algorithmic systems and automated data collection services lead to inequity and bias toward marginalized populations. The lack of human oversight of data collection and customer service, as well as undetected human biases encoded into the algorithms, can further exacerbate the class divide in this country. To protect against such injustices in the world of AI, the administration has introduced five primary categories of protection and regulation within the Bill of Rights:

U.S. President Joe Biden speaks to reporters as he departs for Puerto Rico from the White House in Washington, U.S., October 3, 2022. REUTERS/Kevin Lamarque

1: Safe and Effective Systems: A user should be protected from unsafe or ineffective systems.

All systems available to the public should go through testing, risk identification and mitigation, and ongoing monitoring. In addition, automated systems should be developed in consultation with diverse communities and with experts who can identify the potential impacts of the system.

2: Algorithmic Discrimination Protections: Users should not face discrimination from algorithm technology, and systems should be designed and utilized in an equitable way.

AI technology and systems cannot contribute to unjustified differences in treatment of people based on race, color, ethnicity, sexuality, gender, or any other classification protected by law. Developers and designers must proactively ensure that communities and individuals are protected from algorithmic discrimination.

3: Data Privacy: Users should be protected from abusive data practices via built-in protections and should have agency over how their personal data is used.

Any collection of data from a user or private server should not only require consent but should also take in only the data necessary for the specific context. Developers must take care that data collection does not breach customer privacy.

4: Notice and Explanation: Users should know when an automated system is being used and understand how and why it contributes to outcomes that impact them.

It is the responsibility of developers to provide the general public with clear, concise documentation on the artificial intelligence being used. It should always be clear when AI systems are being used and how such uses may impact the public.

5: Human Alternatives, Consideration, and Fallback: Users should be able to opt out of AI technology, where appropriate, and have access to a human professional who can quickly consider and remedy problems they encounter.

An option to opt out of artificial intelligence communication or automation should be made available to users. Especially in cases of system failure, human consideration and assistance are required to remedy such technological errors.

Alondra Nelson speaks during an event at The Queen theater, Jan. 16, 2021, in Wilmington, Del. "We can and should expect better and demand better from our technologies," said Nelson, Deputy Director for Science and Society at the White House Office of Science and Technology Policy. (AP Photo/Matt Slocum, File)

Experts and Political Leaders Weigh In

Because the AI Bill of Rights is the first major step taken by the US to police artificial intelligence technology, it has become a topic of conversation among everyday civilians, tech moguls, and world leaders. The discourse has gone back and forth, with some praising the new guidelines and others vehemently dissenting.

Pros:

Some have highlighted the document's sheer length and detail as its most praiseworthy feature. The AI guidelines run 75 pages in length, with entire pages devoted to laying the groundwork for definitions of technical terms. The White House makes it clear that laying an educational foundation that is accessible to the general public is of the utmost importance. Marc Rotenberg, president of the Center for AI and Digital Policy, a nonprofit that tracks AI policy, finds the Bill of Rights an impressive feat. He praises, "This is a very good starting point to move the US to a place where it can carry forward on [its] commitment." Matt Schruers, president of the tech lobby CCIA, expresses his appreciation of the administration's "direction that government agencies should lead by example in developing AI ethics principles, avoiding discrimination, and developing a risk management framework for government technologists."

Cons:

Despite the praise for the new AI guidelines, many have also voiced criticisms. For example, some believe that regulating AI technology would unduly stifle innovation and improvement in the technology space. Without a proper cause to do so, it may be too early to restrict a field that is growing so rapidly and providing clear benefits to society. Eric Schmidt, the former chief executive of Google, says, "I would not regulate until we have to. There are too many things that early regulation may prevent from being discovered." On the other hand, some have voiced the opposite concern about the nonbinding nature of the AI Bill of Rights. As the document itself is only a guideline offered for education and precaution, it lacks any legal authority to punish those who ignore it. For example, if the U.S. government were to learn that a private company was using AI technology in a way inconsistent with the new Bill of Rights, the document would provide no legal recourse for the government to reprimand the offending company.

AI ethics researcher Timnit Gebru — a well-respected pioneer in her field and one of the few Black women leaders in the industry — said on December 2 that Google fired her after blocking the publication of her research around bias in AI systems. Kimberly White/Getty Images for TechCrunch.

In addition to being a nonbinding document, the AI Bill of Rights has been criticized as outdated when compared on a global scale. For example, the European Union is considered to be steps ahead of US efforts in the automated technology space. The EU General Data Protection Regulation was enacted four years ago and imposes steep fines on companies that fail to comply with its limits on personal data gathering. The regulation has forced major technology giants to strengthen their compliance and data-handling practices. In fact, under the EU's proposed AI Act, fines for AI violations could reach up to 6% of a company's global revenue. Similarly, China has introduced its own AI legislation to hold businesses accountable. In March 2022, China introduced more stringent regulations on algorithms and AI systems. Primarily, this legislation requires that the technology operate ethically and transparently, while also providing options for users to opt out of automated systems.

Next Steps

Looking ahead to the regulation of automated systems, the White House has hinted at possible new policies governing the creation of AI technology. One existing example of binding action is Executive Order 13960 on Promoting the Use of Trustworthy Artificial Intelligence in the Federal Government. Unlike the blueprint, executive orders are binding and can carry legal consequences for those who use AI technology outside their rules.

My View

While I believe that the new AI Bill of Rights is an important foundation for proper technological regulation, I also believe it is long overdue. As a leader in artificial intelligence production and the headquarters of many tech giants, the US should have led the movement on AI regulation. The new guidelines are so untimely that they are already nearly outdated given the rapidly developing AI tech space. I feel strongly that quicker action must be taken at the federal level to regulate tech companies and avoid leaving entire domains unrestricted. In addition, the lack of binding legislation holding the tech industry responsible for violations of the new AI guidelines will quickly prove disastrous. Without reprimand or consequences for violations by big private corporations, there is little that can truly be done to supervise artificial intelligence. Without wanting to undermine the detail and educational value of the new AI Bill of Rights, I believe that local and federal governments must act faster and with more gumption to ensure that tech companies do not take advantage of the fact that AI and automated systems are relatively uncharted territory.

Melis Jensen is a Mechanical Engineering student at SEAS. She joined JSTEP because she thinks the intersection of STEM with policy and ethics is a fairly overlooked field and topic of conversation. From her past experience working with political organizations, she thinks a major thing lacking within these organizations' teams is a focus on how engineering and science play a role in political decision-making. Often, advances in medicine, science, or engineering can have negative effects on already marginalized populations, and she feels that the ethics of the STEM community should be an ongoing, global conversation.

Columbia JSTEP
Columbia Journal of Science, Tech, Ethics, and Policy

Providing a space for interdisciplinary collaboration in writing, research, and creative solution-building to complex issues of the present and future.