The Many Ways Government Already Regulates Artificial Intelligence

Adam Thierer
10 min read · Jun 2, 2023


One of the most bizarre things about the current debate over artificial intelligence (AI) and its governance is the way so many people seem to assume that algorithmic and robotic systems are developing in a policy vacuum, or even a state of near anarchy. This perspective is laughably wrong, as I pointed out in my recent R Street Institute report, “Flexible, Pro-Innovation Governance Strategies for Artificial Intelligence.” At the end of that study, I provided a short summary of the many existing tools and methods available to address AI and robotic technologies and the potential risks associated with them.

Just as a reminder, government in the United States is absolutely massive. The U.S. federal government alone has over 2.1 million civilian employees working at 15 Cabinet agencies, 50 independent federal commissions, and over 430 federal departments, agencies and sub-agencies altogether. Therefore, the notion that we have no “state capacity” to address AI issues is preposterous. A huge number of agencies and officials are actively looking into these issues, and many have already acted to address AI- and robotics-related concerns.

To get a better feel for just some of the ways that government can (or already does) address AI risks, here’s an excerpt from my new R Street Institute study, which appears on pages 33–36 of the report. Please consult the full report for more details and context.

______________

The United States does not have a Federal Computer Commission or Bureau of Consumer Electronics, for example, but when things go wrong, there are many legal remedies available to address problems in those fields. In these and many other industries, innovators are generally free to develop new products. When harms develop, they are addressed in a remedial fashion. In a similar way, existing legal remedies can help address risks associated with algorithmic and robotic systems. Some of these solutions include:

Federal and state consumer protection statutes and agencies: The FTC possesses broad consumer protection powers to police “unfair or deceptive acts or practices in or affecting commerce.”[3] Over the past decade, the agency has used this authority to address many data-security matters and, in 2022, issued a major report highlighting its concerns with various AI risks.[4] Thus, when defective or deceptive algorithmic technologies create substantial harm to consumers, the FTC can intervene.[5] An attorney with the FTC’s Division of Advertising Practices was even more hard-nosed about this in a February 2023 blog post, asserting, “[i]f you think you can get away with baseless claims that your product is AI-enabled, think again […] In an investigation, FTC technologists and others can look under the hood and analyze other materials to see if what’s inside matches up with your claims.”[6] Meanwhile, state Attorneys General and state consumer protection agencies also routinely address unfair practices and continue to advance their own privacy and data security policies, some of which are more stringent than federal law.

Product recall authority: Several regulatory agencies in the United States possess recall authority that allows them to remove products from the market when certain unforeseen problems manifest. For example, the National Highway Traffic Safety Administration (NHTSA), FDA and Consumer Product Safety Commission (CPSC) all possess broad recall authority that can address risks that develop from algorithmic or robotic systems.[7] In February 2023, for example, the NHTSA mandated a recall of Tesla’s Full Self-Driving software, and the agency required an over-the-air software update to more than 360,000 vehicles that had the software package.[8] While the NHTSA’s and FDA’s recall authority is targeted to vehicle and medical technologies, respectively, the CPSC can recall any consumer product that contains a defect if it poses “a substantial risk of injury to the public to warrant such remedial action.”[9] A July 2022 poll commissioned by the CPSC revealed that 80 percent of consumers report doing everything that a recall notice encourages them to do to address a safety lapse.[10] While encouraging, that result could be further improved through education and awareness efforts. The CPSC has already issued staff reports highlighting the many policy tools the agency has to address emerging technology risks.[11] [Note: I have started a running list of AI-related agency activity here.]

Common law remedies: Various court-enforced common law remedies exist that can address AI risks. These include product liability; negligence; design defects law; failure to warn; breach of warranty; property law and contract law; and other torts.[12] Common law evolves to meet new technological concerns and incentivizes innovators to make their products safer over time to avoid lawsuits and negative publicity.[13] It also evolves to incorporate new social and ethical norms. “[W]hen confronted with new, often complex, questions involving products liability, courts have generally gotten things right,” notes a Brookings Institution scholar. He goes on to explain that “[p]roducts liability law has been highly adaptive to the many new technologies that have emerged in recent decades” and, by extension, it will adapt to other technologies and developments as cases and controversies come before the courts.[14] This also creates powerful incentives for developers to improve the safety and security of their systems and avoid liability, unwanted press attention and lost customers. The question is not whether common law liability will come to cover AI and robotics; it is whether it will impose too great a burden because the United States tends to have a highly litigious legal system.[15]

Property and contract law: Federal and state laws covering contractual rights and property rights can address many perceived harms associated with algorithmic technologies. Property law already governs trespass claims, for example, which will come in handy as drones and other autonomous robotic systems proliferate. Contract law can also help ensure that developers live up to the promises they make to the public, including to other business customers. Of note, class-action lawsuits will become more common if firms fail to honor their contractual terms.

Insurance and other accident-compensation mechanisms: Many organizations have improved their digital cybersecurity practices “driven by demands from insurance underwriters and a better understanding of the risks of ransomware following high-profile attacks.”[16] The market for highly tailored algorithmic insurance instruments is growing — and not just to address cybersecurity risks.[17] New insurance instruments will likely cover even broader, more amorphous algorithmic concerns beyond physical safety risks. Although broad-based algorithmic regulation is unlikely in the short term, lawsuits alleging algorithmic harm are likely to proliferate in the future. As that occurs, insurance markets will continue to evolve and respond, especially for industrial robotics.[18]

Existing statutes and agencies: Many long-standing statutes and agency rules exist that can address concerns about algorithmic bias, privacy or security. Regarding accusations of algorithmic bias and discrimination, the United States has a wide array of broad-based civil rights statutes that apply, including the Civil Rights Act, the Age Discrimination in Employment Act and the Americans with Disabilities Act.[19] Targeted financial laws could address discrimination in the allocation of credit, including the Fair Credit Reporting Act and the Equal Credit Opportunity Act. The Fair Housing Act already addresses discrimination in real estate.[20] On the privacy front, laws such as the Health Insurance Portability and Accountability Act, the Gramm-Leach-Bliley Act and the Children’s Online Privacy Protection Act already govern data flows.[21] Moreover, the United States already has a veritable alphabet soup of regulatory agencies that oversee technological developments in the various sectors touched by algorithmic and robotic developments. These laws, regulations and agencies can provide a backstop when AI developers fail to live up to any claims they make about safe, effective and fair algorithmic systems.[22] If needed, Congress could always tweak existing laws and regulations should novel or persistent problems develop. Many states also have laws that could apply to algorithmic or robotic systems. For example, “Peeping Tom” laws and antiharassment statutes prohibit spying into homes and other private spaces.[23] Before enacting new laws, policymakers should consider how such existing policies might already cover new technological developments.

Endnotes:

[1] John Villasenor, “Soft law as a complement to AI regulation,” Brookings, July 31, 2020. https://www.brookings.edu/research/soft-law-as-a-complement-to-ai-regulation.

[2] Walter G. Johnson, “Governance Tools for the Second Quantum Revolution,” Jurimetrics 59:4 (April 27, 2019), p. 511. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3350830.

[3] 15 U.S.C. § 45(a).

[4] “FTC Report Warns About Using Artificial Intelligence to Combat Online Problems,” Federal Trade Commission, June 16, 2022. https://www.ftc.gov/news-events/news/press-releases/2022/06/ftc-report-warns-about-using-artificial-intelligence-combat-online-problems.

[5] Inioluwa Deborah Raji et al., “The Fallacy of AI Functionality,” Cornell University, June 20, 2022. https://arxiv.org/abs/2206.09511.

[6] Michael Atleson, “Keep your AI claims in check,” Federal Trade Commission, Feb. 27, 2023. https://www.ftc.gov/business-guidance/blog/2023/02/keep-your-ai-claims-check.

[7] “Recalls, Corrections and Removals (Devices),” U.S. Food & Drug Administration, Sept. 29, 2020. https://www.fda.gov/medical-devices/postmarket-requirements-devices/recalls-corrections-and-removals-devices.

[8] David Shepardson, “Tesla recalls 362,000 U.S. vehicles over Full Self-Driving software,” Reuters, Feb. 16, 2023. https://www.reuters.com/business/autos-transportation/tesla-recalls-362000-us-vehicles-over-full-self-driving-software-2023-02-16.

[9] United States Consumer Product Safety Commission, Recall Handbook (March 2012), pp. 2, 12.

[10] “Qualtrics Final Report on Consumer Attitudes and Behaviors Regarding Product Safety,” United States Consumer Product Safety Commission, July 26, 2022. https://www.cpsc.gov/content/Qualtrics-Final-Report-on-Consumer-Attitudes-and-Behaviors-Regarding-Product-Safety.

[11] “Artificial Intelligence and Machine Learning In Consumer Products,” United States Consumer Product Safety Commission, May 19, 2021. https://www.cpsc.gov/About-CPSC/artificial-intelligence-and-machine-learning-in-consumer-products; “Potential Hazards Associated with Emerging and Future Technologies,” United States Consumer Product Safety Commission, Jan. 18, 2017. https://www.cpsc.gov/content/Potential-Hazards-Associated-with-Emerging-and-Future-Technologies.

[12] “Torts of the Future II: Addressing the Liability and Regulatory Implications of Emerging Technologies,” U.S. Chamber Institute for Legal Reform, April 2018. https://instituteforlegalreform.com/wp-content/uploads/2020/10/tortsofthefuturepaperweb.pdf; Richard A. Epstein, “Liability Rules in the Internet of Things: Why Traditional Legal Relations Encourage Modern Technological Innovation,” Hoover Institution, Jan. 8, 2019. https://www.hoover.org/research/liability-rules-internet-things-why-traditional-legal-relations-encourage-modern.

[13] Donald G. Gifford, “Technological Triggers to Tort Revolutions: Steam Locomotives, Autonomous Vehicles, and Accident Compensation,” Journal of Tort Law 11:1 (Sept. 5, 2018), pp. 71–143. https://doi.org/10.1515/jtl-2017-0029.

[14] John Villasenor, “Who is at fault when a driverless car gets in an accident?,” UCLA Newsroom, May 2, 2014. https://newsroom.ucla.edu/stories/who-is-at-fault-when-a-driverless-car-gets-in-an-accident.

[15] Adam Thierer, “When the Trial Lawyers Come for the Robot Cars,” Slate, June 10, 2016. https://slate.com/technology/2016/06/if-a-driverless-car-crashes-who-is-liable.html.

[16] Robert McMillan et al., “Hackers Extort Less Money, Are Laid Off as New Tactics Thwart More Ransomware Attacks,” The Wall Street Journal, Feb. 22, 2023. https://www.wsj.com/articles/ransomware-attacks-decline-as-new-defenses-countermeasures-thwart-hackers-23b918a3.

[17] Jeff Qiu, “Improving U.S. Cybersecurity by Solving Issues in the Cyber Insurance Market Part One: Current State and Challenges,” R Street Institute, Aug. 8, 2022. https://www.rstreet.org/commentary/improving-u-s-cybersecurity-by-solving-issues-in-the-cyber-insurance-market-part-one-current-state-and-challenges; Josephine Wolff, “A Brief History of Cyberinsurance,” Slate, Aug. 30, 2022. https://slate.com/technology/2022/08/cyberinsurance-history-regulation.html.

[18] Andrea Bertolini et al., “On Robots and Insurance,” International Journal of Social Robotics 8 (March 3, 2016), pp. 381–391. https://link.springer.com/article/10.1007/s12369-016-0345-z.

[19] “Civil Rights Act (1964),” National Archives, last accessed March 3, 2023. https://www.archives.gov/milestone-documents/civil-rights-act; Keith E. Sonderling et al., “The Promise and The Peril: Artificial Intelligence and Employment Discrimination,” University of Miami Law Review 77:1 (2022), p. 6. https://repository.law.miami.edu/umlr/vol77/iss1/3; “The Americans with Disabilities Act (ADA) protects people with disabilities from discrimination,” U.S. Department of Justice, last accessed March 3, 2023. https://www.ada.gov.

[20] “The Fair Housing Act,” U.S. Department of Justice, last accessed March 3, 2023. https://www.justice.gov/crt/fair-housing-act-1.

[21] “Health Insurance Portability and Accountability Act of 1996 (HIPAA),” Centers for Disease Control and Prevention, last accessed March 3, 2023. https://www.cdc.gov/phlp/publications/topic/hipaa.html; “Gramm-Leach-Bliley Act,” Federal Trade Commission, last accessed March 3, 2023. https://www.ftc.gov/business-guidance/privacy-security/gramm-leach-bliley-act; “Children’s Online Privacy Protection Rule (“COPPA”),” Federal Trade Commission, last accessed March 3, 2023. https://www.ftc.gov/legal-library/browse/rules/childrens-online-privacy-protection-rule-coppa.

[22] Joshua New and Daniel Castro, “How Policymakers Can Foster Algorithmic Accountability,” Center for Data Innovation, May 21, 2018. https://datainnovation.org/2018/05/how-policymakers-can-foster-algorithmic-accountability.

[23] See, e.g., Va. Code Ann. § 18.2–130 Peeping or spying into dwelling or enclosure.

_________________


Adam Thierer

Analyst covering the intersection of emerging tech & public policy. Specializes in innovation & tech governance. https://www.rstreet.org/people/adam-thierer