Artificial Intelligence Primer: Definitions, Benefits & Policy Challenges

Adam Thierer
Dec 2, 2022


[Version 2.4 — June 2023]

Many books and reports have been written about the nature and history of artificial intelligence and machine learning technologies.[1] This brief survey offers some basic definitions and highlights the promise of these technologies, as well as some of the unique policy challenges they create.

Definitional Difficulties Complicate AI Governance

Defining the nature and scope of artificial intelligence is notoriously tricky, and this is the first of many factors complicating its governance. The Stanford Encyclopedia of Philosophy speaks of the “remarkably difficult, maybe even eternally unanswerable” questions involved in formulating a consensus definition for AI.[2] “There is no single universally accepted definition of AI, but rather differing definitions and taxonomies,” a U.S. Government Accountability Office report concludes.[3]

At the most basic level, however, artificial intelligence involves the exhibition of intelligence by a machine. Machine learning refers to the processes by which a computer can train and improve an algorithm or computer model without step-by-step human involvement. An algorithm is “similar to a recipe for a dish,” notes computer scientist Ethem Alpaydin, in that it is “a sequence of instructions that are carried out to transform the input to the output.”[4] More simply, an algorithm is a “set of instructions that describe the way to solve particular problems.”[5] When people speak of regulating AI or ML, they are, at root, suggesting the need to control algorithms and algorithmic processes, because those are at the heart of all machine learning. Moreover, because AI and ML are computational sciences, to regulate them is, at some level, to regulate computing and mathematical modeling techniques. These realities also complicate AI governance.
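To make the recipe analogy concrete, here is a minimal sketch in Python (the function names and toy data are my own illustrative choices, not drawn from Alpaydin) contrasting a conventional hand-coded algorithm with a machine-learned one. In the first, a human writes the rule; in the second, a training loop infers the rule from example data without step-by-step human involvement.

```python
# A conventional algorithm: a human writes the rule explicitly.
def fahrenheit_to_celsius(f):
    return (f - 32) * 5 / 9  # fixed, step-by-step instructions

# A machine-learned "algorithm": the rule is inferred from examples.
# Toy data following y = 2x; the model must discover the factor of 2.
examples = [(0.0, 0.0), (1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]

weight = 0.0          # model parameter, initially arbitrary
learning_rate = 0.05
for _ in range(200):  # training loop: repeatedly nudge weight to cut error
    for x, y in examples:
        error = weight * x - y
        weight -= learning_rate * error * x  # gradient-descent step

print(round(weight, 3))  # ~2.0: the rule was learned, not programmed
```

The training loop is what proposals to “control algorithms” implicitly target, and it is ordinary computation and mathematics, which is why regulating it means regulating computing itself.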

The effectiveness of most AI/ML tools depends upon enormous computing power (or compute for short), large data sets (so-called big data), and powerful computational analysis tools that power deep learning models and other AI learning methods.[6] These building blocks of AI, especially big data, raise policy issues in their own right, especially on privacy and data security grounds. Indeed, many of today’s AI governance discussions are simply extensions of policy debates that have been going on for many years in big data circles.[7]

Finally, so-called foundational models, which are “models trained on broad data… that can be adapted to a wide range of downstream tasks,”[8] are capturing great attention today because they are akin to a digital Swiss Army knife that can be widely used to accomplish a number of different tasks.[9] Popular generative AI systems built on foundational models, like DALL-E, GPT-4, and LaMDA, let users create AI-powered art, scripts, and chatbot conversations. Generative systems are algorithms “that can be used to create new content, including audio, code, images, text, simulations, and videos.”[10] Large language models (LLMs) are one type of foundational model; OpenAI’s ChatGPT, for example, is a generative system built on top of the GPT-4 LLM. Foundational models hold the potential to help democratize the use of AI, but in the process give rise to various new risks of misuse — misinformation, deception, copying, etc. — which also makes AI governance more complicated.[11]
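As a rough illustration of that Swiss Army knife quality, consider how a single foundational model can be steered to very different downstream tasks by prompting alone. The sketch below assumes the openai Python package as it existed in mid-2023 (pre-v1.0) and a valid API key; the prompts are arbitrary examples of my own.

```python
# Sketch: one foundational model, many downstream tasks.
# Assumes: `pip install openai` (pre-v1.0 API, circa mid-2023) and an API key.
import openai

openai.api_key = "sk-..."  # placeholder; supply your own key

def ask(prompt):
    """Send a single-turn prompt to the model and return its reply."""
    response = openai.ChatCompletion.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# The same underlying model handles translation, summarization, and coding:
print(ask("Translate into French: 'Good morning, how are you?'"))
print(ask("Summarize in one sentence why the sky is blue."))
print(ask("Write a Python function that reverses a string."))
```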

“Strong” vs. “Weak” AI

There are both weak and strong forms of AI, but even these terms are contested and can be confusing.

Weak AI is a bit of a misnomer because weak AI can be quite powerful; it is just narrower in its application. Weak AI applications can often excel at doing one specific task extraordinarily well, such as playing games, offering language translation, or even operating certain vehicles or machines without much human assistance. Many people use AI-enabled technologies every day without ever thinking of them as artificial intelligence. Voice-activated digital assistants like Apple’s “Siri” and Amazon’s “Alexa” are the equivalent of AI-enabled co-pilots in our lives, as are the digital mapping and navigation tools that we rely on to more easily find our destinations.

Strong AI typically refers to broad-based machine capabilities and is sometimes also called artificial general intelligence (AGI), reflecting near-human levels of comprehension or ability. AGI, which is also sometimes more ominously referred to as superintelligence,[12] tends to capture considerable public attention because it “conjures up a vast array of doom-laden scenarios.”[13] AGI also figures prominently in the plots of many dystopian depictions of artificial intelligence found in popular culture, including many science-fiction books, movies, and television shows.[14]

This has often led to over-hyping of AI’s potential to attain human-like capabilities,[15] with sensationalism and speculation often dominating discussions.[16] It doesn’t help that both supporters and critics of powerful AGI sometimes play up predictions of AI superintelligence and speak in fatalistic terms about the coming of a “singularity,” or moment in the future when machine intelligence surpasses that of humans. For example, flamboyantly titled books by AI boosters like Ray Kurzweil (The Singularity Is Near) and detractors like Nick Bostrom (Superintelligence: Paths, Dangers, Strategies) reflect an air of inevitability about machines coming to possess greater intelligence than humans, for better or worse.

However, the majority of AI experts agree that such superintelligence predictions are wildly overplayed and that there is no possibility of machines gaining human-equivalent knowledge any time soon — or perhaps ever.[17] “In any ranking of near-term worries about AI, superintelligence should be far down the list,” says AI expert Melanie Mitchell, author of Artificial Intelligence: A Guide for Thinking Humans.[18] “A close inspection of AI reveals an embarrassing gap between actual progress by computer scientists working on AI and the futuristic visions they and others like to describe,” says Erik Larson, author of The Myth of Artificial Intelligence: Why Computers Can’t Think the Way We Do.[19] Larson refers to this extreme thinking about superintelligent AI as “technological kitsch,” or exaggerated sentimentality and melodrama that is untethered from reality.[20] Whether it is the proponents of “apocalyptic or fearsome AI” or of “utopian or dreamy AI,” both are guilty of oversimplifying complicated ideas, he says.[21] Andrew Ng, a leading AI scientist, has humorously observed that “[w]orrying about killer AI is like worrying about overpopulation on Mars.”[22]

Extreme speculation about superintelligent AI reflects an underappreciation of the complexity of actual human intelligence and our unique ability to navigate so many novel situations.[23] As an important report from Stanford University noted, most AI experts are still struggling to figure out how “to imbue machines with common sense abilities” and to find “methods [that] can scale to the more diverse and complex problems the real world has to offer.”[24]

But fears of superintelligent AI grab headlines and capture public attention, raising alarm about the existential risks such systems might pose to civilization.[25] A June 2023 Time magazine special report on AI issues featured a cover warning of “The End of Humanity.” A few months prior to that, Time ran an essay by one notable AI critic who called for airstrikes on supercomputing datacenters and said that governments should be “willing to run some risk of nuclear exchange if that’s what it takes to reduce the risk of large AI training runs.”[26]

Such fears have generated panicky headlines in the past, too. When the public saw sophisticated AI programs defeat the world’s best players of games like chess and Go, it raised fears about how machines had already come to possess human-level intelligence. For example, when IBM’s “Deep Blue” famously defeated chess grandmaster Garry Kasparov in 1997, a headline in Newsweek declared it to be “The Brain’s Last Stand,”[27] and many other media reports engaged in dystopian handwringing about the triumph of machines over humanity. Similar fears were raised when DeepMind’s AlphaGo beat Go champion Lee Sedol in 2016.[28] Yet, neither of these programs possessed the general capacity to do much beyond what they were trained to do.[29] They could not, for example, teach themselves how to master many other games, including simple ones like checkers or poker. This is the way almost all AI systems work today: they are good (and getting better) at one task, but incapable of high-level human-like reasoning across many simple activities. In this sense, they are narrow, not broad, AI applications.

Ironically, 20 years after losing his famous match with Deep Blue, Kasparov authored a book, Deep Thinking: Where Machine Intelligence Ends and Human Creativity Begins, seeking to debunk dystopian thinking about machine-learning and AI. He noted that, “doomsaying has always been a popular pastime when it comes to new technology” and that, “[w]ith every new encroachment of machines, the voices of panic and doubt are heard, and they are only getting louder today.”[30]

AI’s Multiple Dimensions Also Complicate Its Governance

Debates continue to rage over both how to conceptualize AI and how to advance algorithmic capabilities. One notable 2014 study spoke of the need to embrace AI’s “anarchy of methods” when it comes to teaching machines how to think, because there are so many subfields, techniques, and abstractions of concepts.[31] Thomas Edison once spoke of how electricity was a “field of fields” that would transform life in many ways. The same is true of AI, and this is another factor complicating its governance.

AI/ML builds upon knowledge and capabilities developed through many other important technologies and sectors, including computing, microprocessors, the internet, high-speed broadband networks, data storage/processing systems, GPS and geolocation, sensors, and others. Hal Varian, chief economist at Google, observes that we live in an era of rapid-fire combinatorial innovation in which new technologies are building on top of one another in a symbiotic fashion, further accelerating their development and sophistication.[32] This is precisely what powers AI/ML. Many other scientific fields of study are closely related to AI/ML. “Machine learning is at the intersection of statistics and computer science, occasionally also taking inspiration from cognitive science and neuroscience,” says Alpaydin.[33] These factors also complicate AI governance because attempts to regulate AI/ML could have profound implications for many other technologies, sectors, and fields of science. Thus, when someone blithely suggests that “we should take steps to control AI,” they are (perhaps unknowingly) recommending that we take steps to control or influence many other things alongside it.

By extension, AI/ML is set to become the “most important general-purpose technology of our era.”[34] General-purpose technologies come to be intertwined with almost every other sector of the economy and used ubiquitously throughout society.[35] For example, AI will be used by almost all organizations to help improve analytics and marketing, enhance customer service, and boost sales or performance in various new ways. And it will completely upend the way production and work are done in countless fields and professions.

This is what makes AI so important for future innovation and growth, but this fact also complicates its governance.[36] As with electronics, computing, and the internet before it, it is easier to imagine how to govern AI’s individual components or outputs than the broader concept itself. This is one reason that governance frameworks for things like driverless cars, drones, and robotics are developing more rapidly than overarching regulation for general AI.

Finally, AI also raises special governance challenges because it is a dual-use (and often open source) technology that, like chemical and nuclear technologies before it, has beneficial peaceful uses but also potentially concerning military or law enforcement applications.[37] This fact is particularly important when discussing so-called existential risk, which I address elsewhere.[38] Meanwhile, many current regulatory discussions focus on “affective computing” and biometric technologies, such as facial recognition, which monitor human attributes and, when used improperly, raise serious security and privacy risks.[39]

AI Promises to Drive Growth in Many Sectors

Over the past half century, there have been waves of both hype and hysteria about the prospects for AI advancement.[40] Much of this was driven by both irrational exuberance and fear about high-level AGI that never came about. As a result, AI historians often speak of the many AI “springs” and “winters” that have come and gone. Others describe these cycles as AI “booms and busts.”

It did not help that some of AI’s early pioneers over-exuberantly predicted that powerful AGI would be with us very quickly. In the late 1960s, for example, noted AI researchers confidently predicted that “machines will be capable, within twenty years, of doing any work a man can do” (Herbert A. Simon), and that “[w]ithin a generation … the problem of creating ‘artificial intelligence’ will substantially be solved” (Marvin Minsky). Such exuberance was replaced by pessimism in the 1970s and a resulting “winter” period for AI research and investment.

Today, however, AI is generally thought to be in the midst of another “spring” period as enthusiasm grows around specific capabilities and applications. The power of AI/ML technologies is already all around us in products and services such as speech and image recognition tools on our smartphones as well as the recommender systems that many media providers and other companies use to tailor goods, services, and content to our interests.

Other times, AI/ML is operating behind the scenes to help with fraud and spam detection, computer virus filtering, content management/moderation,[41] mapping/navigation,[42] travel planning,[43] weather forecasting and natural disaster prediction,[44] warehouse automation/inventory management,[45] supply chain management,[46] and various other office logistics.[47] For example, in 2021, McKinsey & Company estimated that “[s]uccessfully implementing AI-enabled supply-chain management has enabled early adopters to improve logistics costs by 15 percent, inventory levels by 35 percent, and service levels by 65 percent, compared with slower-moving competitors.”[48] These productivity enhancements are likely to accelerate as AI/ML techniques are further refined.

AI and ML capabilities also power most of the devices that make up the so-called Internet of Things, or various connected “smart” devices, including many wearable technologies and other devices with embedded sensors.[49] Another related term here is ambient computing[50] or ubiquitous computing, which essentially means “using computers without knowing that you are using one,” or at least without explicitly calling them computers when one is using smart systems.[51] These technologies have powerful health and medical applications, among other things.

Meanwhile, various AI-powered robotic technologies are already at work in many industrial sectors.[52] AI, ML, and advanced robotics technologies promise to revolutionize many fields, including financial services,[53] transportation,[54] retail,[55] agriculture,[56] entertainment,[57] energy and climate solutions,[58] endangered species protection,[59] education,[60] aviation,[61] the automotive industry,[62] and many others.[63] Going forward, every segment of the economy will be touched by AI and robotics in some fashion, and it should be equally clear that public policy in these fields will be transformed in the process. Eventually, all policy will involve AI policy at some level.

The potential exists for AI to drive explosive economic growth.[64] According to Grand View Research, a market research and consulting company based in India and the US, the global artificial intelligence market was valued at USD 93.5 billion in 2021 and is projected to expand at a compound annual growth rate of 38.1% from 2022 to 2030.[65] Many other studies forecast that “AI will have a significant economic impact” on growth and productivity.[66] A 2018 study by McKinsey consultants estimated that “AI has the potential to deliver additional global economic activity of around $13 trillion by 2030, or about 16 percent higher cumulative GDP compared with today. This amounts to 1.2 percent additional GDP growth per year,” the report concluded.[67] Even if AI’s economic impact falls far short of those estimates, it would still generate enormous growth opportunities across many segments of the economy.
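For readers who want to sanity-check such projections, compound annual growth works like compound interest. Here is a quick sketch, assuming the 2021 figure as the base and nine years of growth through 2030 (the variable names are my own):

```python
# Compound annual growth: value_n = value_0 * (1 + rate) ** years
base_value = 93.5   # global AI market in 2021, USD billions (Grand View Research)
cagr = 0.381        # 38.1% compound annual growth rate
years = 9           # 2022 through 2030

projected = base_value * (1 + cagr) ** years
print(f"~${projected:,.0f} billion by 2030")  # roughly $1,700 billion
```

In other words, the projection implies a global AI market on the order of $1.7 trillion by decade’s end, an eighteen-fold increase in nine years.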

But it is what AI will mean for every individual that matters most. AI has the ability to help people improve their health, extend their lives, expand transportation options, avoid accidents, improve community safety, enhance educational opportunities, access superior financial services, and much more. AI-driven machines and robots will assist with many dangerous jobs, thus making many workplaces safer.[68]

Case Study: AI’s Potential for Medicine and Health Care

Consider what AI is already accomplishing in the field of health care and the practice of medicine.[69] Increasingly powerful algorithmic systems — often combined with new wearable technologies — are already helping many people better monitor their health and fitness. More sophisticated AI tools are allowing doctors and scientists to create highly personalized care options and develop new medical treatments tailored to the unique needs of each patient.[70] As the two medical experts who authored the new book The Age of Scientific Wellness have noted:

those who fold these systems into their practices will be doing their patients (and themselves) a great service. At their best, they are akin to having not one expert but thousands upon thousands, all working together at top speed. Because AI is generally inexpensive to run once it has been developed, the potential for optimizing care and making it radically cheaper is striking.[71]

AI technologies are already having a profound impact on public health. In 2022, for example, an AI technology called AlphaFold from DeepMind was able to model the structure of nearly all known proteins, which represented “a significant advance in biology that will accelerate drug discovery and help address problems such as sustainability and food insecurity.”[72] Researchers at Meta AI (formerly Facebook AI Research) have built a competing ML-created database of 617 million predicted protein structures.[73]

AI, ML, and robotics are driving many other major medical advances today, and are becoming a crucial part of early detection of various ailments and diseases.[74] “Artificial-intelligence algorithms are processing vast troves of data in electronic medical records, searching for patterns to predict future outcomes and recommend treatments,” notes a Wall Street Journal medical reporter.[75] “They are creating early-warning systems to help hospital staff spot subtle but serious changes in a patient’s condition that aren’t always visible or noticed in a busy unit, and predicting which patients about to be discharged from the hospital are at highest risk of being readmitted.”[76]

Here are some other concrete examples of how AI, ML, robotics, and algorithmic systems are already helping to improve health outcomes:

· Organ donation: In the field of organ donation, “[p]aired kidney donation is one of the great success stories of artificial intelligence,” helping doctors and patients because it takes “an incredibly complex problem and solves it faster and with fewer errors than humans can, and as a result saves more lives.”[77]

· Heart attack detection & treatment: AI and ML tools are helping detect and treat heart disease and heart attacks, a leading cause of death globally.[78] Scientists at Cedars-Sinai developed an algorithmic tool that can quantify coronary plaque buildup in five to six seconds, compared to at least 25 to 30 minutes before.[79] This will greatly improve the ability to predict who will have a heart attack. Other researchers have developed AI tools to help improve personalized treatment for women who have had heart attacks.[80] Women who suffer a heart attack have a higher mortality rate than men, often because their symptoms are not properly understood or diagnosed. Meanwhile, the British National Health Service recently started using a new AI tool that can detect heart disease in just 20 seconds while patients are in an MRI scanner, compared with the 13 minutes or more it usually takes doctors to manually analyze images after a scan is performed.[81]

· Cancers: Cancer is the second leading cause of death behind heart disease, claiming 602,350 American lives in 2020.[82] AI and ML-enabled technologies are poised to help reduce that staggering death toll. Mayo Clinic researchers have shown how ML models can help diagnose and treat pancreatic cancer at an earlier stage.[83] Pancreatic cancer is the third leading cause of cancer death, claiming 46,774 lives in 2020.[84] British scientists have also recently reported on new AI software that can spot signs of pre-cancer during endoscopies in 92 percent of patients, which could significantly lower deaths from oesophageal cancer.[85] AI/ML techniques are also helping with early detection and treatment of lung cancer,[86] breast cancer,[87] brain cancer,[88] cervical cancer,[89] and many other types of cancer[90] (including undiagnosable cancers[91]), aided by increasingly personalized screening techniques.[92]

· Sepsis & superbugs: Recent medical studies have also documented how AI-powered monitoring systems are helping to detect antibiotic-resistant “superbugs”[93] and sepsis,[94] and will save thousands of lives each year as a result. Roughly 1.7 million adults develop sepsis every year in the U.S., and more than 250,000 of them die.[95] The use of AI “dramatically cuts the time it takes to sort through thousands of promising compounds” to fight drug-resistant pathogens, researchers find.[96]

· Paralysis: The Christopher & Dana Reeve Foundation has estimated that nearly 1 in 50 people are living with paralysis in the United States.[97] The combination of artificial intelligence and robotic technologies holds out the hope of helping paralyzed individuals regain certain motor functions.[98] In May 2023, a man who had been paralyzed from the waist down for more than a decade regained his ability to walk thanks to brain and spine implants and an AI-enabled thought decoder that helped him translate electrical brain signals into muscle movement.[99] He is now able to walk around his own home and get in and out of a car on his own. AI technologies are also helping to improve accessibility for people with disabilities in other ways.[100]

· Mental health & drug addiction: AI can help identify and address mental health problems through textual analysis, which can supplement human-based analysis at a time when there is a nationwide shortage of health care workers in this area.[101] AI tools are also being tapped to help find novel drugs that can help counter opioid addiction, which has become a chronic problem in recent years.[102]

There are many other current or potential health-related applications for algorithmic technologies, including abnormal chest X-ray detection,[103] AI-powered ultrasounds,[104] and new drug and vaccine discovery.[105] AI and ML will power other advanced learning capabilities that will help doctors and scientific researchers access and understand massive amounts of patient and health data — and then put it to even better use. These same capabilities will help innovators create new personalized health monitoring and tracking systems for the public.[106]

In 2022, I served as a member of the U.S. Chamber of Commerce “AI Commission on Competition, Inclusion, and Innovation,” a group formed to study AI governance. At a Spring 2022 field hearing, our Commission heard remarks from Dr. Tom Mihaljevic, CEO and President of the Cleveland Clinic, as well as several of his colleagues.[107] These Cleveland Clinic doctors and scientists highlighted how they were already using AI/ML to improve patient care and save lives. They noted how teams of doctors and researchers are now able to share information from tissue samples with much larger teams of medical experts, who can — with the help of algorithmic systems — work together at a distance to better understand and use all the information they will have at their fingertips. They have also developed better AI-driven methods to detect irregular heartbeats and strokes and to diagnose degenerative brain diseases (Alzheimer’s, dementia, Parkinson’s), as have other medical centers.[108]

This only scratches the surface of what AI/ML will mean for patient care.[109] Dr. Mihaljevic noted that, when he started practicing medicine in the 1980s, the overall volume of medical information doubled roughly every seven years; today it is doubling every 73 days.[110] Marcus and Davis note that seven thousand medical papers are now published every day.[111] Meanwhile, in the closely related field of medical robotics, the number of scientific papers has grown exponentially, from fewer than 10 published in 1990 to more than 5,200 in 2020, according to a recent study in Science.[112] These numbers are in line with broader trends in technical and scientific literature. “Since the scientific literature doubles roughly every 12 years, this means that of all scientific work ever produced, half of it has been produced in the last 12 years,” note the authors of The Science of Science.[113]
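Those doubling rates are easier to appreciate when converted into annual growth factors, as in this back-of-the-envelope sketch using the figures quoted above:

```python
# Convert a doubling time into how much a body of knowledge multiplies per year.
def annual_growth_factor(doubling_time_days):
    return 2 ** (365 / doubling_time_days)

# Medical information doubling every 73 days: 2^(365/73) = 2^5 = 32x per year.
print(annual_growth_factor(73))      # 32.0

# Scientific literature doubling every 12 years: modest by comparison.
print(2 ** (1 / 12))                 # ~1.059, i.e., about 5.9% annually
```

At a 73-day doubling time, in other words, the stock of medical information multiplies thirty-two-fold every year, far beyond what any individual clinician can read. That is the point Dr. Mihaljevic was making.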

The only way to take full advantage of this explosion of knowledge is with the power of machine reading and learning technologies. As the National Cancer Institute summarizes, “what scientists are most excited about is the potential for AI to go beyond what humans can currently do themselves. AI can ‘see’ things that we humans can’t, and can find complex patterns and relationships between very different kinds of data.”[114] The authors of The Age of Scientific Wellness speak of the rise of “centaur doctors” who, “combining the best parts of human intelligence and AI assistance, will be empowered to make bold medical decisions with far fewer unintended consequences.”[115] Meanwhile, AI assistants can help address the significant paperwork and filing burdens that doctors and nurses face today, which will help free up time for dealing with patients and research.[116]

In the process, AI/ML will also help share medical knowledge across far more institutions and reach more patients as a result. Dr. Mihaljevic estimated that the Cleveland Clinic, one of the most important medical research facilities in the nation, is able to reach only about 1.5% of Americans using its traditional means of care. Machine learning and artificial intelligence can change that equation by greatly expanding opportunities for Americans to access the benefits of scientific knowledge and medical care from the Cleveland Clinic and America’s many other world-class medical facilities, labs, and universities. Dr. Mihaljevic specifically highlighted how AI was the key to improving home-based medical care, which will become an essential way to help a rapidly aging population in the future, regardless of where they live.[117] AI will also become crucial for various surgeries, both by improving outcomes when operations are necessary (often through robot-assisted surgery)[118] and, better yet, by avoiding the need for invasive procedures altogether.[119] Robotic surgery at a distance is now also becoming possible thanks to recent advances.[120]

For these reasons, policymakers should not underestimate the importance of AI/ML technology, and they must work diligently to ensure that America remains a leader in this field. While some experts predict another AI winter could be coming following some notable narrow AI disappointments, they oftentimes fail to identify how public policy influences that outcome.[121] The overall amount of innovation we can expect to flow from this space is fundamentally tied up with the question of whether America creates the right innovation culture for artificial intelligence.[122] To achieve its full potential and bring about the “AI revolution in medicine” that some predict, America will need to set its policy defaults in a way that encourages innovation while also addressing the many legitimate concerns about various AI capabilities.[123]

The Current AI Policy Landscape

Artificial intelligence is increasingly the focus of heated social and political debates, and it is poised to become an all-encompassing policy concern in the coming months and years.[124] As computational technologies come to affect every facet of our lives, legislative and regulatory interest in algorithmic systems will grow rapidly.

Many academics and policymakers are seeking ways to achieve “AI alignment” — that is, to make sure that algorithmic systems promote human values and well-being.[125] The process of embedding and aligning ethics in AI design is not static, however. Alignment is an ongoing, iterative process influenced by many factors and values. As will be noted, these different values and policy priorities can come into conflict. Meanwhile, AI policy includes many distinct applications and subsectors, each with its own nuances. Several of the top algorithmic policy issues currently being debated are identified here.

Two Types of Potential AI Regulation: Broad-based or Targeted

Before outlining some of those major AI policy concerns, it is worth highlighting how algorithmic regulation could take two forms: broad-based or targeted. Broad-based algorithmic regulation would address the use of these technologies in a holistic fashion across many sectors and concerns. For example, Congress considered an Algorithmic Accountability Act in 2022 that would have imposed restrictions on any larger company that “deploys any augmented critical decision process.”[126] The act would have required developers to file “algorithmic impact assessments” with a new Bureau of Technology within the Federal Trade Commission (FTC). By contrast, targeted algorithmic regulation looks to address specific AI applications or concerns. An example includes bills dealing with autonomous vehicle policy, which were considered in the last several sessions of Congress but never passed. There have also been proposals to license AI systems, potentially through a new “FDA for algorithms.”[127]

It is possible that both types of AI regulation will advance, but targeted policy efforts likely have a greater chance of passing, at least in the short term. Broad-based measures face more challenges, including a very slow and often somewhat dysfunctional legislative process, especially for fast-moving sectors.[128] At this time, however, neither broad nor targeted federal laws have advanced. Instead, AI governance is mostly taking the form of “soft law”: informal, iterative, and collaborative solutions to governance issues.[129] Notable soft law mechanisms include multi-stakeholder processes; “sandboxes” or experimental test-beds; industry best practices or codes of conduct; technical standards; agency workshops and guidance documents; and education and awareness-building efforts.[130] The courts and common law solutions also supplement these informal mechanisms.

More formal algorithmic regulations may be coming to the United States, and they have already arrived across the Atlantic. The European Union (EU) has implemented a wide variety of data collection mandates that have restricted innovation and competition across the continent.[131] These regulatory burdens have left the EU with few homegrown information technology firms. The EU is also pushing a new AI Act that would comprehensively regulate algorithms, adding still more red tape.

Here in the United States, many states — led by California — are advancing a variety of tech regulations and algorithmic “fairness” regulations.[132] America’s AI innovators thus run the risk of being squeezed between costly and conflicting mandates driven by a “Brussels Effect” (EU efforts to export regulation extraterritorially) and a “California Effect” (state-by-state algorithmic rules, many of which will originate in California). This problem may accelerate federal interest in legislating on this front to counter or complement those regulations. As measures advance, some of the issues or concerns discussed here will likely drive them.

Seven AI Policy Fault Lines

1) Privacy and Data Collection

Perhaps the most important AI policy fault line is also one of the oldest issues in the field of information policy: data collection practices and privacy considerations. Concerns about how collected data might be used by private or government actors have driven calls for privacy legislation for over a decade, but a comprehensive bill has not yet passed.

Because algorithmic systems depend on massive data sets — and because so many connected “smart” devices that make up the Internet of Things (IoT) are powered by AI and ML capabilities — concerns about more widespread data collection will likely expand. AI, big data and the IoT mean we will live in a world of ambient computing. This means that algorithms will be ubiquitous, utilized in our homes and workplaces, and even on our bodies to monitor health and fitness. It is already the case that most Americans carry an algorithmic supercomputer with them at all times in the form of their smartphones.

The tracking and sensor capabilities of these and other connected devices will introduce continuous waves of policy concerns — and regulatory proposals — as new applications develop and more data is collected. Of course, that data collection is what ultimately makes algorithmic systems capable and effective. Heavy-handed regulation could, therefore, limit the potential benefits of algorithmic systems.[133] Last year’s major privacy proposal, the American Data Privacy and Protection Act (ADPPA), already included provisions demanding that large data handlers divulge information about their algorithms and undergo algorithmic design evaluations based on amorphous fairness concerns.

2) Bias and Discrimination

Other policy concerns flow from this first issue. For example, broader data collection and ubiquitous computing lead some to fear potential discrimination and bias in sophisticated algorithmic systems. Measures like the Algorithmic Justice and Online Platform Transparency Act have been introduced to “assess whether the algorithms produce disparate outcomes based on race and other demographic factors in terms of access to housing, employment, financial services, and related matters.” Last August, the FTC proposed a new rule on commercial surveillance and data security that incorporates provisions to address algorithmic error or discrimination.[134] In October, the Biden administration also released a framework for an AI Bill of Rights that warns algorithmic systems can be “unsafe, ineffective, or biased,” and recommended a variety of oversight steps.[135]

Bias, however, can mean different things to different people. Luckily, a large body of law and regulation already exists that could handle some of these claims, including the Civil Rights Act, the Age Discrimination in Employment Act and the Americans with Disabilities Act. Targeted financial laws that might address algorithmic discrimination include the Fair Credit Reporting Act and Equal Credit Opportunity Act. It remains to be seen how regulators and the courts will seek to enforce these statutes or supplement them.

3) Free Speech and Disinformation

There are other, more amorphous concerns about how the growth of algorithmic systems might affect free speech, social interactions and even the future of deliberative democracy. There are currently very heated debates about how algorithms are being used for online content moderation, but conservatives and liberals disagree about the nature of the problem. Some conservatives believe social media algorithms are biased against their political views, while some liberals feel that social media algorithms fuel hate speech and misinformation. The Biden administration ignited a firestorm of controversy last year with its Disinformation Governance Board, which would have created a bureaucracy in the Department of Homeland Security to police some of these issues.[136] The growth of large language models such as ChatGPT is giving rise to still more concerns about how AI tools can be used to deceive or discriminate, even as many people are using such tools to find or generate beneficial new services.[137]

It is unclear how legislation could be crafted to balance these conflicting perspectives, but the Protecting Americans from Dangerous Algorithms Act is a proposed bill that would have regulators oversee how “information delivery or display is ranked, ordered, promoted, recommended, [and] amplified” using algorithms. This debate is linked to the push by many on both the left and right to reform or abolish Section 230 of the Telecommunications Act of 1996, the law that shields digital platforms from liability for content they host that is posted by users. At root, Section 230 protects the editorial discretion of tech platforms, including the ways they configure their algorithms for content moderation purposes. Section 230 has generated enormous economic benefits but also some controversy, as many blame it for any number of social problems.[138] Major Supreme Court cases are pending that involve how social media operators use algorithms either to disseminate or screen content on their sites.

4) Kids’ Safety

Algorithms would also be regulated under many current kids’ safety bills.[139] Online child safety is one of the oldest digital policy debates and an area that has produced a near endless flow of regulatory proposals and corresponding court cases. Some of the most important internet court cases involved First Amendment challenges to legislative efforts to regulate online content in the name of child protection.

Today, critics on both the left and right accuse technology companies of creating algorithmic systems that are intentionally addictive or funnel inappropriate content to children. Last year, California passed an Age-Appropriate Design Code that would regulate algorithmic design in the name of child safety, and many states are following California’s lead with similar proposals. Meanwhile, Congress has considered the Kids Online Safety Act, a bill that would require audits of algorithmic recommendation systems that allegedly target or harm children. Many additional algorithmic regulatory efforts premised on protecting children will likely be introduced this year. Child safety measures are the most likely to advance, but also the most likely to face protracted constitutional challenges, like earlier internet regulatory efforts.

5) Physical Safety and Cybersecurity

Another broad category of concern about AI and ML involves the physical manifestations or uses of algorithmic systems — especially in the form of robotics and IoT devices. AI is already baked into everything from medical diagnostic devices to driverless cars to drones. Existing regulatory agencies are already considering how their existing statutory authority might cover algorithmic innovations in medicine (Food and Drug Administration) and autonomous vehicles and drones (Department of Transportation). Agencies with broader authority, like the FTC and Consumer Product Safety Commission, have also considered how algorithmic systems might be covered through existing statutes and regulations.

The National Institute of Standards and Technology (NIST) also recently released a comprehensive Artificial Intelligence Risk Management Framework, which is “a guidance document for voluntary use by organizations designing, developing, deploying or using AI systems to help manage the many risks of AI technologies.”[140] This soft law effort built upon an earlier NIST Cybersecurity Framework that similarly crafted best practices for connected digital systems.

6) Industrial Policy and Workforce Issues

While most of the policy concerns surrounding AI involve questions about whether governments should limit or restrict certain uses or applications, another body of policy seeks to promote the nation’s algorithmic capabilities to ensure that the United States is prepared to meet the challenge of global competition with many other countries — especially China. Both the Obama and Trump administrations took steps to promote the development of AI technologies.[141]

Last year, Congress passed a massive industrial policy measure — the CHIPS and Science Act — that was often described as an “anti-China” bill. Additional programs and spending have been proposed. This type of algorithmic policymaking is probably easier to advance than most regulatory initiatives.

Another class of promotional activities involves AI-related workforce issues. The oldest concerns about automation involve fears about the displacement of jobs, skills, professions and entire industrial sectors. Fear about technological unemployment is what drove the Luddites to smash machines, and similar fears persist today.[142] For example, the Teamsters Union, which represents truck drivers, has worked to stop progress on federal driverless vehicle legislation for years.[143] Organized opposition to other algorithmic innovations could arrive in the form of formal restrictions on automation in additional fields. Even writers and artists are expressing concern about the potential disruptive impact associated with large language models like ChatGPT and other AI-enabled art generators.[144]

7) National Security and Law Enforcement Issues

There is a close relationship between the national security considerations surrounding AI and the industrial policy initiatives floated to bolster the nation’s computational capabilities in this field. Beyond promotional activities, however, there are growing concerns about how the military or domestic law enforcement officials might use algorithmic or robotic technologies. Some groups call for international rules to limit the use of lethal autonomous weapons.

Global control of AI risks is far more challenging than the control of previous global technological risks, such as nuclear and chemical weapons. Those arms control efforts faced serious international coordination challenges, but algorithmic controls are far more difficult due to the intangible and quicksilver nature of digital code. Regardless, this issue will attract more attention as other countries besides China make strides in military AI and robotic capabilities, creating what some regard as dangerous existential risks to global order.

For law enforcement, the specter of AI systems leading to automated justice or predictive policing raises fears about how algorithms might be used by law enforcement officials or the courts when judging or sentencing people.[145] Governmental uses of algorithmic processes will always raise greater concern and require broader oversight because governments possess coercive powers that private actors do not.

Summary of Current Policy Landscape

This list only scratches the surface in terms of the universe of AI policy issues. Algorithmic policy considerations are now being discussed in many other fields, including education, insurance, transportation, financial services, journalism, energy markets, intellectual property, retail and trade, and more. AI is the ultimate disruptor of the status quo, both culturally and economically. Eventually, almost every sector of the economy and every facet of society will be touched by the computational revolution in some fashion. This process will accelerate and the list of AI-related policy concerns will expand rapidly as it does.

AI risks deserve serious attention, but an equally serious risk exists that an avalanche of fear-driven regulatory proposals will suffocate life-enriching algorithmic innovations.[146] There is a compelling interest in ensuring that AI innovations are developed and made widely available to society.[147] Policymakers should not assume that important algorithmic innovations will just magically come about; our nation must get its innovation culture right if we hope to create a better, more prosperous future.[148]

__________

Key Takeaways:

· Defining the nature and scope of artificial intelligence and its many components and related subsectors is complicated. This fact creates many governance challenges.

· Many other things about AI complicate its governance, including the fact that it is both a general purpose and dual-use technology. AI builds upon knowledge and capabilities developed through many other important technologies and sectors in a combinatorial fashion, meaning that AI governance decisions will affect them as well.

· There are both strong and weak forms of AI, but public imagination and public policy have been too focused on hyper-powerful forms of strong AI that are distant and unlikely. The more important focus today should be on the challenges associated with more targeted applications of weak or narrow AI.

· AI has experienced many “springs” and “winters” over the past half century, reflecting waves of irrational exuberance and pessimism over its potential. Today the field is maturing rapidly.

· Every segment of the economy will be touched by AI in some fashion and AI developments will likely drive economic growth in the future. By extension, all policy matters and governance issues will eventually involve AI considerations in some fashion.

· AI technologies offer individuals and society meaningful improvements in living standards across multiple dimensions. The most profound of these will likely be what AI means for the practice of medicine and personalized health care.

· The AI policy landscape is evolving rapidly and legislative and regulatory proposals are multiplying. Almost every segment of society and sector of the economy will be touched by algorithmic innovations in some fashion. As that process unfolds, political interest in these topics will expand.

[See my “Running List of My Research on AI, ML & Robotics Policy” to learn more about AI policy developments.]

Endnotes:

[1] Michael Wooldridge, A Brief History of Artificial Intelligence: What It Is, Where We Are, and Where We Are Going (Flatiron Books, 2020); Melanie Mitchell, Artificial Intelligence: A Guide for Thinking Humans (Farrar, Straus and Giroux, 2019); Pedro Domingos, The Master Algorithm: How the Quest for the Ultimate Learning Machine Will Remake Our World (Basic Books, 2015).

[2] “Artificial Intelligence,” Stanford Encyclopedia of Philosophy, July 12, 2018. https://plato.stanford.edu/entries/artificial-intelligence.

[3] U.S. Government Accountability Office, “Artificial Intelligence: Emerging Opportunities, Challenges, and Implications,” Technology Assessment, GAO-18–142SP, (Mar. 28, 2018), p. 15. https://www.gao.gov/products/gao-18-142sp.

[4] Ethem Alpaydin, Machine Learning (The MIT Press, 2021), p. 16.

[5] Louridas, Algorithms, p. xiii.

[6] Chris Meserole, “What is Machine Learning?” Brookings Institution, Oct. 4, 2018. https://www.brookings.edu/research/what-is-machine-learning.

[7] Adam Thierer, “Big Data, Innovation, Competitive Advantage & Privacy Concerns,” Technology Liberation Front, Apr. 27, 2012. https://techliberation.com/2012/04/27/big-data-innovation-competitive-advantage-privacy-concerns.

[8] Rishi Bommasani and Percy Liang, “Reflections on Foundation Models,” Stanford University Human-Centered Artificial Intelligence (2021). https://crfm.stanford.edu/2021/10/18/reflections.html.

[9] McKinsey & Company, “Exploring opportunities in the generative AI value chain,” Article, Apr. 26, 2023. https://www.mckinsey.com/capabilities/quantumblack/our-insights/exploring-opportunities-in-the-generative-ai-value-chain.

[10] McKinsey & Company, “What is generative AI?” Article, Jan. 19, 2023. https://www.mckinsey.com/featured-insights/mckinsey-explainers/what-is-generative-ai.

[11] Rishi Bommasani, “On the Opportunities and Risks of Foundation Models,” Center for Research on Foundation Models (July 2021). https://arxiv.org/abs/2108.07258.

[12] K. Eric Drexler, “Reframing Superintelligence: Comprehensive AI Services as General Intelligence,” Technical Report #2019–1 (Future of Humanity Institute, University of Oxford, 2019).

[13] Darcy W.E. Allen, Chris Berg, and Sinclair Davidson, The New Technologies of Freedom (American Institute for Economic Research, 2020), p. 95.

[14] Adam Thierer, “How Science Fiction Dystopianism Shapes the Debate over AI & Robotics,” Discourse, July 26, 2022, https://www.discoursemagazine.com/culture-and-society/2022/07/26/how-science-fiction-dystopianism-shapes-the-debate-over-ai-robotics/; Jill Lepore, “A Golden Age for Dystopian Fiction,” The New Yorker, June 5 & 12, 2017. https://www.newyorker.com/magazine/2017/06/05/a-golden-age-for-dystopian-fiction.

[15] Kevin Kelly, “The AI Cargo Cult: The Myth of a Superhuman AI,” Wired, Apr. 25, 2017. https://www.wired.com/2017/04/the-myth-of-a-superhuman-ai.

[16] Zohar Atkins, “Is AI Sentient?” What is Called Thinking, June 13, 2022. https://whatiscalledthinking.substack.com/p/is-ai-sentient.

[17] Oren Etzioni, “No, the Experts Don’t Think Superintelligent AI is a Threat to Humanity,” Technology Review, Sept. 20, 2016. https://www.technologyreview.com/2016/09/20/70131/no-the-experts-dont-think-superintelligent-ai-is-a-threat-to-humanity; Gary Marcus, “Artificial General Intelligence Is Not as Imminent as You Might Think,” Scientific American, June 6, 2022. https://www.scientificamerican.com/article/artificial-general-intelligence-is-not-as-imminent-as-you-might-think1.

[18] Melanie Mitchell, Artificial Intelligence: A Guide for Thinking Humans (Farrar, Straus and Giroux, 2019), p. 278 [Kindle edition.]

[19] Erik Larson, The Myth of Artificial Intelligence: Why Computers Can’t Think the Way We Do (The Belknap Press of Harvard University, 2021), p. 49.

[20] Erik Larson, “Silicon Valley Has Been Taken over by ‘Technological Kitsch’,” Fast Company, May 12, 2021. https://www.fastcompany.com/90635442/technological-kitsch.

[21] Larson, The Myth of Artificial Intelligence, p. 62.

[22] Quoted in Clive Thompson, Coders: The Making of a New Tribe and the Remaking of the World (Penguin Press, 2019), p. 302.

[23] Cade Metz, “A.I. Is Not Sentient. Why Do People Say It Is?” New York Times, Aug. 5, 2022. https://www.nytimes.com/2022/08/05/technology/ai-sentient-google.html.

[24] Gathering Strength, Gathering Storms: The One Hundred Year Study on Artificial Intelligence (AI100) 2021 Study Panel Report, (Stanford University, Sept. 2021): 32–3, http://ai100.stanford.edu/2021-report.

[25] Nirit Weiss-Blatt, “The AI Doomers’ Playbook,” TechDirt, April 14, 2023. https://www.techdirt.com/2023/04/14/the-ai-doomers-playbook.

[26] Eliezer Yudkowsky, “Pausing AI Developments Isn’t Enough. We Need to Shut it All Down,” Time, March 29, 2023. https://time.com/6266923/ai-eliezer-yudkowsky-open-letter-not-enough.

[27] Steven Levy, “What Deep Blue Tells Us about AI in 2017,” Wired, May 23, 2017. https://www.wired.com/2017/05/what-deep-blue-tells-us-about-ai-in-2017.

[28] “Google AI Defeats Human Go Champion,” BBC, May 25, 2017, https://www.bbc.com/news/technology-40042581.

[29] Joshua Sokol, “AI Keeps Mastering Games, But Can It Win in the Real World?” The Atlantic, Feb. 27, 2018. https://www.theatlantic.com/technology/archive/2018/02/ai-keeps-mastering-games-but-can-it-win-in-the-real-world/554312.

[30] Garry Kasparov, Deep Thinking: Where Machine Intelligence Ends and Human Creativity Begins (Public Affairs, 2017), p 7; Adam Thierer, “The Growing AI Technopanic,” Medium, Apr. 27, 2017. https://aboveintelligent.com/the-growing-ai-technopanic-5d6658b00fed.

[31] Joel Lehman, Jeff Clune, and Sebastian Risi, “An Anarchy of Methods: Current Trends in How Intelligence Is Abstracted in AI,” IEEE Intelligent Systems, Vol. 29, №6 (2014), p. 56–62, https://www.cs.utexas.edu/users/ai-lab/?lehman:is14.

[32] Hal R. Varian, “Computer Mediated Transactions,” American Economic Review, 100:2 (May 2010). https://www.aeaweb.org/articles?id=10.1257/aer.100.2.1.

[33] Alpaydin, Machine Learning, p. 34.

[34] Erik Brynjolfsson and Andrew McAfee, “The Business of Artificial Intelligence,” Harvard Business Review, July 18, 2017. https://hbr.org/2017/07/the-business-of-artificial-intelligence.

[35] Timothy F. Bresnahan and M. Trajtenberg, “General Purpose Technologies ‘Engines of Growth’?” Journal of Econometrics, 65:1 (1995), p. 83–108.

[36] Nicholas Crafts, “Artificial Intelligence as a General-purpose Technology: An Historical Perspective,” Oxford Review of Economic Policy, Vol. 37, №3 (Autumn 2021), p. 521–536. https://academic.oup.com/oxrep/article/37/3/521/6374675.

[37] National Security Commission on Artificial Intelligence, Final Report (2021), p. 1, https://www.nscai.gov; Jayshree Pandya, “The Dual-Use Dilemma of Artificial Intelligence,” Forbes, Jan. 7, 2019, https://www.forbes.com/sites/cognitiveworld/2019/01/07/the-dual-use-dilemma-of-artificial-intelligence.

[38] Adam Thierer, “Existential Risks & Global Governance Issues around AI & Robotics,” last revised Sept. 12, 2022. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4174399.

[39] Alpaydin, Machine Learning, p. 83.

[40] Robert D. Atkinson, “’It’s Going to Kill Us!’ and Other Myths about the Future of Artificial Intelligence,” Information Technology & Innovation Foundation, June 2016. http://www2.itif.org/2016-myths-machine-learning.pdf.

[41] Alex Feerst, “The Use of AI in Online Content Moderation,” American Enterprise Institute (Sept. 2022). https://platforms.aei.org/the-use-of-ai-in-online-content-moderation.

[42] Arianna Johnson, “You’re Already Using AI: Here’s Where It’s At In Everyday Life, From Facial Recognition To Navigation Apps,” Forbes, April 14, 2023. https://www.forbes.com/sites/ariannajohnson/2023/04/14/youre-already-using-ai-heres-where-its-at-in-everyday-life-from-facial-recognition-to-navigation-apps/?sh=1996a1f927ac.

[43] Jacob Passy, “Expedia Wants ChatGPT to Be Your Travel Adviser,” Wall Street Journal, April 4, 2023. https://www.wsj.com/articles/expedia-chatgpt-ai-travel-app-22ffd00.

[44] Robin Fearon “AI Tools Help to Predict Extreme Weather and Save Lives,” Discovery, Aug. 2, 2022. https://www.discovery.com/science/ai-tools-help-to-predict-extreme-weather. “Deep learning can predict tsunami impacts in less than a second,” Phys.org, Dec. 27, 2022. https://phys.org/news/2022-12-deep-tsunami-impacts.html. “NASA-enabled AI Predictions May Give Time to Prepare for Solar Storms,” NASA, March 30, 2023. https://www.nasa.gov/feature/goddard/2023/sun/nasa-enabled-ai-predictions-may-give-time-to-prepare-for-solar-storms.

[45] “How AI-Powered Robots Fulfill Your Online Orders,” Last Week in AI, Jan. 25, 2022. https://lastweekin.ai/p/robot-picking.

[46] Christopher Mims, “How to Build AI That Actually Works for Your Business,” Wall Street Journal, July 23, 2022. https://www.wsj.com/articles/how-to-build-ai-that-actually-works-for-your-business-11658548830.

[47] Cem Dilmegani, “Top 15 Use Cases and Applications of AI in Logistics in 2022,” July 9, 2020, updated May 29, 2022. https://research.aimultiple.com/logistics-ai.

[48] McKinsey & Company, “Succeeding in the AI Supply-chain Revolution,” Article, Apr. 30, 2021. https://www.mckinsey.com/industries/metals-and-mining/our-insights/succeeding-in-the-ai-supply-chain-revolution.

[49] Adam Thierer, “The Internet of Things and Wearable Technology: Addressing Privacy and Security Concerns without Derailing Innovation,” Richmond Journal of Law and Technology, 21:6 (2015). http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2494382.

[50] Christopher Mims, “Why the Future of the Computer Is Everywhere, All the Time,” Wall Street Journal, Oct. 29, 2022. https://www.wsj.com/articles/computer-technology-ambient-computing-11666992784.

[51] Alpaydin, Machine Learning, p. 9.

[52] A Roadmap for US Robotics From Internet to Robotics: 2020 Edition, Sept. 9, 2020. https://www.hichristensen.com/pdf/roadmap-2020.pdf.

[53] Suparna Biswas, Brant Carson, Violet Chung, Shwaitang Singh, and Renny Thomas, “AI-Bank of the Future: Can Banks Meet the AI Challenge?” McKinsey & Company, Sept. 19, 2020. https://www.mckinsey.com/industries/financial-services/our-insights/ai-bank-of-the-future-can-banks-meet-the-ai-challenge.

[54] Maria Lopez Conde and Ian Twinn, “How Artificial Intelligence is Making Transport Safer, Cleaner, More Reliable and Efficient in Emerging Markets,” International Finance Corporation, Note 75 (Nov. 2019). https://www.ifc.org/wps/wcm/connect/7c21eaf5-7d18-43b7-bce1-864e3e42de2b/EMCompass-Note-75-AI-making-transport-safer-in-Emerging-Markets.pdf?MOD=AJPERES&CVID=mV7VCeN.

[55] Ben Forgan, “What Robots Can Do for Retail,” Harvard Business Review, Oct. 1, 2020. https://hbr.org/2020/10/what-robots-can-do-for-retail.

[56] Louis Columbus, “10 Ways AI Has the Potential To Improve Agriculture In 2021,” Forbes, Feb. 17, 2021. https://www.forbes.com/sites/louiscolumbus/2021/02/17/10-ways-ai-has-the-potential-to-improve-agriculture-in-2021/?sh=454d747a7f3b. Loukia Papadopoulos, “New Farming Robot Uses AI to Kill 100,000 Weeds per Hour,” Interesting Engineering, April 27, 2021. https://interestingengineering.com/innovation/new-farming-robot-uses-ai-to-kill-100000-weeds-per-hour.

[57] Anne Hobson, “Artificial Intelligence is Set to Remake Event Experiences,” The Hill, Jan. 11, 2017. https://www.rstreet.org/2017/01/11/artificial-intelligence-is-set-to-remake-event-experiences.

[58] Franklin Wolfe, “How Artificial Intelligence Will Revolutionize the Energy Industry,” Harvard University Graduate School of Arts and Sciences, Special Edition on Artificial Intelligence, Aug. 28, 2017. https://sitn.hms.harvard.edu/flash/2017/artificial-intelligence-will-revolutionize-energy-industry. Scott Patterson, “Why AI Is the Next Big Bet for Climate Tech,” Wall Street Journal, June 1, 2023. https://www.wsj.com/articles/ai-climate-change-clean-energy-investment-e4242a23. Vidya Nagalwade, “Machine Learning can be used to improve energy use in cities,” TechExplorist, May 7, 2023. https://www.techexplorist.com/machine-learning-used-improve-energy-use-cities/60013.

[59] Justine Calma, “How Machine Learning Could Help Save Threatened Species from Extinction,” The Verge, Aug. 4, 2022. https://www.theverge.com/23290902/machine-learning-conservation-data-deficient-species-iucn-red-list-extinction-threat.

[60] Sara Randazzo, “Can Tech Boost Reading? Literacy Tools Come to Classrooms,” Wall Street Journal, Aug. 7, 2022. https://www.wsj.com/articles/literacy-technology-offers-new-ways-to-teach-kids-to-read-11659879846.

[61] Kelsey Reichmann, “How Is the Aviation Industry Enabling Innovation with Artificial Intelligence?” Aviation Today, Dec. 14, 2020. https://www.aviationtoday.com/2020/12/14/aviation-industry-enabling-innovation-artificial-intelligence.

[62] “Artificial Intelligence Reshaping the Automotive Industry,” FutureBridge, Apr. 29, 2020. https://www.futurebridge.com/industry/perspectives-mobility/artificial-intelligence-reshaping-the-automotive-industry.

[63] Daniel Castro and Joshua New, The Promise of Artificial Intelligence (Center for Data Innovation, Oct. 2016). https://datainnovation.org/2016/10/the-promise-of-artificial-intelligence.

[64] Tom Davidson, “Could Advanced AI Drive Explosive Economic Growth?” Open Philanthropy, Research Report, June 25, 2021. https://www.openphilanthropy.org/research/could-advanced-ai-drive-explosive-economic-growth.

[65] Grand View Research, “Artificial Intelligence Market Size Report, 2022–2030,” GVR-1-68038-955-5, April 2022. https://www.grandviewresearch.com/industry-analysis/artificial-intelligence-ai-market.

[66] Marcin Szczepański, “Economic Impacts of Artificial Intelligence (AI),” European Parliamentary Research Service, Briefing PE 637.967 (July 2019), p. 3. https://www.europarl.europa.eu/RegData/etudes/BRIE/2019/637967/EPRS_BRI(2019)637967_EN.pdf.

[67] Jacques Bughin, Jeongmin Seong, James Manyika, Michael Chui, and Raoul Joshi, “Notes from the AI Frontier: Modeling the Impact of AI on the World Economy,” McKinsey Global Institute, Discussion Paper, Sept. 4, 2018. https://www.mckinsey.com/featured-insights/artificial-intelligence/notes-from-the-ai-frontier-modeling-the-impact-of-ai-on-the-world-economy.

[68] Alex Owen-Hill, “5 Super-Dangerous Jobs That Robots Can Do Safely,” Robotiq, Oct. 8, 2019, last updated July 27, 2021. https://blog.robotiq.com/5-super-dangerous-jobs-that-robots-can-do-safely.

[69] J. Hunter Young, Kyle Richardville, Bradley Staats, and Brian J. Miller, “How Algorithms Could Improve Primary Care,” Harvard Business Review, May 6, 2022. https://hbr.org/2022/05/how-algorithms-could-improve-primary-care; PwC, What Doctor? Why AI and Robotics Will Define New Health (2017). https://www.pwc.com/gx/en/industries/healthcare/publications/ai-robotics-new-health/transforming-healthcare.html; Jordan Reimschisel, “The Robot That Saved My Life,” Medium, Apr. 27, 2017. https://aboveintelligent.com/that-robot-saved-my-life-6499d9a2f384.

[70] Anna Megdell, “Machine Learning Creates Opportunity for New Personalized Therapies,” University of Michigan Health Lab, Lab Notes, Sept. 27, 2022. https://labblog.uofmhealth.org/lab-notes/machine-learning-creates-opportunity-for-new-personalized-therapies.

[71] Lee Hood and Nathan Price, “The AI Will See You Now,” Wall Street Journal, April 7, 2023. https://www.wsj.com/articles/the-ai-will-see-you-now-5f8fba14.

[72] Steven Rosenbush, “DeepMind AI Lab Predicts Structure of Most Proteins,” Wall Street Journal, July 28, 2022. https://www.wsj.com/articles/deepmind-ai-lab-predicts-structure-of-most-proteins-11659048143.

[73] Justin Jackson, “Predicting protein folding from single sequences with Meta AI ESM-2,” Phys.org, March 23, 2023. https://phys.org/news/2023-03-protein-sequences-meta-ai-esm-.html.

[74] Sumathi Reddy, “How Doctors Use AI to Help Diagnose Patients,” Wall Street Journal, Feb. 28, 2023. https://www.wsj.com/articles/how-doctors-use-ai-to-help-diagnose-patients-ce4ad025.

[75] Laura Landro, “How Hospitals Are Using AI to Save Lives,” Wall Street Journal, Apr. 10, 2022. https://www.wsj.com/articles/how-hospitals-are-using-ai-to-save-lives-11649610000.

[76] Ibid.

[77] Corinne Purtill, “How AI Changed Organ Donation in the US,” Quartz, Sept. 10, 2018. https://qz.com/1383083/how-ai-changed-organ-donation-in-the-us.

[78] “Researchers Use AI to Triage Patients with Chest Pain,” ScienceDaily, Jan. 23, 2023. https://www.sciencedaily.com/releases/2023/01/230117110422.htm. Paul McClure, “Machine learning algorithm a fast, accurate way of diagnosing heart attack,” New Atlas, May 15, 2023. https://newatlas.com/health-wellbeing/code-acs-machine-learning-algorithm-accurate-heart-attack-diagnosis.

[79] Cedars-Sinai, “Artificial Intelligence Tool May Help Predict Heart Attacks,” March 22, 2022. https://www.cedars-sinai.org/newsroom/artificial-intelligence-tool-may-help-predict-heart-attacks.

[80] University of Zurich, “Artificial Intelligence Improves Treatment in Women with Heart Attacks,” ScienceDaily, Aug. 29, 2022. www.sciencedaily.com/releases/2022/08/220829112918.htm.

[81] Tammy Lovell, “NHS rolls out AI tool which detects heart disease in 20 seconds,” Health Care IT News, March 16, 2022. https://www.healthcareitnews.com/news/emea/nhs-rolls-out-ai-tool-which-detects-heart-disease-20-seconds.

[82] Centers for Disease Control and Prevention, “An Update on Cancer Deaths in the United States,” Feb. 28, 2022. https://www.cdc.gov/cancer/dcpc/research/update-on-cancer-deaths.

[83] Shania Kennedy, “Mayo Clinic ML Can Predict Pancreatic Cancer Earlier than Usual Methods,” Health IT Analytics, July 19, 2022. https://healthitanalytics.com/news/mayo-clinic-ml-can-predict-pancreatic-cancer-earlier-than-usual-methods.

[84] Centers for Disease Control and Prevention, “An Update on Cancer Deaths.”

[85] Cameron Henderson, “UK Scientists Invent an Artificial Eye Which Can Pick up Early Oesophageal Cancer,” Daily Mail, July 23, 2022. https://www.dailymail.co.uk/health/article-11041985/British-scientists-invent-artificial-eye-pics-deadly-throat-cancer.html.

[86] Elizabeth Svoboda, “Artificial Intelligence is Improving the Detection of Lung Cancer,” Nature, Nov. 18, 2020. https://www.nature.com/articles/d41586-020-03157-9. Berkeley Lovelace Jr. et al., “Promising new AI can detect early signs of lung cancer that doctors can’t see,” NBC News, April 11, 2023. https://www.nbcnews.com/health/health-news/promising-new-ai-can-detect-early-signs-lung-cancer-doctors-cant-see-rcna75982.

[87] Erin McNemar, “Artificial Intelligence Advances Breast Cancer Detection,” Health IT Analytics, Oct. 7, 2021. https://healthitanalytics.com/news/artificial-intelligence-advances-breast-cancer-detection. Georgina Torbet, “Google’s AI can detect breast cancer more accurately than experts,” Engadget, Jan. 1, 2020. https://www.engadget.com/2020-01-01-googles-ai-can-detect-breast-cancer-more-accurately-than-expert.html. Adam Satariano and Cade Metz, “Using A.I. to Detect Breast Cancer That Doctors Miss,” New York Times, March 6, 2023. https://www.nytimes.com/2023/03/05/technology/artificial-intelligence-breast-cancer-detection.html.

[88] National Cancer Institute, “Artificial Intelligence Expedites Brain Tumor Diagnosis during Surgery,” Cancer Currents Blog, Feb. 12, 2020. https://www.cancer.gov/news-events/cancer-currents-blog/2020/artificial-intelligence-brain-tumor-diagnosis-surgery. Christine Fisher, “Intel and Penn Medicine are developing an AI to spot brain tumors,” Engadget, May 11, 2020. https://www.engadget.com/intel-penn-medicine-brain-tumor-ai-151105509.html.

[89] Jon Fingas, “Microsoft AI helps diagnose cervical cancer faster,” Engadget, Nov. 10, 2019. https://www.engadget.com/2019-11-10-microsoft-ai-diagnoses-cervical-cancer-faster.html.

[90] Benjamin Hunter, Sumeet Hindocha, and Richard W. Lee, “The Role of Artificial Intelligence in Early Cancer Diagnosis,” Cancers (Basel), 14:6 (Mar. 2022), p. 1524. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8946688. Jon Fingas, “NVIDIA and Medtronic are building an AI-enhanced endoscopy tool,” Engadget, March 21, 2023. https://www.engadget.com/nvidia-and-medtronic-are-building-an-ai-enhanced-endoscopy-tool-161532723.html.

[91] Bendta Schroeder, “Using Machine Learning to Identify Undiagnosable Cancers,” MIT News, Sept. 1, 2022. https://news.mit.edu/2022/using-machine-learning-identify-undiagnosable-cancers-0901.

[92] Rachel Gordon, “Seeing Into the future: Personalized Cancer Screening with Artificial Intelligence,” MIT News, Jan. 21, 2022. https://news.mit.edu/2022/seeing-future-personalized-cancer-screening-artificial-intelligence-0121.

[93] Peter Ruegg, ETH Zurich, “AI Spots Antibiotic Resistance 24 Hours Faster than Old Methods,” Futurity, Jan. 18, 2022. https://www.futurity.org/antibiotic-resistance-artificial-intelligence-2682392-2.

[94] “Better than humans: Artificial intelligence in intensive care units,” Vienna University of Technology, ScienceDaily, May 11, 2023. https://www.sciencedaily.com/releases/2023/05/230511164553.htm. Laura Cech, Johns Hopkins University, “AI Could Prevent Thousands of Sepsis Deaths Yearly,” Futurity, July 22, 2022. https://www.futurity.org/sepsis-artificiall-intelligence-hospitals-deaths-2771192-2. Emily Henderson, “New machine learning model estimates optimal treatment timing for sepsis,” News Medical Life Sciences, April 6, 2023. https://www.news-medical.net/news/20230406/New-machine-learning-model-estimates-optimal-treatment-timing-for-sepsis.aspx.

[95] Ibid.

[96] Brenda Goodman, “A new antibiotic, discovered with artificial intelligence, may defeat a dangerous superbug,” CNN, May 25, 2023. https://www.cnn.com/2023/05/25/health/antibiotic-artificial-intelligence-superbug/index.html.

[97] “Paralysis in the U.S.,” Christopher & Dana Reeve Foundation, last accessed June 11, 2023. https://www.christopherreeve.org/todays-care/paralysis-help-overview/stats-about-paralysis.

[98] Sunil Jacob et al., “Artificial Intelligence Powered EEG-EMG Electrodes for Assisting the Paralyzed,” IEEE Technology Policy and Ethics 4:4 (Sept. 2019), pp. 1–4. https://ieeexplore.ieee.org/document/9778118.

[99] Oliver Whang, “Brain Implants Allow Paralyzed Man to Walk Using His Thoughts,” New York Times, May 24, 2023. https://www.nytimes.com/2023/05/24/science/paralysis-brain-implants-ai.html.

[100] “Artificial Intelligence’s impact on the Lives of People with Disabilities,” Analytics Insight. https://www.analyticsinsight.net/artificial-intelligences-impact-on-the-lives-of-people-with-disabilities.

[101] Shania Kennedy, “AI Tool Can Detect Signs of Mental Health Decline in Text Messages,” Health IT Analytics, Oct. 13, 2022. https://healthitanalytics.com/news/ai-tool-can-detect-signs-of-mental-health-decline-in-text-messages. Dhruv Khullar, “Can A.I. Treat Mental Illness?,” The New Yorker, Feb. 27, 2023. https://www.newyorker.com/magazine/2023/03/06/can-ai-treat-mental-illness. Hazel Tang, “How AI can predict suicide before it’s too late,” AIMed, March 10, 2021. https://ai-med.io/special-report-neurosciences-mental-health/how-ai-can-predict-suicide-before-its-too-late.

[102] “How AI Can Help Design Drugs to Treat Opioid Addiction,” Neuroscience News, Feb. 18, 2023. https://neurosciencenews.com/ai-opioid-addiction-22531/.

[103] “AI accurately identifies normal and abnormal chest x-rays,” Radiological Society of North America, ScienceDaily, March 7, 2023. https://www.sciencedaily.com/releases/2023/03/230307114414.htm.

[104] Bill Gates, “The future our grandchildren deserve,” GatesNotes, Dec. 20, 2022. https://www.gatesnotes.com/The-Year-Ahead-2023#ALChapter6.

[105] Neel V. Patel, “Did AI Just Help Us Discover a Universal COVID Vaccine?” Daily Beast, March 9, 2023. https://www.thedailybeast.com/did-ai-just-help-us-discover-a-universal-covid-vaccine.

[106] Mark Gurman, “Apple Plans AI-Powered Health Coaching Service, Mood Tracker and iPad Health App,” Bloomberg, April 25, 2023. https://www.bloomberg.com/news/articles/2023-04-25/apple-aapl-developing-ai-health-coaching-service-ipados-17-health-app.

[107] Adam Thierer, “What I Learned about the Power of AI at the Cleveland Clinic,” Medium, May 6, 2022. https://medium.com/@AdamThierer/what-i-learned-about-the-power-of-ai-at-the-cleveland-clinic-e5b7768d057d.

[108] “Can the AI driving ChatGPT help to detect early signs of Alzheimer’s disease?,” Drexel University, ScienceDaily, Dec. 22, 2022. https://www.sciencedaily.com/releases/2022/12/221222162415.htm. Priyom Bose, “A machine-learning approach for the early diagnosis of Parkinson’s disease,” News Medical, May 11, 2023. https://www.news-medical.net/news/20230511/A-machine-learning-approach-for-the-early-diagnosis-of-Parkinsons-disease.aspx.

[109] Cem Dilmegani, “Top 18 Healthcare AI Use Cases in 2022,” AI Multiple, May 9, 2022. https://research.aimultiple.com/healthcare-ai-use-cases.

[110] Thierer, “What I Learned about the Power of AI.”

[111] Marcus and Davis, Rebooting AI, p. 67.

[112] Pierre E. Dupont, “A Decade Retrospective of Medical Robotics Research from 2010 to 2020,” Science Robotics, Vol. 6, №60 (Nov. 10, 2021). https://www.science.org/doi/full/10.1126/scirobotics.abi8017.

[113] Dashun Wang and Albert-László Barabási, The Science of Science (Cambridge University Press, 2021), p. 163.

[114] National Cancer Institute, “Can Artificial Intelligence Help See Cancer in New, and Better, Ways?” Cancer Currents Blog, Mar. 22, 2022. https://www.cancer.gov/news-events/cancer-currents-blog/2022/artificial-intelligence-cancer-imaging.

[115] Hood and Price, “The AI Will See You Now.”

[116] Geoff Brumfiel, “Doctors are drowning in paperwork. Some companies claim AI can help,” NPR, April 5, 2023. https://www.npr.org/sections/health-shots/2023/04/05/1167993888/chatgpt-medicine-artificial-intelligence-healthcare.

[117] “New in-home AI tool monitors the health of elderly residents,” University of Waterloo, ScienceDaily, March 23, 2023. https://www.sciencedaily.com/releases/2023/03/230323103402.htm.

[118] Jonathan Shaw, “The Medical-Robotics Revolution,” Harvard Magazine, May-June 2022. https://www.harvardmagazine.com/2022/05/features-medical-robotics-revolution.

[119] Shehmir Javaid, “4 Ways AI is Revolutionizing the Field of Surgery in 2022,” AI Multiple, May 31, 2022. https://research.aimultiple.com/ai-in-surgery.

[120] Joao Medeiros, “The Daring Robot Surgery That Saved a Man’s Life,” Wired, May 18, 2023. https://www.wired.com/story/proximie-remote-surgery-nhs/.

[121] Filip Piekniewski, “AI Winter Is Well on Its Way,” Piekniewski’s Blog, May 28, 2018. https://blog.piekniewski.info/2018/05/28/ai-winter-is-well-on-its-way.

[122] Adam Thierer, “Getting AI Innovation Culture Right,” R Street Institute Policy Study 281 (March 2023). https://www.rstreet.org/research/getting-ai-innovation-culture-right.

[123] Peter Lee, Carey Goldberg, and Isaac Kohane, The AI Revolution in Medicine: GPT-4 and Beyond (Pearson, 2023). https://www.amazon.com/AI-Revolution-Medicine-GPT-4-Beyond/dp/0138200130.

[124] This section is adapted from: Adam Thierer, “Mapping the AI Policy Landscape Circa 2023: Seven Major Fault Lines,” R Street Institute Blog, Feb. 9, 2023. https://www.rstreet.org/commentary/mapping-the-ai-policy-landscape-circa-2023-seven-major-fault-lines.

[125] Melanie Mitchell, “What Does It Mean to Align AI With Human Values?” Quanta Magazine, Dec. 13, 2022. https://www.quantamagazine.org/what-does-it-mean-to-align-ai-with-human-values-20221213.

[126] H.R.6580 — “Algorithmic Accountability Act of 2022,” 117th Congress (2021–2022). https://www.congress.gov/bill/117th-congress/house-bill/6580.

[127] Neil Chilson & Adam Thierer, “The Problem with AI Licensing & an ‘FDA for Algorithms,’” Federalist Society Blog, June 5, 2023. https://fedsoc.org/commentary/fedsoc-blog/the-problem-with-ai-licensing-an-fda-for-algorithms.

[128] Adam Thierer, Governing Emerging Technology in an Age of Policy Fragmentation and Disequilibrium, American Enterprise Institute (April 2022). https://platforms.aei.org/can-the-knowledge-gap-between-regulators-and-innovators-be-narrowed.

[129] Ryan Hagemann, Jennifer Huddleston Skees & Adam Thierer, “Soft Law for Hard Problems: The Governance of Emerging Technologies in an Uncertain Future,” Colorado Technology Law Journal 17 (2018), pp. 37–129. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3118539.

[130] Adam Thierer, “A Brief History of Soft Law in ICT Sectors: Four Case Studies,” Jurimetrics 61 (Fall 2021), pp. 79–119.

[131] Adam Thierer, “Why is the US Following the EU’s Lead on Artificial Intelligence Regulation?” The Hill, July 21, 2022.

[132] Neil Chilson & Adam Thierer, “The Coming Onslaught of ‘Algorithmic Fairness’ Regulations,” Regulatory Transparency Project of the Federalist Society, Nov. 2, 2022. https://rtp.fedsoc.org/paper/the-coming-onslaught-of-algorithmic-fairness-regulations.

[133] Orly Lobel, “The Law of AI for Good,” San Diego Legal Studies Paper №23–001 (Jan. 2023). https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4338862.

[134] Federal Trade Commission, “Trade Regulation Rule on Commercial Surveillance and Data Security,” Aug. 22, 2022. https://www.federalregister.gov/documents/2022/08/22/2022-17752/trade-regulation-rule-on-commercial-surveillance-and-data-security.

[135] White House, Blueprint for an AI Bill of Rights: Making Automated Systems Work for the American People, Oct. 2022. https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf.

[136] Adam Thierer and Patricia Patnode, “Disinformation about the Real Source of the Problem,” Real Clear Policy, May 23, 2022. https://www.realclearpolicy.com/articles/2022/05/23/disinformation_about_the_real_source_of_the_problem_833681.html.

[137] Ian Bogost, “ChatGPT Is Dumber Than You Think,” The Atlantic, Dec. 7, 2022. https://www.theatlantic.com/technology/archive/2022/12/chatgpt-openai-artificial-intelligence-writing-ethics/672386. RapidMotion, “Exploring the Benefits of ChatGPT and Intelligent Automation for Businesses in 2023,” last accessed Feb. 26, 2023. https://www.rapidmation.com/exploring-the-benefits-of-chatgpt-and-intelligent-automation-for-businesses-in-2023.

[138] Ethan Wham, “An Economic Case for Section 230,” Sept. 6, 2019. https://www.project-disco.org/innovation/090619-an-economic-case-for-section-230.

[139] Taylor Barkley, “What Should Policymakers Do about Social Media and Minors?” Center for Growth and Opportunity Research in Focus, Jan. 18, 2023. https://www.thecgo.org/research/what-should-policymakers-do-about-social-media-and-minors.

[140] National Institute of Standards and Technology, Artificial Intelligence Risk Management Framework (AI RMF 1.0), NIST AI 100–1 (Jan. 2023). https://www.nist.gov/news-events/news/2023/01/nist-risk-management-framework-aims-improve-trustworthiness-artificial.

[141] Adam Thierer, “U.S. Artificial Intelligence Governance in the Obama–Trump Years,” IEEE Transactions on Technology and Society 2:4 (2021).

[142] James Pethokoukis, “The Case against Mass Technological Unemployment. (And What Happens If I’m Wrong),” AEI Blog, Jun. 27, 2022. https://www.aei.org/articles/the-case-against-mass-technological-unemployment-and-what-happens-if-im-wrong.

[143] David Shepardson, “U.S. push for self-driving cars faces union, lawyers opposition,” Reuters, Jun. 16, 2021. https://www.reuters.com/business/autos-transportation/us-push-self-driving-cars-faces-union-lawyers-opposition-2021-06-16.

[144] James Hookway, “AI Generated Art for a Comic Book. Human Artists Are Having a Fit,” Wall Street Journal, Jan. 29, 2023. https://www.wsj.com/articles/ai-generator-art-midjourney-zarya-11674856712.

[145] Pranshu Verma, “The never-ending quest to predict crime using AI,” Washington Post, Jul. 15, 2022. https://www.washingtonpost.com/technology/2022/07/15/predictive-policing-algorithms-fail.

[146] Adam Thierer, “Here Come the Code Cops: Senate Hearing Opens Door to FDA for Algorithms & AI Occupational Licensing,” Medium, May 16, 2023. https://medium.com/@AdamThierer/here-come-the-code-cops-senate-hearing-opens-door-to-fda-for-algorithms-ai-occupational-65b16d8f587d.

[147] Marc Andreessen, “Why AI Will Save the World,” Marc Andreessen Substack, June 6, 2023. https://pmarca.substack.com/p/why-ai-will-save-the-world. Adam Thierer, “What OpenAI’s Sam Altman Should Say at the Senate AI Hearing,” R Street Institute Blog, May 15, 2023.

[148] Thierer, “Getting AI Innovation Culture Right.”
