Photo by Possessed Photography on Unsplash


People With More Knowledge of Artificial Intelligence (AI) Prefer Stronger Regulation of the Technology

New experimental evidence suggests that as more people learn about AI, support for new regulations will grow

Christopher Witko
Published in 3Streams
4 min read · Jan 24, 2024

Across affluent democracies, artificial intelligence (AI) is expected to transform many occupations in the coming years. According to one estimate, made even before the latest AI advances, nearly 50% of U.S. jobs could be challenged by technology in the relatively near future. Regardless of whether particular jobs will actually disappear, few occupations will be untouched by today’s technological advancements.

Though there is uncertainty about the impact of AI on jobs, workers are nonetheless forming opinions on the matter. While prior rounds of automation mostly targeted the repetitive, routine tasks performed by blue-collar workers, technologies now proliferating that use AI, such as large language models (LLMs), are expected to have a greater impact on white-collar, professional workers.

As governments wrestle with the question of whether to regulate AI, how the policy attitudes of these different types of workers are responding to this new wave of technology is an open question. Workers who are exposed to certain labor market threats, or who merely believe they are, often prefer to regulate them (think of immigration or trade).

Do workers concerned about AI do the same? The short answer is: yes.

Prior research has examined how exposure to technology, or to information about technology in the workplace, shapes welfare attitudes or candidate and party preferences, but there is little research on regulatory attitudes.

In a working paper titled “Self-interest and Preferences for the Regulation of Artificial Intelligence,” Tobias Heinrich and I use an experimental approach to examine how exposure to information about the workplace impacts of artificial intelligence shapes opinions about its regulation. To do so, we show subjects one of two types of information — or treatments — about the impact of AI in the workplace. One treatment explains that AI will transform white-collar jobs that are primarily cognitive and non-routine, and the other discusses AI’s ability to displace jobs consisting of routine, non-cognitive tasks. We contrast these treatments with a placebo group shown information about how AI is being used in a niche field (by scientists), with minimal labor market implications.

We examine how exposure to the treatments shapes expected future income (EFI) for oneself and for similar workers, and ultimately regulatory attitudes, measured with a set of questions about preferences for different entities regulating AI (e.g., federal and state governments), preferences for different types of AI regulation, and whether regulating AI is an important issue (salience).

We also allow the effect of exposure to the information to vary based on whether people have high or low existing knowledge of artificial intelligence. We expected that people exposed to information about the workplace impacts of AI would prefer more regulation than those exposed to the placebo, and even more so if the treatment referred to their own type of job. As a secondary outcome, we also examine whether exposure to the treatments shapes welfare preferences.

On average, the two AI treatments have little effect on expectations for one’s own income or that of similar workers, and little relationship with regulatory or welfare policy attitudes for the sample as a whole. However, we find that people with high knowledge of AI who are exposed to either of the two AI treatments prefer more regulation and welfare spending compared to similarly knowledgeable people in the placebo condition.

The top row of the figure below presents the effect of exposure to either AI job treatment. The left panel shows the effect on expected future incomes for oneself and for others, the middle panel shows the direct effect on regulatory attitudes, and the right panel shows total effects. The total effects are essentially equivalent to the direct effects because exposure does not shape wage expectations, which means some mechanism other than expected wage losses is driving support for regulation and welfare spending among high-knowledge respondents exposed to the informational treatments. The next two rows show similar results for regulatory salience and welfare attitudes.

Our findings are important because they show that many people are not responsive to information about the impact of technology in the workplace, but that priming the labor market impact of AI can nevertheless shape demand for its regulation among those with existing knowledge of AI, who are presumably better positioned to advocate for regulation. This effect holds for cognitive workers exposed to either AI treatment, not just the one tailored to their current occupation, indicating some degree of non-self-interested response.

Our findings have broad implications for AI politics and policymaking. The discussion of regulating AI is happening in many countries, including the U.S., but there is very little study of regulatory policy attitudes, which are potentially important to these debates in democracies. In addition, our research adds to a growing literature showing that people are seemingly not strongly responsive to the threat of technology in their policy attitudes, compared to threats from foreign trade and immigration.

Thus, one major implication is that mass demand for AI regulation is unlikely to emerge in the near term. Instead, it seems a smaller group of relative specialists will shape policy alongside interest groups, as in many highly technical policy areas. We should note, however, that knowledge of AI is not fixed, and as these technologies are deployed, many people will become more knowledgeable. Furthermore, AI may become more politicized, like trade and immigration. Either of these developments could have important implications for regulatory and welfare policy attitudes.

Christopher Witko is Professor and Associate Director @PSUPublicPolicy