We Used AI to Analyze Congressional Hearings on AI. Here’s What We Learned.
Since the release of ChatGPT in November 2022, artificial intelligence has captured the nation’s attention, and with it, the attention of Congress. Lawmakers used the technology to write speeches, draft legislation, and even create deepfakes of their own voices to open hearings. The sophistication of generative artificial intelligence tools also sparked concerns about misuse in elections and scams, and raised questions about the existential risks posed by AI-powered weapons. We watched lawmakers learn about the technology in real time and grapple with the dangers and potential benefits it brings.
To better understand the progress made in the last year, we reviewed all 28 hearings Congress held on artificial intelligence and used Levity, an AI tool, to dissect the discussion, employing a topic analysis model to analyze lawmaker questions and testimony and determine subject matter.
One important caveat: the dataset we analyzed does not include transcripts from Senator Schumer’s insight forums, which played an important role in advancing Congress’s understanding of artificial intelligence in the last year. These forums were not public, so we have no transcripts to analyze, but we do know the topics explored track with the trends that emerged in this year’s hearings.
Overall, we found that Congress is eager to put guardrails in place for the new technology, as well as determine the best ways to harness it for good. However, as usual, Democrats and Republicans have different concerns.
Methodology
To examine this year’s AI hearings, our team pulled hearing transcripts for 28 AI hearings in the House and Senate this year. Lawmaker testimonies in each hearing were then run through an AI topic analysis model, which scanned for mentions of 25 different AI-related topics. For a full list of topics and hearings included in our analysis, see our data table on congressional hearings and AI topic frequency. Opening remarks by lawmakers, closing remarks, and each line of questioning were counted as individual “testimonies,” analyzed individually by the AI model. Witness testimony was excluded from the analysis in order to narrowly focus on lawmaker interest.
For each hearing, we then determined the percentage of lawmaker testimonies that mentioned each topic. For example, out of twenty lawmaker testimonies in the May 16 Senate Judiciary hearing with Sam Altman, our AI model found that five lawmaker testimonies highlighted AI’s “threat to humanity,” for a total of 25 percent of all testimonies in that hearing.
We also analyzed the percentage of testimonies from Republicans and Democrats that mentioned each topic over the course of the year. For example, out of the 151 recorded testimonies from Republican lawmakers in AI hearings this year, 50 included a focus on the theme of “China,” approximately 33 percent.
The AI topic analysis model was allowed to assign more than one topic to each testimony. Transcripts used for this analysis were pulled from PoliticoPro and from TechPolicy.Press.
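The per-hearing percentages described above reduce to a simple count-and-divide calculation. Here is a minimal sketch; the function name and the testimony records are invented for illustration, and the real topic labels come from Levity's model, not hand-built sets:

```python
def topic_share(records, topic):
    """Percent of testimonies whose assigned topic set contains `topic`."""
    hits = sum(1 for topics in records if topic in topics)
    return 100 * hits / len(records)

# e.g., 5 of 20 testimonies flagged "threat to humanity" -> 25.0
testimonies = [{"threat to humanity"}] * 5 + [set()] * 15
print(topic_share(testimonies, "threat to humanity"))
```

Because the model can assign multiple topics to one testimony, the shares for all topics in a hearing can sum to more than 100 percent.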
Republicans and Democrats talk about AI differently.
The graph above shows the divergence in concerns between Democrats and Republicans in hearings throughout the year. Divergence is measured by comparing the relative frequency of each topic within total Democratic testimony vs. total Republican testimony. For example, the topic of discrimination was mentioned in roughly 20% of testimony from Democrats and 3% of testimony from Republicans. Thus, divergence for the discrimination topic is 17 percentage points, the gap between the two parties’ rates of discussion.
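The divergence measure is just the absolute gap between the two parties’ topic shares, in percentage points. A minimal sketch using the discrimination example (the function name is ours, for illustration):

```python
def divergence(dem_pct, rep_pct):
    """Gap, in percentage points, between party topic shares."""
    return abs(dem_pct - rep_pct)

# ~20% of Democratic vs. ~3% of Republican testimonies mentioned
# discrimination -> divergence of 17 points
print(divergence(20.0, 3.0))
```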
The topics with gray bars, like the need for research and regulation, were mentioned with roughly equal frequency by Democrats and Republicans. Topics with blue bars are those mentioned more frequently by Democrats, while the topics with red bars represent those mentioned more frequently by Republicans.
The comparative differences in party focus reflect larger trends in other policy areas. When Republicans took control of the House this year, one of the first votes they held established a select committee on China. In a year dominated by strikes and labor disputes, including demands by SAG-AFTRA to limit studios’ use of AI, it makes sense that Democrats would spend more time than Republicans examining the risks and opportunities AI presents for workers.
We used Levity to explore Congress’s focus on these topics even further, tracking each time they were mentioned in questions or remarks during a hearing and analyzing the change over the course of the year.
Overall, Congress is eager to understand and regulate AI quickly.
Lawmakers repeatedly said they were wary of approaching AI the way they felt they had approached social media. As we saw in the graph above, there is bipartisan agreement that some sort of regulation is needed for AI. It was a consistent theme in hearings throughout the year.
However, regulatory proposals have changed throughout the year. Early on, lawmakers focused on an overarching framework to regulate the new technology, such as requiring a license to develop any artificial intelligence tool, or requiring AI developers to disclose the model weights and input data used to train their models, which we’ve tracked as calls for “transparency.”
Interestingly, we found some intra-party differences between transparency and licensing requirements. Using the Lugar Center’s Bipartisan Index, we divided Democrats who participated in AI hearings into three groups: “moderates,” “liberals,” and “progressives,” with moderates being the most likely to work with Republicans on a bill and progressives being the least likely. We found that progressive Democrats were more likely to focus on licensing requirements, while moderates and liberals seemed to favor transparency requirements.
As the frequency graphs show, discussion of comprehensive regulatory frameworks like licensing requirements gained momentum during early hearings but declined as lawmakers learned more about the technology and understood the wide variety of use cases.
The challenge of imposing a single regulatory scheme on all artificial intelligence tools was summed up well by Senator Ossoff in an early Senate Judiciary hearing:
As time went on, Congress shifted to a risk-based, sectoral approach, rather than a single regulatory entity or scheme.
At the beginning of the year, the committees traditionally charged with regulating new technologies, Judiciary and Commerce, dominated the discussion. However, by the end of the year, committees with more specific sectoral focuses joined the conversation and explored more specific risks and applications of AI. Hearings in the House Committee on Oversight and Accountability, the Senate Committee on Health, Education, Labor, and Pensions, and the Senate Special Committee on Aging demonstrated the wide range of sectors and industries that could be affected by artificial intelligence.
There was also growing recognition that many concerns with artificial intelligence could be addressed with existing proposals, such as a comprehensive federal privacy law.
President Biden’s Executive Order on AI represents the culmination of the year-long discourse. The Executive Order focuses on specific risks and use cases of the new technology, rather than attempting to regulate the industry as a whole. The order also relies on existing agencies and expertise across the federal government, rather than tasking one specific agency with oversight or creating a new one.
There were two consistent themes in 2023: concerns about misinformation and competition with China.
Throughout the year, the two topics were mentioned in nearly every hearing.
Lawmakers identified early in the year the risk that generative AI could fuel sophisticated misinformation campaigns. Concerns about misinformation are not new, but the sophistication of generative AI tools and the approach of the 2024 election exacerbated them. Lawmakers remain concerned that AI-generated misinformation will be harder to identify and harder to contain.
Similarly, concerns about China outpacing the U.S. in the development of artificial intelligence tools and shaping global norms about its development and use were raised consistently throughout the year. As the recent debate among Republican presidential candidates about TikTok influencing American views shows, we will only be hearing more about China’s technological influence in 2024.
Looking ahead to 2024
Artificial intelligence will remain a focus for the federal government, especially its implications for the 2024 election. We have already seen a last-minute attempt by Senator Hawley to exempt AI-generated content from Section 230 protections, offering a hint of the debate to come about synthetic content and platforms’ obligations in an election year.
The Executive Order on AI also tasked nearly every federal agency with investigating uses of artificial intelligence within their jurisdiction and developing guidelines and best practices for the private sector, as well as legislative recommendations, by mid-2024. The Executive Order also created the White House Artificial Intelligence Council and tasked it with coordinating any AI-related policies. With that framework in place, we expect even more activity on artificial intelligence to come in 2024.
Chamber of Progress (progresschamber.org) is a center-left tech industry association promoting technology’s progressive future. We work to ensure that all Americans benefit from technological leaps, and that the tech industry operates responsibly and fairly.
Our work is supported by our corporate partners, but our partners do not sit on our board of directors and do not have a vote on or veto over our positions. We do not speak for individual partner companies and remain true to our stated principles even when our partners disagree.