Effective Altruism and Semiconductor Export Controls: Circumstantial Evidence

Reasons to think longtermists played a key part in shaping the Biden administration's policy

Jacob Davis
9 min read · Apr 4, 2023

summary

Policy papers, congressional testimony, and other publicly available documents by longtermists in national politics over the last few years proposed semiconductor export controls much like those eventually implemented by the Biden White House in October 2022; prominent longtermists with a history of support for such controls were well placed in the Biden administration to extensively affect the policy; and the language of the policy reflects motivations similar to these longtermists'. All of this suggests direct longtermist influence on the crafting of the controls, corroborating community rumors of effective altruist involvement.

introduction

Several months ago, rumors began circulating in the effective altruist (EA) community that effective altruists had been directly involved in crafting the Biden White House's semiconductor export controls (first published and implemented in October of 2022). These rumors were contentious. They centered on leaked screenshots from private Discord servers; those appearing in the screenshots insist their words were exaggerated and taken out of context, and that they had already been retracted by the time of the leak, while others argue these retractions amounted to mere reframings.

According to the rumors, those involved in the policy construction were motivated by concerns about “AI safety” (the perceived need to align a future artificial superintelligence with humanity’s values and interests) and hoped the policy would both help curb AI development generally and better cement American control over the field specifically (the US being widely regarded among EAs working on the problem as preferable to China, both on AI and more broadly). AI safety is among the principal, defining concerns of “longtermism,” a (controversial) school of thought within effective altruism that places central emphasis on the future of humanity over extremely long timescales.

[Image: the leaked screenshots, redacted for anonymity]

I do not have special knowledge or insight as to the particulars of these leaks and the controversy surrounding them (which is why I have named none of the parties). And I have no other direct evidence of immediate EA involvement in the Biden administration’s policymaking about China. What I do have, and what I will be presenting here, is evidence that established longtermist members of the EA community were very well situated to influence the administration’s decision to impose semiconductor sanctions on China, that they had an “AI safety”-motivated interest in the US implementing them, and that the policy itself is consistent with such an influence. Together, these provide powerful circumstantial evidence for direct and extensive longtermist involvement in the writing of the policy.

opportunity

Prominent effective altruists were well placed to direct the Biden White House on its China tech export policy.

  • Jason Matheny, CEO of the RAND Corporation, is a “longtermist” EA concerned with AI safety. During 2021 (source) and 2022 (source, source), he served in the White House’s Office of Science and Technology Policy (OSTP) as deputy assistant to the president for technology and national security and deputy director for national security (the head of the OSTP national security division). During this tenure at the OSTP, he would have been ideally positioned to influence the crafting of the export controls eventually published in October.
  • At Georgetown, Matheny founded the Center for Security and Emerging Technology (CSET), which works to place longtermists in government positions. Analysts have noted that longtermism is “baked into their viewpoint.”
  • Saif Khan, director for technology and national security at the White House National Security Council since April of 2021, is a former member of CSET. Like Matheny himself, he would have had the perfect opportunity to influence any White House policy on technology trade. Indeed, it is doubtful that either of them, given their high-level and domain-relevant positions in the administration, would have been uninvolved in policy decisions like the October export controls.
  • Kevin Wolf, a senior fellow at CSET since February of 2022, has a long history of service in and connections with US export policy, having served as assistant secretary of commerce for export administration from 2010 to 2017 (among other work in the field, both private and public). While not himself employed by the Biden White House, his extensive history with the US export regulation system would be valuable to anyone aspiring to influence policy on the subject.
  • This February, Wolf provided congressional testimony on “Advancing National Security and Foreign Policy Through Sanctions, Export Controls, and Other Economic Tools” praising the October controls and urging further policy in the same vein. In his address, he claims knowledge of the specific motivations of the controls’ writers:

BIS did not rely on ECRA’s emerging and foundational technology provisions when publishing this rule so that it would not need to seek public comments before publishing it. (Whether the rule should have been published as proposed warrants a separate discussion.)

motivation

Over the years leading up to (and the months following) the release of the export controls, effective altruists (including Matheny, Khan, and others at CSET) interested in policy informed by AI safety have demonstrated a keen interest in restricting US–China microchip trade. Here is a list of pertinent examples:

  • In 2019, the National Security Commission on Artificial Intelligence (NSCAI) issued a report urging that, for the sake of continued US AI dominance over China, the “United States should commit to a strategy to stay at least two generations ahead of China in state-of-the-art microelectronics” and

modernize export controls and foreign investment screening to better protect critical dual-use technologies — including by building regulatory capacity and fully implementing recent legislative reforms, implementing coordinated export controls on advanced semiconductor manufacturing equipment with allies, and expanding disclosure requirements for investors from competitor nations.

  • The commission included Matheny and was chaired by Eric Schmidt, who runs the philanthropic venture Schmidt Futures, which has deep ties to the EA community, and who is otherwise heavily involved in projects with close links to the EA/longtermist movement, such as the super-PAC Future Forward.
  • In 2020, Matheny testified before Congress on policy surrounding AI, stressing the urgency of maintaining American dominance in AI research and development. To this end, he recommended export controls for semiconductors:

The United States should strengthen U.S.-based semiconductor manufacturing to reduce supply chain risks and create high-quality jobs at home. At the same time, we should work with our allies to ensure that democracies remain at the leading edge of microelectronics by investing in joint research programs and by enforcing multilateral export controls on the semiconductor manufacturing equipment needed to produce advanced chips.

We must better collaborate with allies on R&D for AI safety and security; test & evaluation, validation & verification (TEVV) of AI systems; and testbeds and standards for AI development. We must also identify opportunities to collaborate with competitors, including China, to build confidence and avoid races to the bottom. We should invest in new types of AI technologies that protect privacy and other civil liberties, and tightly control exports of American technology to human rights abusers, such as Chinese companies using advanced AI systems for surveillance.

  • Karson Elmgren, a research analyst at CSET, offers another illustration: last November he published a piece on the semiconductor controls and the broader role of semiconductors in Chinese policy for the first issue of Asterisk magazine, a newly minted EA publication. Elmgren formerly worked at OpenAI (an AI safety organization), lists AI safety as one of his chief research interests, and in June of 2022 wrote a report for CSET suggesting “that the United States expand its collection of open-source intelligence and adopt new export control measures based on high-end chip features” in order to “curtail Chinese military access to AI chips.”

influence

The language of the policy itself invokes concerns about Chinese AI development in motivating the controls:

Advanced computing items and “supercomputers” can be used to enhance data processing and analysis capabilities, including through artificial intelligence (AI) applications. The PRC is rapidly developing exascale supercomputing capabilities and has announced its intent to become the world leader in AI by 2030. These advanced systems are capable of sophisticated data processing and analysis that has multiple uses, and are enabled by advanced ICs. These systems are being used by the PRC for its military modernization efforts to improve the speed and accuracy of its military decision making, planning, and logistics, as well as of its autonomous military systems, such as those used for cognitive electronic warfare, radar, signals intelligence, and jamming. Furthermore, these advanced computing items and “supercomputers” are being used by the PRC to improve calculations in weapons design and testing including for WMD, such as nuclear weapons, hypersonics and other advanced missile systems, and to analyze battlefield effects. In addition, advanced AI surveillance tools, enabled by efficient processing of huge amounts of data, are being used by the PRC without regard for basic human rights to monitor, track, and surveil citizens, among other purposes.

With this rule, BIS seeks to protect U.S. national security and foreign policy interests by restricting the PRC’s access to advanced computing for its military modernization, including nuclear weapons development, facilitation of advanced intelligence collection and analysis, and for surveillance. BIS intends to impose controls on items subject to the EAR and U.S. person activities to limit the PRC’s ability to obtain advanced computing chips or further develop AI and “supercomputer” capabilities for uses that are contrary to U.S. national security and foreign policy interests.

The concerns listed about Chinese uses of artificial intelligence are almost all discussed explicitly in the CSET papers linked above. Compare, for example,

Furthermore, these advanced computing items and ‘supercomputers’ are being used by the PRC to improve calculations in weapons design and testing including for WMD, such as nuclear weapons, hypersonics and other advanced missile systems, and to analyze battlefield effects. In addition, advanced AI surveillance tools, enabled by efficient processing of huge amounts of data, are being used by the PRC without regard for basic human rights to monitor, track, and surveil citizens, among other purposes.

with

State-of-the-art computer chips underpin many of today’s strategically important emerging technologies, including artificial intelligence, 5G, autonomous drones, and surveillance tools. They also power supercomputers, which are essential for everything from cryptography to the design of hypersonic weapons and the latest generation of nuclear weapons. In addition to foreseeable uses, it is safe to bet that chips will play a central role in future generations of advanced weapons.

These similarities in emphasis are what we would expect if the authors of the papers and their friends were themselves at least partly immediately responsible for the content of the policy.

conclusion

Together, the information detailed above amounts to considerable circumstantial evidence for coordinated longtermist involvement in the Biden administration’s semiconductor export controls, deriving from concerns about “AI safety.” Established longtermists were among the staffers best positioned to influence such policy directly and extensively; both individually and as members of CSET, they were enthusiastic about implementing exactly such controls for reasons related to AI; and the text of the policy itself reflects these same motivations. In light of the increasing public interest in AI safety, effective altruism, and longtermist thought, I believe this potential (indeed, probable) connection merits special attention.
