Responses to Jack Clark’s AI Policy Tweetstorm

Adam Thierer
Aug 8, 2022 · 28 min read


Artificial intelligence guru Jack Clark has written the longest, most interesting Twitter thread on AI policy that I’ve ever read. After a brief introductory tweet on August 6, Clark went on to post an additional 79 tweets in this thread. It was a real tour de force. Because I’m currently finishing up a new book on AI governance, I decided to respond to some of his thoughts on the future of governance for artificial intelligence (AI) and machine learning (ML).

Clark is a leading figure in the field of AI science and AI policy today. He is the co-founder of Anthropic, an AI safety and research company, and he previously served as the Policy Director of OpenAI. So, I take seriously what he has to say on AI governance matters and really learned a lot from his tweetstorm.

But I also want to push back on a few things. Specifically, several of the issues that Clark raises about AI governance are not unique to AI per se; they are broadly applicable to many other emerging technology sectors, and even some traditional ones. Below, I will refer to this as my “general critique” of Clark’s tweetstorm. On the other hand, Clark correctly points to some issues that are unique to AI/ML and which really do complicate the governance of computational systems.

To make things easier, I numbered and bolded each of Clark’s tweets and then grouped several of them together thematically before offering some brief responses. For easier reading, you can find the unrolled version of Clark’s entire tweetstorm here. I don’t respond to all his points here, however. I ran out of gas after about 6,000 words. LOL.

1/ A surprisingly large fraction of AI policy work at large technology companies is about doing ‘follow the birdie’ with government — getting them to look in one direction, and away from another area of tech progress

Clark needs to provide some examples to substantiate this claim better, but there is one way I agree with his argument. During my 30-year career in the tech policy world, I have witnessed many tech companies playing a very specific type of “follow the birdie” game with government. It is a game more focused on throwing competitors (or rival technologies) under the bus by pretending those other companies are the real problem that governments should focus on. The easiest way to divert attention from your own issues is to suggest someone else is causing bigger issues! The history of modern tech antitrust and intellectual property squabbles consists of companies playing this game to varying effect. But Clark seems to be suggesting that companies are doing this for specific technological capabilities in the AI space. I can’t provide further comment without knowing more specifics.

2/ The vast majority of AI policy people I speak to seem to not be that interested in understanding the guts of the technology they’re doing policy about

I think Clark makes a valid point here, but it’s probably also true of a great many sectors and technologies. Again, this is my general critique of much of what he says throughout the tweetstorm. Many people in the Science and Technology Studies field seem eager to rush to judgment about hypothetical harms raised by AI, ML, or robotic technologies without spending much time really digging into the nuts and bolts of how these systems work in practice, or what their true capabilities entail.

3/ China has a much more well-developed AI policy approach than that in Europe and the United States. China is actually surprisingly good at regulating things around, say, synthetic media.

48/ AI may be one of the key ingredients to maintaining political stability in the future. If Xi is able to retain control in China, the surveillance capabilities of AI will partially be why. This has vast and dire implications for the world — countries copy what works.

It’s fair to say that China is out ahead of the U.S. on AI regulation, but it is not ahead of the European Union. As I noted in a recent essay on “Why the Future of AI Will Not Be Invented in Europe,” the EU has charted an ambitious regulatory agenda for AI, ML, and robotics that includes a forthcoming Artificial Intelligence Act. That law will establish a comprehensive top-down regulatory framework for the sector that will likely decimate AI innovation across the continent. And this comes in addition to several other new (Digital Services Act, Digital Markets Act) and existing (GDPR, various other data protection rules) regulations that the EU imposes on the data economy.

The Chinese regulatory system for data-driven sectors continues to evolve but is characterized by a strange mix of permissionless innovation in select instances (mostly for state-favored “national champions”), but also arbitrary and autocratic actions at other times (with major Internet CEOs “disappearing” for a few months after crossing the CCP on various priorities). It’s hard to know what to make of that governance system at this stage because there’s plenty we don’t know about how exactly the CCP really calls the shots behind the scenes for major tech companies.

4/ There is not some secret team working for a government in the West building incredibly large-scale general models. There are teams doing applied work in intelligence.

I think that’s right, but I’m not sure we really have a very clear idea of exactly what the intelligence community has done with general models so far on this front. I’ve tried to track the DoD’s unclassified work on AI policy and monitor other developments in this space, but there’s so much we don’t know that it makes it hard to evaluate current capabilities/intentions.

5/ The real danger in Western AI policy isn’t that AI is doing bad stuff, it’s that governments are so unfathomably behind the frontier that they have no notion of _how_ to regulate, and it’s unclear if they _can_

What Clark is referring to here goes by the name of “the pacing problem” and it’s a dominant theme in much of the literature surrounding AI/ML policy. The pacing problem refers to the quickening pace of technological developments and the inability of governments to keep up with those changes. Another name for the pacing problem is the “law of disruption,” a phrase coined by technology policy analyst Larry Downes to describe how “technology changes exponentially, but social, economic, and legal systems change incrementally.” Azeem Azhar uses yet another term for the same idea: “the exponential gap.”

In my own books, I’ve suggested that the pacing problem acts as “the great equalizer in debates over technological governance” because it forces governments to think and act differently when attempting to create governance frameworks for many emerging technologies. And that is truer for AI than almost any other field. My forthcoming book is all about this problem and how we’ll need to come up with constructive, bottom-up solutions to address AI governance in a more agile and iterative fashion if traditional mechanisms continue to fail.

9/ Most technical people think policy can’t matter for AI, because of the aforementioned unfathomably-behind nature of most governments. A surprisingly large % of people who think this also think this isn’t a problem.

13/ Many technologists (including myself) are genuinely nervous about the pace of progress. It’s absolutely thrilling, but the fact it’s progressing at like 1000X the rate of gov capacity building is genuine nightmare fuel.

Clark’s 9th and 13th tweets bring us back to the pacing problem again, but now with an added twist: “Technical people” just don’t place much stock in the ability of policymakers to understand AI/ML, or they just don’t think they’ll be able to keep up with that change. Again, as I noted above, I think they have a point because the pacing problem is a very legitimate issue for policymakers today. Clark has actually explained this nicely in an interesting paper with Gillian K. Hadfield, in which they noted how, “Regulatory strategies developed in the public sector operate on a time scale that is much slower than AI progress, and governments have limited public funds for investing in the regulatory innovation to keep up with the complexity of AI’s evolution. AI also operates on a global scale that is misaligned with regulatory regimes organized on the basis of the nation state.”

This is why many technical people are skeptical about whether policymakers can keep up. But it doesn’t mean they can just ignore AI policy altogether because, even in the absence of formal regulatory policies, very legitimate concerns exist that demand some sort of governance responses. Again, this is the focus of my forthcoming book: Devising rough governance solutions and best practices for ethical AI development/use, especially when the pacing problem makes formal regulation harder. Something needs to fill that governance vacuum.

6/ Many AI policy teams in industry are constructed as basic the second line of brand defense after the public relations team. A huge % of policy work is based around reacting to perceived optics problems, rather than real problems.

Generally agree, but I will say that a lot of the best people I interact with on corporate AI policy teams have their hands and tongues tied to some degree, often because the firms want to avoid raising even greater fears among policymakers than already exist. Alternatively, they just want to steer clear of the potential for over-zealous trial attorneys to launch frivolous lawsuits. This means that we often hear more fluff from corporate PR teams and less from the many brilliant behind-the-scenes people doing the hard work on AI development / policy at many firms today. This sucks but, once again, we see this problem at work in many other sectors.

7/ Many of the problems in AI policy stem from the fact that economy-of-scale capitalism is, by nature, anti-democratic, and capex-intensive AI is therefore anti-democratic. No one really wants to admit this. It’s awkward to bring it up at parties (I am not fun at parties).

22/ AI policy is anti-democratic for the same reasons as large-scale AI being anti-democratic — companies have money, so they can build teams to turn up at meetings all the time and slowly move the overton window. It’s hard to do this if it’s not your dayjob.

I fear we’ve moved into the realm of capitalism-bashing now, which is fine if you can actually explain (a) what you mean by capitalism being “anti-democratic” and then (b) why capex-intensive AI in particular is anti-democratic. Are we supposed to use policy levers to limit large corporate R&D expenditures on AI/ML/robotics in the name of keeping the entire field small enough that it is somehow more democratic? That seems like a recipe for failure and an open invitation for China and other foreign powers to dominate the field if we hobble larger players by design.

10/ A surprisingly large amount of AI policy is illegible, because mostly the PR-friendly stuff gets published, and many of the smartest people working in AI policy circulate all their stuff privately (this is a weird dynamic and probably a quirk/departure from norm)

Yup, but my general critique applies to some extent here again. This is a problem in many other fields. I was at an emerging technology conference recently where someone made the exact same point about nanotechnology policy discussions.

11/ Many of the immediate problems of AI (e.g, bias) are so widely talked about because they’re at least somewhat tractable (you can make measures, you can assess, you can audit). Many of the longterm problems aren’t discussed because no one has a clue what to do about them.

61/ Most policymakers presume things exist which don’t actually exist — like the ability to measure or evaluate a system accurately for fairness. Regulations are being written where no technology today exists that can be used to enforce that regulation.

54/ To get stuff done in policy you have to be wildly specific. CERN for AI? Cute idea. Now tell me about precise funding mechanisms, agency ownership, plan for funding over long-term. If you don’t do the details, you don’t get stuff done.

Clark is exactly correct with these 3 tweets. While the discussion about handling AI bias is open to a lot of competing interpretations and solutions, at least we have a rough idea of what the problem entails. And AI audits and impact assessments can help us address it — although we should be careful about mandating them. But when we turn to longer-term issues, the risks in question are often so amorphous and filled with so many uncertainties that we’re left with a public dialogue about them driven more by dystopian sci-fi scenarios than by any hard facts. And solutions are massively complicated and controversial when we start thinking through global efforts to control “superintelligence” (more on this below).

When we broaden policy discussions out to include values like “AI fairness,” the debate gets equally muddled. I mean, Plato and Aristotle couldn’t even get on the same page about what fairness meant, so we’re going to be hard-pressed to define/solve it when we talk about it in the context of algorithmic decisionmaking! This is why it is essential in these debates to try to move as quickly as possible away from abstract aspirational values and towards more concrete deliverables.

For example, although there are many trade-offs associated with algorithmic “explainability” or “transparency,” these are at least somewhat more sensible animating principles than “fairness.” To be clear, fairness matters, but you need to define that term when you use it and be far more concrete about exactly what you expect regulation to accomplish when using it. If you don’t, you’ll introduce massive uncertainty into the development process. If we want to get things done — both in the business of AI and the policy of AI — then greater specificity is essential. As Clark rightly concludes, “If you don’t do the details, you don’t get stuff done.”
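To make that concreteness point a bit more tangible, here is a minimal illustrative sketch in Python. Everything in it is hypothetical (the loan-approval predictions, the group labels, and the choice of demographic parity as the metric are my own example, not anything from Clark’s thread); the point is simply that once you commit to a specific, operational definition, bias becomes something you can measure and audit.

```python
# Illustrative only: one narrow, auditable definition of "fairness"
# (the demographic parity gap) applied to hypothetical model outputs.
# Real audits weigh many competing metrics and trade-offs.

def demographic_parity_gap(predictions, groups):
    """Absolute gap in positive-outcome rates between exactly two groups."""
    rates = {}
    for g in set(groups):
        outcomes = [p for p, grp in zip(predictions, groups) if grp == g]
        rates[g] = sum(outcomes) / len(outcomes)
    a, b = rates.values()
    return abs(a - b), rates

# Hypothetical loan-approval decisions (1 = approved) for two groups.
preds  = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

gap, rates = demographic_parity_gap(preds, groups)
print(f"Approval rates by group: {rates}; parity gap: {gap:.2f}")
```

Whether demographic parity is the “right” metric is exactly the kind of contested question regulators will face, but a concrete definition like this is the precondition for the measurement, assessment, and auditing that Clark says makes bias at least somewhat tractable.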

14/ The default outcome of current AI policy trends in the West is we all get to live in Libertarian Snowcrash wonderland where a small number of companies rewire the world. Everyone can see this train coming along and can’t work out how to stop it.

50/ Policy is permissionless — companies drill employees to not talk to policymakers and only let those talks happen through gov affairs teams and choreographed meetings. This isn’t a law, it’s just brainwashing. Engineers should talk directly to policy people.

Well, earlier in his tweetstorm, we got a dose of capitalism bashing from Clark, so I suppose a bit of libertarian bashing was bound to follow! But seriously, I hang around a lot of libertarians and I don’t really know any that hope for a day when “a small number of companies rewire the world.” Most of them desire a vibrantly competitive, open marketplace for AI and other emerging technologies.

Regardless, I’m not sure that this necessarily describes the default outcome for AI policy because I already have a very hard time mapping out all the major players in the field and all the new ones popping up each year. I was just using this State of AI report chart below in a presentation I did. It’s a nice snapshot of the continuing innovation and competition we see in this space.

Of course, Clark’s broader point is probably that, regardless of how many innovators we have, they operate largely free of sector-specific regulation by default. In my work on tech governance trends, I often refer to sectors or technologies that are “born free” (initially free of preemptive, precautionary regulations) versus others that are “born into captivity” (those immediately pigeonholed into an existing Analog Era regulatory regime/agency). But the “born free” default is actually a good thing in most cases! It gives us a chance to let innovation blossom and then better determine what sort of governance responses (including administrative regulation) may be needed once we have a better indication of what the actual problems are.

15/ Like 95% of the immediate problems of AI policy are just “who has power under capitalism”, and you literally can’t do anything about it. AI costs money. Companies have money. Therefore companies build AI. Most talk about democratization is PR-friendly bullshit that ignores this.

I’m glad that Clark highlights how significant private sector resources will be needed to scale up AI, but we’re back to a bit of capitalism-bashing again with this talk of how private AI development is somehow inherently anti-democratic. He really needs to unpack that in more detail and move away from broad generalizations. He should also recognize that just because we do not currently have an over-arching AI regulatory regime in the United States, a great many other legal instruments (consumer protection policies, anti-fraud laws, torts and class actions, etc.) and targeted laws/agencies exist that do regulate AI/ML in a general sense. The Federal Trade Commission is already ramping up its AI-focused enforcement efforts, and many other agencies are already active in this space.

Therefore, if a developer’s AI tool or application harms someone in a material way today, a great many remedies already exist to address it. Yes, perhaps some additional remedies might be needed, but let’s not pretend that we live in some sort of anarchic system today because we absolutely do not.

16/ Some companies deliberately keep their AI policy teams AWAY from engineers. I regularly get emails from engineers at $bigtech asking me to INTRO THEM to their own policy teams, or give them advice on how to raise policy issues with them.

17/ Sometimes, bigtech companies seem to go completely batshit about some AI policy issue, and 90% of the time it’s because some internal group has figured out a way to run an internal successful political campaign and the resulting policy moves are about hiring retention.

I don’t know enough on this matter to comment on these two tweets, but I suspect at least the first is a valid complaint from what I’ve seen happen in tech companies in the past. But, yet again, it’s probably been an issue in many other sectors and companies. Lawyers vs. engineers is a very old battle that has played out in countless industries and firms long before AI came along. There’s nothing new here.

18/ Some people who work on frontier AI policy think a legitimate goal of AI policy should be to ensure governments (especially US government) has almost no understanding of rate of progress at the frontier, thinking it safer for companies to rambo this solo (I disagree with this).

We’re back to the pacing problem. See previous discussions for that. But I think Clark goes a bit too far in suggesting that leading AI people are out to just run roughshod over policymakers and “rambo” things through. Hard for me to prove him wrong, but in my experience with related tech policy issues, even if many companies and developers would prefer not to be aggressively regulated, a great many of them are willing (even eager) to offer advice to policymakers and hear out their concerns. I can think of a few exceptions, but usually they came around to engaging with policymakers at some point.

For example, when Steve Jobs was alive, he famously used to tell his people to mostly ignore Washington, and most other governments for that matter. For a time, Jobs and Apple really did just try to “rambo” everything through without much consultation with lawmakers or regulators. But all that changed after he passed away and Apple came to realize that they would need to engage with policymakers at all levels. Of course, examples of the rambo strategy are out there. Elon Musk and Tesla are the primary example today. We’ll see how that ends for them. I’ve suggested that it won’t end well in this recent essay. For a time, Uber and 23andMe tried Rambo strategies, but they eventually came around to working with policymakers more actively. (I discuss all these examples in far more detail in my recent book, Evasive Entrepreneurs and the Future of Governance.)

19/ It’s functionally impossible to talk about the weird (and legitimate) problems of AI alignment in public/broad forums (e.g, this twitter thread). It is like signing up to be pelted with rotten vegetables, or called a bigot. This makes it hard to discuss these issues in public.

Ha, yes, this is 100% correct and very well put!

20/ AI really is going to change the world. Things are going to get 100–1000X cheaper and more efficient. This is mostly great. However, historically, when you make stuff 100X-1000X cheaper, you upend the geopolitical order. This time probably won’t be different.

Again — and as Clark implicitly acknowledges — this is not entirely unique to AI. Go through the history of previous general-purpose technologies from the printing press through to the Internet and you can make a strong case that they had a profound impact on geopolitical order. But it certainly wasn’t all bad! In fact, making information radically cheaper helped democratize the creation and diffusion of knowledge and it gave average people the chance to have more of a voice and push back against larger and more powerful forces. Of course, the same technologies had many destabilizing downsides that are undeniable. There are trade-offs, but on net, making stuff 100x to 1000x cheaper is generally beneficial for civilization, even with the short-term disruptions it causes.

23/ Lots of the seemingly most robust solutions for reducing AI risk require the following things to happen: full information sharing on capabilities between US and China and full monitoring of software being run on all computers everywhere all the time. Pretty hard to do!

It’s not just that a global surveillance regime for AI/ML research and development would be “pretty hard to do.” It would also be potentially horrific in practice! Nick Bostrom has already sketched out a global surveillance regime for AI science in his widely-read “vulnerable world hypothesis” paper. A mass surveillance apparatus would not necessarily guarantee workable containment solutions to the sort of disasters that Bostrom fears, but it certainly would open the door to a different type of disaster in the form of highly repressive state controls on communications, individual movement, and other activities. “Global totalitarianism is its own existential risk,” notes Maxwell Tabarrok in a recent response to Bostrom’s proposal.

Moreover, who is going to set up such a global surveillance regime, anyway? The United Nations? Hell, they’ve recently let North Korea take over as head of the UN’s Conference on Disarmament!! I’ve responded to Bostrom’s proposal in far more detail in this chapter on “Existential Risks & Global Governance Issues around AI & Robotics,” which will appear in my forthcoming book on AI governance. (Note: That link sends you to Ver. 1.4 of the article. The chapter continues to evolve and grow.)

24/ It’s likely that companies are one of the most effective ways to build decent AI systems — companies have money, can move quickly, and have fewer stakeholders than governments. This is a societal failing and many problems in AI deployment stem from this basic fact.

25/ Most technologist feel like they can do anything wrt AI because governments (in West) have shown pretty much zero interest in regulating AI, beyond punishing infractions in a small number of products. Many orgs do skeezy shit under the radar and gamble no one will notice.

I disagree with the thrust of both these points. It suggests that most AI technologists are little more than Bond villains hell-bent on destroying humanity by developing “skeezy shit under the radar.” It’s a cartoonish and unsubstantiated claim. In reality, the vast majority of AI developers — and Clark certainly knows this because he works closely with more of them than I do — are not out to destroy the world but rather to make it a better place. Are there bad actors out there in AI land? Of course there are because there are bad actors at work in many different contexts — sometimes including inside government processes!

Speaking of which, Clark seems to have a highly romantic view of traditional government processes, imagining them to be remarkably democratic and inclusive of relevant stakeholders. In reality, we know this often isn’t the case. I won’t go off into a rant about regulatory capture or the shortcomings of many political systems, but Clark needs to cut back on the “societal failing” rhetoric as it pertains to the current state of AI governance. We can find and address bad actors using many existing mechanisms and remedies. (See my answer to #15 above for more on that.)

Meanwhile, as governments struggle to adjust, a massive array of “soft law” governance mechanisms for AI continue to develop. Soft law refers to agile, adaptable governance schemes for emerging technology that create substantive expectations and best practices for innovators without regulatory mandates. Stanford University’s Artificial Intelligence Index Report 2022, which Clark co-directed, noted that one of the most important trends in the field was “the rise of AI ethics everywhere.” The report summarized the explosive growth of ethical frameworks and guidelines for AI that has been occurring throughout academia and industry. Last year, a team of Arizona State University legal scholars published the most comprehensive survey of soft law efforts for AI to date. They analyzed 634 soft law AI programs that were formulated between 2016 and 2019. 36% of these efforts were initiated by governments, with the others being led by non-profits or private sector bodies.

These soft law frameworks have already been hugely important in shaping ethical norms for AI development. And, importantly, leading figures in the field of AI/ML — including a huge array of private developers — have been working actively to advance these efforts. This is not “societal failing” but rather the exact opposite: It represents constructive and meaningful steps toward “baking in” ethical best practices at a global scale. We should applaud these efforts.

26/ Discussions about AGI tend to be pointless as no one has a precise definition of AGI, and most people have radically different definitions. In many ways, AGI feels more like a shibboleth used to understand if someone is in- or out-group wrt some issues.

I completely agree, and I’ll just add one additional point: It doesn’t help that both supporters and critics of powerful AGI sometimes play up predictions of AI superintelligence and speak in fatalistic terms about the coming of a “singularity,” or moment in the future when machine intelligence surpasses that of humans.

For example, flamboyantly titled books by AGI boosters like Ray Kurzweil (The Singularity Is Near) and detractors like Nick Bostrom (Superintelligence: Paths, Dangers, Strategies) reflect an air of inevitability about machines coming to possess greater intelligence than humans, for better or worse. In other words, on both extremes of the AGI debate, we see (a) extreme technologically deterministic thinking at work and (b) a tendency for that techno-determinism to be expressed in highly provocative ways. This tends to suck all the oxygen out of the room when reasonable people are trying to discuss the actual capabilities of modern AI systems, which do not come anywhere close to what those voices suggest.

27/ The concept of ‘information hazards’ regularly ties up some of the smartest people and causes them to become extremely unproductive and afraid to talk or think about certain ideas. It’s a bit of a mind virus.

28/ At the same time, there are certain insights which can seem really frightening and may actually be genuine information hazards, and it’s very hard to understand when you’re being appropriately paranoid, and when you’re being crazy (see above).

Yes, fair points. All we can do is try our hardest to do reasonable cost-benefit analysis of these things but acknowledge that we must cope with a profound degree of uncertainty with regard to many scenarios. (More on this below.)

29/ It’s very hard to bring the various members of the AI world together around one table, because some people who work on longterm/AGI-style policy tend to ignore, minimize, or just not consider the immediate problems of AI deployment/harms. V alienating.

65/ Norms and best practices only work on people who have an incentive to adopt them (e.g, companies to minimize PR/policy risks). The hard problem is coming up with enforcement mechanisms that can influence the people who don’t care about norms and best practices.

A good point, but building on my response to #25 above, steps are being taken to bring various members of the AI community together more regularly and formally to discuss risks and harms — including solutions. Let me be more specific.

In my forthcoming book, I highlight some of the amazing work that’s already been done by professional associations like the Association for Computing Machinery (ACM), the Institute of Electrical and Electronics Engineers (IEEE), the International Organization for Standardization (ISO), and UL. These organizations have labored to create detailed international standards for AI and ML development, and they possess enormous sway in professional circles as almost all the world’s leading technology companies and their employees have some sort of membership in these professional organizations, or at least work closely with them to create international standards in various technology fields. But more could be done to help these and other stakeholders coordinate AI governance efforts at a global scale.

Toward that end, Gary Marchant and Wendell Wallach have proposed the formation of what they call governance coordinating committees (GCCs) to address this problem. GCCs would help coordinate technological governance efforts among governments, industry, civil society organizations, and other interested stakeholders in fast-moving emerging technology sectors, including AI and robotics. Because “no single entity is capable of fully governing any of these multifaceted and rapidly developing fields and the innovative tools and techniques they produce,” they suggest that GCCs could act as a sort of “issue manager” or “orchestra conductor” that would “attempt to harmonize and integrate the various governance approaches that have been implemented or proposed.” They have also called for the formation of an International Congress for the Governance of AI as “a first step in multistakeholder engagement over the challenges arising from these new technological fields.” This could be a voluntary, multilateral, consensus-driven process that might represent a good step toward addressing some of the concerns that Clark raises.

30/ Most people working on AI massively discount how big of a deal human culture is for the tech development story. They are aware the world is full of growing economic inequality, yet are very surprised when people don’t welcome new inequality-increasing capabilities with joy.

Is this really true? I’m not so sure. First, I’ll ignore the implicit generalization that AI technologies entail “inequality-increasing capabilities” when a good case could be made that they will lead to the exact opposite result, and are already doing so in many fields. But I’m not so sure the broader claim is accurate. “Most people” working on AI are constantly talking about how human culture affects tech development and could influence policy. Many of the AI governance frameworks that I mentioned above are heavily focused on addressing what IEEE calls “ethically aligned design,” which includes a wide range of values and human rights concerns. ACM, ISO, and many other orgs have followed suit and developed frameworks to address privacy, security, safety, and discrimination concerns. Years’ worth of research and writing went into these efforts. I think that signifies the level of interest in getting out ahead of these issues.

31/ People don’t take guillotines seriously. Historically, when a tiny group gains a huge amount of power and makes life-altering decisions for a vast number of people, the minority gets actually, for real, killed. People feel like this can’t happen anymore.

So, in this analogy, are AI systems the guillotines hanging above our necks with a small mob ready to let them fall? Is that what we’re supposed to take away from this?? If so, we’re back to Bond bad guy characterizations that aren’t helpful. And let’s be clear that the most frightening and effective killing machines in history have been used by large-scale government law enforcement and military bodies/operations, not by private innovators. That being said, there are always valid concerns about a small group of people making major decisions that impact society writ large.

32/ IP and antitrust laws actively disincentivize companies from coordinating on socially-useful joint projects. The system we’re in has counter-incentives for cooperation.

A very good point worthy of further exploration. The remedy to this could be some sort of safe harbor carve-out from potential IP or antitrust-related liability, but I don’t have enough experience in either field to craft that language. But I do know from my past research in both areas that we’ve used such approaches in the past to address the sort of problems that Clark raises.

34/ Many people developing advanced AI systems feel they’re in a race with one another. Half of these people are desperately trying to change the race dynamics to stop the race. Some people are just privately trying to win.

35/ In AI, like in any field, most of the people who hold power are people who have been very good at winning a bunch of races. It’s hard for these people to not want to race and they privately think they should win the race.

I’m not sure this is entirely bad. I mean, we want innovators competing aggressively, right? The question is, what races are not worth encouraging? If everyone is racing to build a doomsday device to wipe out humanity, it’s easy to say STOP! But what does that mean for dual-use, general-purpose technologies like AI? And we already see this unfolding today over the issue of “killer robots.” Certainly, we do not want lethal autonomous weapons systems that pose a risk to the future of humanity.

On the other hand, we cannot put our heads in the sand and pretend that some of our potential adversaries won’t be pursuing some of those capabilities. It would be irresponsible for American policymakers to call for US-based developers to completely abandon all research in this space while other countries are advancing their capabilities here. In other words, we are in a technological race, and we must keep running in that race. Once again, we’ve been here before with countless other technologies, so my general critique applies. This is all just a replay of the debates we’ve already had about nuclear weapons — which remain the far greater existential threat to humanity today than AI. (I discuss these trade-offs in much greater detail in my draft chapter on “Existential Risks & Global Governance Issues around AI & Robotics.”)

46/ If you have access to decent compute, then you get to see the sorts of models that will be everywhere in 3–5 years, and this gives you a crazy information asymmetry advantage relative to everyone without a big computer.

58/ Richard Sutton’s The Bitter Lesson is one of the best articulations of why huge chunks of research are destined to be irrelevant as a consequence of scale. This makes people super mad, but also seems like a real phenomenon.

I think Clark makes some interesting points in tweets #46 and #58, but I will just leave it to others to judge for themselves after they read the pushback to Sutton’s thesis. This excellent article by Kevin Vu offers a nice overview of the different dimensions of that debate. I’d love to learn more about these issues if others could help direct me to other relevant literature.

63/ Code models are going to change some of the game theory of cyber offense/defense dynamics and the capabilities are going to cause real problems.

73/ ‘street-level AI ‘ has already begun to change the nature of military conflict. It started in ~2015-ish, but in Ukraine this year we saw troops pair $10–20k drones with 1950s grenades and 3D-printed aero-fins, targeted via vision models trained to spot soldiers in camo.

74/ Seriously, the above point is worth belaboring — for certain types.of conflict cheap robots and a bit of AI has drastically reduced cost and increased time-efficiency. My 5min $20k drone and $500 grenade and $20 fin and 2-person team destroy your $2–4m tank and associated people

75/ Malware is bad now but will be extremely bad in the future due to intersection of RL + code models + ransomware economic incentives. That train is probably 1–2 years away based on lag of open source replication of existing private models, but it’s on the tracks.

76/ Deepfakes have mostly been a porn thing, but we’ve had a few cases in politics (eg Gabon a few years ago, and some stuff in Ukraine). Deepfakes on smartphones is gonna be a thing soon — models appear and get miniaturized and open-sourced then made easy to use. Proliferation++

In tweets 73–76 as well as #63, Clark highlights many legitimate governance problems that deserve a lot more attention. A legal literature has already started developing around AI malware, but it’s primarily focused on remedies for individuals who have had their privacy, reputation, or incomes harmed in some fashion. But the national security / law enforcement issues here are a different can of worms, and we don’t have nearly as much good thinking going into countering those problems. However, just recently I saw that the Congressional Research Service put out an updated report on “Deep Fakes and National Security” that is worth reading. It’s short but concludes with a good list of policy questions for Congress to consider. The biggest problem here is the one that Clark alludes to: Open-source deepfakes and other types of algorithmic attacks are going to proliferate and become a very difficult problem to address.

77/ Surveillance has already been radically changed by AI. Even public open source models (e.g YOLO) are perfectly usable, downloadable, and free. This means you can track people. But that’s not the interesting part…

78/ If you see a person a bit you can learn to identify them _even with some outfit change_ and then you can pick them up from other cameras in diff lighting and angles. The models used for this stuff are getting better at a n-2yr basis vs big models due to needing 30fps+ inference.

79/ Trivia: the original developer of YOLO ceased developing it after V3 [2018: arxiv.org/abs/1804.02767] due to reasons outlined below — — yolo (now v7) has subsequently been pushed forward by Taiwan/Russia/China. Some things are kind of locked-in… [Amazing paper overall btw!]

Agree on the general points Clark makes in these concluding 3 tweets, but I have not yet read the paper he cites. The broader point here tracks my previous response (as well as my response to Tweet #20): Open sourced and inexpensive AI tools will make these developments harder to police/remedy. We know from our experience with the Internet more generally that while democratizing access to generative technologies can have many wonderful benefits, it’s the troubling corner cases that cause so much policy angst. Ubiquitous mobile connectivity and social platforms, for example, have been a tremendous societal blessing, but cheaper and more accessible embedded features and functions (camera, sensors, instantaneous sharing options) mean that a handful of bad actors can cause an inordinate amount of trouble on social sharing sites. This is the fundamental governance challenge for the Internet and social media today, and AI capabilities will exacerbate these problems. There are no governance quick-fixes here. For further discussion on this, see my recent AEI study, “Governing Emerging Technology in an Age of Policy Fragmentation and Disequilibrium.”
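To put a finer point on how low the barrier is, here is a rough sketch of the kind of off-the-shelf person detection Clark describes in tweet #77. It assumes the publicly released ultralytics/yolov5 model loaded via PyTorch Hub and a hypothetical local image file (“street.jpg” is my placeholder); treat it as an illustration of accessibility, not a description of any particular deployed system.

```python
# Illustrative sketch: person detection with a free, openly downloadable
# model. Assumes the public ultralytics/yolov5 PyTorch Hub release and a
# hypothetical local image, "street.jpg".
import torch

# Downloads pretrained weights on first run; no training or special
# hardware is required.
model = torch.hub.load("ultralytics/yolov5", "yolov5s", pretrained=True)

results = model("street.jpg")          # run inference on one image
detections = results.pandas().xyxy[0]  # bounding boxes as a DataFrame

# Keep only detections labeled "person" above a confidence threshold.
people = detections[(detections["name"] == "person") &
                    (detections["confidence"] > 0.5)]
print(f"People detected: {len(people)}")
```

A dozen lines of freely available code is obviously not the multi-camera re-identification pipeline Clark describes, but it illustrates the underlying point: open-source, inexpensive tools are already good enough to be hard to police or contain.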

In closing, I again want to thank Jack Clark for his interesting and informative tweetstorm. I hope he writes it up as a longer paper so it’s easier for me to cite than 79 separate tweets!

____________________

Related Work from Adam Thierer on AI & Robotics



Adam Thierer

Analyst covering the intersection of emerging tech & public policy. Specializes in innovation & tech governance. https://www.rstreet.org/people/adam-thierer