OpenAtom 4: Charting Our National AI Course

Kevin O'Toole
AI: Purpose Driven Policy
5 min readMay 14, 2024

Opportunities and Dangers to Rally a Nation

National policy is often an exercise in duality. We challenge the Federal Reserve to keep inflation low while maximizing employment. We ask the FBI to keep us safe without trampling our rights. We ask the NSA to hoover up all of the world’s communications but not our own. We ask the FDA to facilitate drug development without compromising safety.

AI will be no different. Indeed, this duality will be even more extreme.

In the nuclear era, the country did a great job of managing nuclear risk but a poor job of unleashing nuclear opportunities. Most notably, the country was ineffective in developing nuclear power. When I shared the premise of this work with a friend, he said the world may be a better place if 10M centrifuges had been spinning in private hands since the 1950s. I don’t think he’s right, but his point is well taken. Since 1948, the US Navy has built 210 nuclear powered ships with over 500 reactors. There are currently only 54 nuclear power plants in the country and the government has still not decided how to handle civilian nuclear waste.

Successfully managing AI to its full potential will require a much bigger focus on achieving success rather than just containing risks.

Defining Success

President Biden’s expansive AI Executive Order released on October 30, 2023 provides a litany of good objectives and desires. It refers to “leadership” several times and seeks to flex existing mechanisms for research, etc. to move AI development along. It plants important flags related to safety and industry responsibility. It is worth reading. You can find the summary fact sheet here and the full text of the order here.

Unfortunately, it is crafted in the form of a modern State of the Union address. A litany of proposals and plans but no coherent objective or national aspiration. Nowhere in the full document do we find the words “first” or “fastest.” The word “best” is used 25 times, though only in the context of formulating “best practices.” Best practices are laudable but they are found by actually being the best at something … and the executive order doesn’t say what that something is.

The order is in the language of regulation rather than the language of national purpose. It spurs an impressive array of government entities into some form of action, but it does not galvanize the nation or create something that the typical American citizen might even take notice of. It lacks any specific goal that the nation can rally behind.

Perhaps something like this would be helpful:

“The United States will lead the world in Artificial Intelligence across both civilian and military domains.

The United States will be the first country to deploy a national AI cyber protection capability that covers both government and civilian infrastructures by 2030.

The United States will provide the fastest, medically specialized AI computing and modeling infrastructure in the world to cut cancer mortality 50% by 2035.

To meet our objectives, we will ensure that 100 septillion computing cycles are available to every university by 2027. The NSF will provide annual funding for 1M students in mathematics, computer engineering, data sciences and AI ethics. We will link corporate R&D support with a national co-op program that quickly moves these new students into productive employment.”

Smarter people can develop better statements of purpose and actions, but I hope it makes the point. We need to tell the country what we’re seeking to achieve and the investments we will put behind it. Absent that, we’re asking the nation to engage in an incredibly complex topic with no clear reason as to why.

Five areas, in particular, seem ready to deliver meaningful improvements and should be pursued with vigor:

  1. Cybersecurity
  2. Education
  3. Medicine
  4. Energy Efficiency
  5. Military Applications

The Risks to Be Managed

The Center for AI Safety has been created to help identify and prevent societal level AI issues. This group of researchers and advocates are doing important work attempting to create a more secure AI future. They have written an excellent paper on the major risks of AI that should be read by anyone interested in this topic. (https://arxiv.org/pdf/2306.12001.pdf)

Broadly they identify four risks:

  1. Malicious Use
  2. AI Race
  3. Organizational Risks
  4. Rogue AIs

To their point on malicious use, the nation must also establish plans for foreign, deliberate AI intrusions into our critical infrastructure. All military applications — and perhaps most of all those systems engaged in scenario planning with access to large data stores — must be protected. Civilian infrastructure must be considered. Not just utility infrastructure but those assets also that can be used create industrial or biochemical tragedies. As we’ve seen in the recent past, a train derailment in a populated area can have severe consequences.

One could say that I am calling for an AI arms race. That will happen regardless. The science is too profound. The promise and perils are too high for anything but a race. We should be the first to have an outstretched hand — by all means let’s create an AI version of the International Space Station — but to say that race can be avoided is wishful thinking. With the race already starting our only viable goal is to win the race.

The intersection of Rogue AIs and Organization Risk deserves particular attention.

AI systems have demonstrated the ability to develop emergent goals. This is either due to goal drift or the AI setting an intermediate goal in pursuit of its assigned goal. AI systems have also demonstrated that they can learn the power of deception as a tactic to achieve a goal. Emergent goals plus deception results in a world where for the first time we must consider the possibility that our systems are actively lying to us.

I mentioned the risks associated with complexity in OpenAtom 2. The aggregate complexity of this situation — the sheer scale, the young science, the pace, the money, and the organizational dynamics — combined with AI’s ability to write its own code in pursuit of its own goals, make it nearly inevitable that a rogue AI will happen.

The country must establish a plan and approach for what it will do when a rogue AI takes root. As in so many things, the “plan” for this will probably be useless when a real situation arises but the “planning” will be invaluable. That a rogue AI must never gain access to a Weapon of Mass Destruction is evident on its face. Significant “what if” planning and rules to prevent this from happening must be developed. Cue the game theorists and researchers. It is time to figure out how we will play out moves against a rogue AI.

It is important that we articulate these risks and explain to the country what is necessary to address these issues.

A Moment For Leadership

A well formed set of priorities, highlighting both the societal potential and unique perils of the coming AI revolution, is necessary to engage the country’s minds. This engagement is critical if we are to seize the moment and shape the global evolution of AI.

Nothing should be construed to say that the US is or should be “in charge.” The world is more multipolar than during the cold war or the recent past. Lots of countries are drafting AI legislation and making their own bold AI investments. We should engage with them and learn from their efforts.

But the reality is that the technical, application and governance leaders will define how AI actually plays out in history. As we saw with the nuclear age, the world was far better off having secular democracies set the standard for nuclear behavior than totalitarian states.

--

--

Kevin O'Toole
AI: Purpose Driven Policy

I write about the need to develop national purpose and governance related to Artificial Intelligence.