OpenAtom 3: AI is a National Imperative

Kevin O'Toole
AI: Purpose Driven Policy
8 min read · May 7, 2024

Democracies Must Lead

In facing the question of AI, we must get our “why” right. If we understand “why” we must lean into AI as a society, that understanding will guide us to the right decisions on “what” we should do and “how” we should approach it. The why matters.

Quite simply: Leading on AI is critical to the safety and continued prosperity of secular democracy and capitalism.

Given their druthers, nearly everyone would wish the nuclear era away. Scientific breakthroughs notwithstanding, the nuclear era has brought far greater curses than blessings. Each day humanity rolls the dice to see whether someone crazy or malevolent will trigger a nuclear exchange, with its uncertain but no doubt terrible and escalating outcomes.

We can only be thankful that when the nuclear age was born it was secular democracies that had the upper hand. The democracies set the pace and shaped not just the technology but global policy approaches — indeed the morality — of managing the nuclear genie. Those shaping nuclear policies were imperfect people and they made many poor choices as they felt their way through the nuclear darkness.

From the question of whether Hiroshima was a moral choice; to considering a nuclear first strike on the Soviet Union; to the Reagan build-up; to failed attempts to contain proliferation; to ineffective development of nuclear power as a green alternative, the West has been terribly and tragically imperfect. There is an old joke that God watches out for children, drunks and the United States of America. Amen.

For all the imperfections, we cannot lose sight of the fact that the world would have walked a far darker path if the Nazis, Imperial Japan or the Soviet Union had the upper hand in defining nuclear technology and the approach to using it. History confirms this and current events in Ukraine and elsewhere remind us of that truth.

As much as we would wish away the nuclear age, it is not possible, and it never was.

Nor is it possible to wish away the risks and dangers of the coming AI age. It cannot be stopped. In truth, it cannot really be slowed. It’s here. The question is about our response as a society.

AI Optimism is Imperative

It’s important to note that I am not an AI Pessimist. Properly managed, AI will be a net gain for society. The fission comparison naturally — and importantly — leads one to consider military and tragic outcomes.

It’s easy to look back on the ripples of the Trinity explosion and think the story is complete and fraught. But the ripples continue, and fission’s cousin is fusion. Those ripples may well lead to fusion power and a cleaner future with limitless energy supplies for everyone in the world. Where nuclear fission power has always been a high-risk endeavor, scalable nuclear fusion power may be the answer to many of the world’s woes and usher in the next era of global prosperity. One can hope.

Ironically, AI appears poised to accelerate the arrival of nuclear fusion.

Similarly, the AI ripples are not all negative and scary. We will not have to wait a century for AI goodness. We are already seeing it. AI is already bringing — and will continue to bring — advances in medicine, science and industry. It will further democratize education and unleash productivity. Jobs will be displaced but even more will be created. Services will improve. Downtime due to service outages will be reduced. Society will be more efficient and agile, with market-driven economies re-channeling that efficiency into new growth.

Development of AI is an economic and human imperative. We just have to do it right.

The Courage to Lead

Far from stopping AI, we must treat leading the way in AI as a national imperative. On every level and in every domain. It is most important that we lead in the very places that make us most uncomfortable. If we do not, others with less noble intentions will.

The West, and the United States in particular, are indispensable in this conversation.

Looking back at Britain’s attempts to appease Hitler through disarmament and Chamberlain’s feckless deal at Munich, we now understand that they served only to embolden the Nazis. That well-intentioned weakness invited war. France’s fixation on the last war — brought to life in the Maginot Line — kept it from facing modern realities. The striking sacrifices of the British and Russian people and the safe isolation of the US industrial base were all that allowed the Allies to prevail against Nazi Germany.

The West must set the pace both technically and morally in the AI age.

It must start with a commitment to AI excellence that is at least equal to the commitment to nuclear leadership. Though fitful at times, the US maintained its nerve throughout the Cold War to ensure that we had the best weapons. The best submarines. The best detection systems. The best people. And the nuclear scale to ensure the lunacy of MAD worked. A parallel civilian nuclear regulatory regime not only kept the country safe but also helped define the global standards for nuclear safety.

We look back now and shake our heads about unnecessary panics over the “missile gap” and the “bomber gap.” We wonder at the self-serving nature of the defense industrial base. We’re unsure whether the nuclear overkill investments were rational or lunacy. We ask whether Reagan’s SDI initiative was brilliant or a boondoggle.

I would offer that it’s all totally irrelevant. In the long arc of the nuclear age those are minor perturbations, and all of them inspired the needed urgency to be the best. We are far better off building an AI lead than wringing our hands over whether we are pushing too hard.

In building that lead, our efforts must encompass governance and morality as much as — or perhaps more than — technical supremacy. In the later days of the nuclear age, the West found the courage to say that we would not be the first to use nuclear weapons. Of course, we still could use them first, but it was an important statement of national and moral policy.

Let us hope that we find similar moral positions on AI nearer to the start of this revolution.

Current AI Governance is Unacceptable

It is perhaps now useful to revisit the question at the beginning of the OpenAtom series:

How should the world have responded to nuclear development if, in the mid-1930s, the vast majority of nuclear innovation and investment was being driven without coherent government engagement?

Should we have let it continue unabated and driven towards unknown ends?

I would suggest not.

It is a fundamentally good thing that Boeing was not allowed to develop its own nuclear weapon. It is a good thing that GE was not allowed to randomly dot nuclear power plants about the countryside. It is a good thing no nuclear tests were conducted in the middle of Iowa farmlands simply because that was the cheapest place to do it.

It is a good thing that while profit motives delivered the best innovation to the country, they were not the driving force behind the moral and policy decisions that shaped the nuclear age. Had OpenAtom and related development efforts existed in the 1930s, one hopes that the government would have moved swiftly and decisively to harness and guide the situation.

The world of OpenAtom did not exist, but the world of OpenAI does. And it demands aggressive, far-reaching, and nationally coordinated governance and investment. Once again, this is not about yelling “stop!” but rather about ensuring that national policy, strategic realism, and Western morality guide AI development rather than simple profit motives.

An Uncoordinated Mess

Regulation in the United States is always a complicated beast. We have 50 states and a complex federal structure with two legislative chambers and myriad competing executive departments. This is further complicated by politics and by recourse to a court system that is inconsistent in its engagement with complex topics. Watching a US Senator demand to know which Google employees selected the responses to various searches does not inspire confidence.

And, of course, the government is nearly always behind the curve on technology. At least, the civilian side of it is.

One must applaud the fact that various government entities are not asleep on AI. The Biden administration has issued its AI policies. It is flexing the laws it does have, most notably the Defense Production Act, to begin compelling specific regulatory behaviors. Congress is considering multiple pieces of legislation. States and even cities are advancing AI laws focused on privacy, bias prevention and other protections. Many of these efforts are striking first, and hard, at critical morality issues.

But the situation is totally unacceptable. It is uncoordinated and creates conflicting policy guidelines that invite non-compliance. It lacks the force of law and the cleanliness of legal and regulatory boundaries. City-level AI law will be about as useful as Cleveland Heights, OH, once proclaiming itself a “nuclear-free zone” and posting signage to that effect.

Perhaps most critically, our government AI efforts lack a coherent, unified statement of purpose from an administration willing to stand up and galvanize the country. Comparisons to Kennedy’s moon mission are overused and clichéd, but perhaps appropriate to the moment. Kennedy didn’t make a passing reference to going to the moon or appoint a handful of agencies to make some progress. His administration made it a strategic priority and rallied the nation. At one point more than 4% of the federal budget was being channeled into the space program.

This is that sort of moment.

The governance vacuum is giving big tech companies room to run, and they are already racing for the fig leaf of “voluntary cooperation” and “self-governance.”

Can one imagine a nuclear governance regime rooted in “we promise to be safe”?

Voluntary cooperation is not sufficient and passive governance will not get the country (and the world) where it needs to be with AI. Relying on existing law, creatively applying the Defense Production Act and working within the confines of executive orders are not the vehicles for creating national policy. As we know from other topics, these tools are by their nature temporary and can be wiped away or left to wither by subsequent administrations.

National policy must be led by the executive branch and made permanent by Congress. The wisdom of the founders was to create a system in which it is hard for the government to do new things. This is a strength. It tends to keep the government on task and, in those moments when it does arrive at policy, allows the full strength of the nation to be brought behind a unified set of objectives. It is what allows major topics to remain the focus of the nation not for years but for generations.

It is time to wield that strength. To do the hard work of aligning on national policy and bringing the strength and vitality of the nation to bear on these AI priorities.

As with the nuclear age, the dynamism of capitalism is our core strength. It is what allowed the West to out-run Soviet Communism. It is the silent advantage that even now begins to put distance between the US and Xi’s increasingly totalitarian China.

But capitalism is not bred for moral decision making or crafting policy. It is the nation’s muscle, not the nation’s conscience. We must set our brains to work designing and defining our national objectives and philosophy as they relate to AI. We must end the era of OpenAtom and get on with creating the policies that can help navigate this new era and build structures that will endure for generations.
