OpenAtom 5: National AI and Cybersafety Agency

Kevin O'Toole
AI: Purpose Driven Policy
4 min read · May 21, 2024

It’s Time to Get Organized

If AI has implications as broad and fundamental as nuclear fission, then we must decisively intervene to harness and guide its development. The obvious question is “How?” How do we choose just ends? How do we stare down the harsh realities and risks we face without being guided by fear? How do we ensure that we unlock rather than thwart the civilian and economic potential of these capabilities?

The broad scope of AI cuts across the government’s national security, foreign policy, and domestic responsibilities. This issue is already apparent as competing departments have issued AI guidance and rules.

Biden’s AI executive order creates a “White House AI Council.” The AI Council is chaired by “The Assistant to the President and Deputy Chief of Staff for Policy” and names 30 defined council members with the option to add more. All of the members are very senior — the cast includes cabinet secretaries and the Chairman of the Joint Chiefs — but the list serves mainly to highlight the urgent need for better organization. A forum this large and this senior cannot command the focus needed to drive substantive AI change. Indeed, it risks creating the illusion of coherent government action and mistaking bureaucratic activity for national progress.

This committee-style approach will, at best, result in ham-fisted regulation rather than true leadership. At worst, it will create confusion that provides cover for large companies to do as they please.

Copy What Works

The FAA and NTSB may offer a path for tackling AI. Whatever Boeing’s recent troubles, America’s flight safety apparatus is remarkably effective. It is highly trusted and does an exceptionally good job of balancing innovation with safety. It operates with the force of law and a staff capable of engaging issues both large and small. The NTSB investigates every aviation incident to reconstruct what happened and determine what lessons can be drawn. In partnership with the FAA, it can promulgate rules, including the immediate grounding of entire fleets of aircraft when it identifies a systemic risk.

A college classmate of mine died tragically in a private plane crash: two people in a small plane went down in New Jersey woodland shortly after takeoff. In most industries, such an event would draw no notice and certainly no government involvement. Not so in aviation. The NTSB conducted an investigation, including examining the recovered engine in search of a mechanical failure that might have caused the crash. (None was found.)

No other industry operates with that level of public/private partnership or has delivered such a strong safety record. This partnership neither stifles aircraft innovation nor places undue burdens on those seeking to become pilots. It works, and it works very well.

The other reality is that AI will be so intertwined with cybersecurity and privacy as to render the distinction moot. The country’s ongoing cybersecurity challenges are substantial enough to warrant heavy government engagement, so it makes sense to take on both issues in one motion.

An integrated agency, coupled with the disciplines one finds in the FAA/NTSB, should be our model for building the nation’s AI governance. The FAA reports to the Department of Transportation, and the NTSB is an independent federal agency; both arrangements are logical for transportation safety. But the cross-department requirements of AI suggest the need for a stand-alone agency that acts as a unifier across the government. The Department of Homeland Security and NASA are better models in this regard.

Meet NACA

To that end, we should build “The National AI and Cybersafety Agency” or NACA. NACA would work with Congress, the military, federal agencies, and the private sector to drive radical improvements in the nation’s AI governance and capabilities. It should be treated with the same level of Congressional oversight as any other top agency and, like the Department of Homeland Security, would need to incorporate both classified and unclassified oversight.

NACA must be a catalyst for delivering a powerful, strategically coherent, and ethically sound AI industry. To do so, it must embrace the duality of a safety and innovation mandate. It cannot be in the business of thwarting AI development. We must also take lessons from areas where the government was less successful in its regulatory and development efforts.

The Nuclear Regulatory Commission must be part of this comparison. As noted earlier, the country has a paltry 54 nuclear power plants providing just 18% of its power needs. It still does not have a plan for managing nuclear waste. This is not just due to the complexity of the topic. The “R” gives the problem away: the agency fundamentally saw its job as regulation rather than advancement. Given the risks of fission, if the NRC was going to fail, that was the right way to fail. But the country has also failed to develop the most carbon-neutral energy source currently available.

NASA is another useful reference point. While the agency became a political plaything sustained largely by Cold War space requirements, its record of balancing innovation and risk is much better. Every member of Gen X can tell you where they were when the Challenger exploded, and the later loss of Columbia was a regrettable tragedy. But NASA banked the lessons of its losses and delivered ongoing innovation on a very small budget. Still, the nation is now seeing the power of private industry in SpaceX and innovators like Blue Origin. NASA would have done better to engage private industry’s full muscle rather than merely outsourcing component and system development.

More in line with the FAA/NTSB than the NASA model, NACA must be an enabler of safe, private AI development rather than the government undertaking the development and operation of AI platforms in the same way NASA ran space exploration for 50 years.

To meet the moment created by the AI discontinuity, the government must get properly organized. This new agency, NACA, must be imbued with significant authority, as described in OpenAtom 6: Muscular AI Governance.


I write about the need to develop national purpose and governance related to Artificial Intelligence.