OpenAtom 6: Muscular AI Governance

Kevin O'Toole
AI: Purpose Driven Policy
May 21, 2024

Societally Sound AI Requires Hard Rules

As discussed in OpenAtom 5, the government should build a “National AI and Cybersafety Agency” (NACA) to drive the governance and development of AI. This governance must embrace a dual mandate: ensuring the US has the world’s best AI capabilities while ensuring the industry operates in a safe and ethically sound manner. The FAA/NTSB model, coupled with lessons from the NRC and NASA, is an excellent starting point for a viable approach.

Six key pillars should guide NACA.

1 — Hard rules

The FAA and NTSB are not big on soft guidance. “Thou shalt” and “thou shalt not” rule the day. While they do issue advisory notices and warnings, those are delivered against a backdrop of very hard rules.

It is tempting with AI to avoid rules so as not to stifle innovation or profitability. I’m sure Boeing could get planes to market faster if we removed various safety hurdles. No doubt airlines would be more profitable if we were willing to have a dozen planes crash each year. But we don’t allow that with aircraft, and we treat any negative impact on the safety record as a matter of great urgency and a critical learning opportunity. The same must apply to AI and cybersecurity.

2 — Deep proactive certification

We don’t allow just anyone to strap an engine to something with wings and go driving down the runway. The FAA maintains very strict definitions of different types of planes, what certification is needed to operate them, and how they must behave in the airspace.

This has not thwarted the nation’s private plane enthusiasts nor has it prevented the US from developing the best aerospace capabilities in the world. To the contrary, this regimen brings confidence and invites investment. It’s also why Boeing’s recent issues have shaken the industry to its foundations.

NACA should operate the same way when it comes to AI. It must quickly classify the types and scale of AI that require oversight and build a proactive application, inspection, and certification program that both facilitates progress and enforces safety standards.
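To make the classification idea concrete, here is a minimal sketch of how a tiered oversight scheme might be expressed in code. Everything in it is hypothetical: NACA does not exist, and the tier names, compute thresholds, and user-count cutoffs are invented for illustration.

```python
from dataclasses import dataclass
from enum import Enum

# Hypothetical illustration: these tiers, thresholds, and field names
# are invented for the sketch and do not reflect any real regulation.

class AITier(Enum):
    MINIMAL = "minimal"    # narrow internal tooling
    GENERAL = "general"    # broadly deployed consumer systems
    CRITICAL = "critical"  # frontier models / critical infrastructure

@dataclass
class AISystem:
    name: str
    training_compute_flops: float  # total training compute
    user_count: int                # size of the deployed audience

def classify(system: AISystem) -> AITier:
    """Assign an oversight tier from scale signals (invented cutoffs)."""
    if system.training_compute_flops >= 1e26 or system.user_count >= 10_000_000:
        return AITier.CRITICAL
    if system.user_count >= 100_000:
        return AITier.GENERAL
    return AITier.MINIMAL

def certification_required(tier: AITier) -> bool:
    """Only the lowest tier would escape proactive certification."""
    return tier is not AITier.MINIMAL
```

The point of the sketch is the shape of the scheme, not the numbers: scale triggers a tier, and the tier determines how much proactive inspection applies before deployment.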

3 — Intrusive failure analysis

Imagine what our nation’s cybersecurity posture would be if cyber breaches were investigated with the same determination as plane crashes.

In the airline industry, this engagement is not optional. It wasn’t up to Boeing or Alaska Airlines to call the NTSB for help investigating the door plug failure. Nor did my classmate’s family need to call the NTSB for an investigation when his private plane went down. Plane problems get intrusively investigated. That is simply the way it works.

NACA should operate in the same fashion. When an AI or cyber incident of any size happens, the government needs to be front and center: putting the pieces back together, assembling the lessons, and issuing directives to industry on how to avoid the same problems.

4 — Mandatory compliance

On the back of failure analysis or proactive rulemaking, the FAA/NTSB are able to issue mandatory directives. Airlines may be given three months to correct an issue in their fleet, or 90 days to conduct inspections. They must do this. It is not optional.

Similar mechanisms must be put in place for AI and cybersecurity. When failures occur or new technology emerges, the government must be able to say, “You have 90 days to demonstrate you have adopted this new measure.”
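As a sketch of what such a directive might look like as a record, assuming an invented schema in which only the 90-day default mirrors the example above:

```python
from dataclasses import dataclass
from datetime import date, timedelta

# Hypothetical sketch of a mandatory directive record; the schema and
# identifiers are invented for illustration.

@dataclass
class ComplianceDirective:
    directive_id: str
    summary: str
    issued: date
    deadline_days: int = 90

    @property
    def deadline(self) -> date:
        return self.issued + timedelta(days=self.deadline_days)

    def is_overdue(self, today: date, attested: bool) -> bool:
        """Non-optional: past the deadline without attestation is a violation."""
        return not attested and today > self.deadline

d = ComplianceDirective("AD-2024-01", "Patch model-serving stack", date(2024, 5, 21))
print(d.deadline)                                      # 2024-08-19
print(d.is_overdue(date(2024, 9, 1), attested=False))  # True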

5 — Shut down authority

The ultimate exercise of aviation authority is the grounding order. The 737 MAX MCAS failure was an extreme case, but it makes the point well. Once the FAA determined the MAX had a dangerous flaw, it grounded the entire 737 MAX fleet. The fix took years and created system-wide difficulties for all involved. The FAA/NTSB never wavered in ensuring the problem was fixed before the MAX was allowed to fly again.

These cases are rare, but they do happen, and they bring significant short-term pain. Travel plans are disrupted while companies lose revenue and incur significant expenses. No one likes a grounding order, but it is hugely important to industry discipline.

Imagine if, after an AI breach, the government could investigate and say, “Unless you have your AI chip farm on software rev X or higher, you must shut down your service until the upgrade is complete.” To any company of any size, from the smallest start-up to the largest tech company.
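A toy version gate makes the mechanism plain, assuming an invented integer revision scheme: below the mandated revision, the service stays offline.

```python
# Hypothetical sketch: gate operation on a minimum software revision,
# as in the "rev X or higher" example above. The names and integer
# rev format are invented.

MINIMUM_APPROVED_REV = 42  # "rev X", set by the regulator after an incident

def may_operate(deployed_rev: int) -> bool:
    """A chip farm below the mandated revision must stay offline."""
    return deployed_rev >= MINIMUM_APPROVED_REV

for rev in (41, 42, 43):
    print(f"rev {rev}:", "may operate" if may_operate(rev) else "must shut down")
```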

This is how it works in the airline industry, and it will have to work this way to harden the nation against the coming AI storm. The first time a large social network, cloud gaming service, or financial services firm is forced to go offline for a period of days or weeks, it will be chaos. But after that chaos, AI and cybersafety discipline will improve across every sector of the economy.

6 — Controlled Entry for AI Models and Computing Farms

One cannot simply start an airline. The airline industry operates with “controlled entry” discipline. You cannot start your airline until you prove to the government that you can operate safely and are ready to share the skies and airports with others. Similarly, you cannot introduce a new type of aircraft until you have gone through rigorous certification.

These same approaches should be applied to the development and launch of AI models and large-scale computing farms. This can’t be about self-regulation or “giving Washington DC notice.” Rather, until a company and product pass a strict safety regimen, they simply may not enter the market.
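In code terms, controlled entry is a pre-market gate: no launch until every stage of the regimen has passed. The stage names below are invented for illustration.

```python
# Hypothetical pre-market gate; stage names are invented.
REQUIRED_STAGES = ("application", "inspection", "safety_review")

def may_enter_market(passed_stages: set[str]) -> bool:
    """Controlled entry: no market access until the full regimen is complete."""
    return all(stage in passed_stages for stage in REQUIRED_STAGES)

assert not may_enter_market({"application"})
assert may_enter_market(set(REQUIRED_STAGES))
```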

Scalable Safety Requires Compliance

One can already hear the howls of protest that these rules will hamper the development of AI. Indeed, they will add a cycle-time penalty to the deployment of underlying AI infrastructure. Ultimately, that will be a good thing.

Corporations and individuals do not spend any time wondering if United Airlines is safe. Only the most paranoid corporate travelers actually count how many executives are sitting on a single commercial airliner. That level of comfort is necessary for the country to fully realize the benefits of AI development. The side benefit will be thwarting the ongoing hacking problems originating from China, Russia, North Korea, and elsewhere.

There is also the reality that government regulation tends to favor incumbents who have the scale and financial resources to comply with government oversight. Sadly, this is true. There aren’t a lot of small airlines or bespoke airplane manufacturers.

Aspects of AI governance will be similar, but mostly in those places where scale would have determined winners anyway. It is unlikely that there will be lots of small AI cloud providers, chip manufacturers, or truly bespoke Generative AI models. Entrepreneurs will utilize these scaled platforms and, in doing so, will draft off the compliance work undertaken by the large companies. This will actually help the entrepreneurs because they will have a higher degree of confidence that the underlying platforms are operated in a sound fashion.
