AI Regulations: A Healthy Tension with Innovation or Stifling Bureaucracy?

Rebecca K. Lively
Defense Unicorns
Dec 5, 2023

Navigating the complexities of Artificial Intelligence (AI) regulation is a nuanced and challenging endeavor. Key legislative and executive efforts, such as the Artificial Intelligence in Government Act of 2020, the Advancing American AI Act, and President Biden’s Executive Order on the Safe, Secure, and Trustworthy Development and Use of AI, illustrate the ongoing journey to balance innovation with responsible governance. These initiatives aim to harness AI’s potential to transform government operations and national security while addressing critical issues like ethical AI use, data privacy, and algorithmic bias.

The Office of Management and Budget’s (OMB) proposed “Advancing Governance, Innovation, and Risk Management for Agency Use of Artificial Intelligence” is the latest attempt to strike that balance. In a risk-averse culture, however, regulation often becomes an excuse for a heavy-handed approach to new technologies that stifles innovation. Does OMB’s draft policy get it right? If you’re interested, comments are due on December 5, 2023. Today, on behalf of my company, Defense Unicorns, I submitted the following comments:

Public Comments on Proposed Policy “Advancing Governance, Innovation, and Risk Management for Agency Use of Artificial Intelligence”

I am writing on behalf of Defense Unicorns, a leading provider of platform technologies that empower rapid and reliable delivery of capabilities across diverse mission areas. Our team comprises dedicated innovators, software engineers, and veterans with deep experience in technology programs within the Department of Defense and the broader federal market. One of our product offerings, AI for National Security (AI4NS), provides a suite of generative AI capabilities, deployable to classified or air-gapped environments, that improve understanding of complex mission data and accelerate decision-making. Like all Defense Unicorns products, AI4NS is built on a foundation of open-source software to ensure flexibility while avoiding vendor lock-in. AI4NS is one of many products built by small companies that will be affected by federal policy on AI.

While the proposed policy states an objective to “seize the opportunities AI presents,” we believe its near-exclusive focus on risk prevention will contribute to a culture of inaction and indecision that risks harm to the security and advancement of our nation. As a company deeply invested in the success of government technology programs, Defense Unicorns offers the following recommendations to better align the proposed policy with its own stated goals and the goals expressed by the Artificial Intelligence in Government Act of 2020, the Advancing American AI Act, and President Biden’s Executive Order of October 30, 2023:

Include Examples of AI Applications that are Not Rights or Safety Impacting: While the policy justifiably devotes substantial attention to limits on AI applications that are rights-impacting and safety-impacting, we encourage OMB to also provide clear examples of AI applications that are not considered rights-impacting or safety-impacting. By delineating an illustrative list of areas where AI can be used with fewer regulatory constraints, the policy can foster innovation and lower barriers to entry for small companies. Examples could include:

- Administrative Automation: AI designed for routine administrative tasks such as scheduling, document management, and basic data entry. These applications typically do not involve decision-making that affects individual rights or public safety and often have a positive impact on job satisfaction.

- Predictive Maintenance in Non-Critical Systems: AI designed for predictive maintenance of equipment or systems where failure neither poses a direct threat to safety nor causes significant operational disruption.

- Assistance with Technical Troubleshooting: AI tools designed to assist skilled workers with tasks such as debugging software, interpreting complex equations, or drafting technical documents.

- Drafting or Research Assistance: AI designed to analyze internal data and policies, helping employees find relevant documents and information across government holdings.

Take Steps to Limit Over-Regulation and Reduce Impacts on Small Businesses: Regulation has a disproportionate impact on small businesses, which lack the large legal and policy teams necessary to interpret and implement regulatory requirements. In an area like AI, with substantial commercial applications, heavy regulation also drives businesses to opt out of the federal market entirely, leaving the government struggling to access the technology it needs. To limit this impact, the policy should include steps to protect small businesses and encourage them to work with the government, such as a tiered compliance framework, public-private partnerships, and expedited review processes for small businesses.

Limit Scope of Procurement Recommendations: On page 22, the policy encourages agencies to include specific risk management requirements in contracts for generative AI. While such restrictions are likely appropriate in situations involving rights and safety, or for research and development of new AI technologies, they are likely to be costly and burdensome when applied to commercially available technology used for low-risk activities. We suggest limiting the scope of this recommendation, or making clear that the included guidelines should be carefully tailored to the overall risk of the AI application, with an aim toward minimizing restrictions on low-risk systems.

Distinguish Cybersecurity Risk versus AI Risk to Mission: Cybersecurity risks to an information system hosting an AI capability are distinct from risks related to the AI capability itself. While the proposed policy does a good job of making this distinction, we recommend it be more explicit. Specifically, we recommend that AI risk to mission be handled primarily by those responsible for that mission.

Determining whether an information system hosting AI capabilities is “sufficiently secure” in the traditional cybersecurity sense is very different from determining whether the AI capabilities it provides are “sufficiently accurate” for the mission at hand. Policies like FedRAMP assign responsibility for how a system is secured and dictate who may accept cybersecurity risks when authorizing its use. That same authorizing official is likely not best suited to assess or accept the AI risks to mission performance.

Thus, we recommend the policy explicitly distinguish between cybersecurity and AI as unique domains with different types of risks to understand and accept. Then, using a blend of AI education and risk mitigation techniques, mission owners can make appropriate risk decisions about when to augment their capabilities with AI.

The Biggest Risk May Be Too Much Caution: The tendency of federal government decision-making toward risk aversion, especially in the context of AI, can inadvertently create strategic disadvantages and introduce some of the very risks it seeks to avoid. The US Space Force’s recent decision to temporarily ban the use of generative AI tools exemplifies this cautious approach. Such caution risks substantial unintended consequences, including:

- Adversaries Gaining a Technological Edge: When federal decision-making is overly cautious and slow in adopting new AI technologies, it risks falling behind adversaries who may be more willing to rapidly integrate and experiment with AI. This technological gap can lead to strategic disadvantages in critical areas such as intelligence gathering, cybersecurity, and automated defense systems.

- Unsanctioned Use of Unauthorized Systems: Excessive caution and slow approval processes for AI applications (or outright bans) may drive personnel to use unauthorized AI systems, whether for personal convenience or to keep pace with operational demands and the fast-evolving nature of AI. These “shadow IT” systems introduce significant security risks, potentially exposing sensitive data or leaving critical systems open to breach or manipulation.

- Stifling Personnel Initiative and Morale: Overly restrictive policies can dampen the initiative and morale of personnel who recognize AI’s potential to enhance their work but are hindered by bureaucratic processes. This can further exacerbate the federal government’s difficulties in recruiting and retaining skilled personnel.

To mitigate these risks, the federal government must strike a balance between caution and innovation. This means adopting a more agile approach to AI integration, in which risks are carefully managed without completely hindering the exploration and adoption of AI technologies. Establishing clear guidelines and fast-track processes for the safe and secure use of AI can also encourage adherence to approved systems while keeping pace with technological advancements. We recommend the policy more explicitly adopt such a balanced approach.

Defense Unicorns is eager to contribute to this discourse and welcomes further dialogue to refine these recommendations. We are optimistic that together, we can enhance the proposed AI policy to better serve all stakeholders.

Thank you for considering our comments on the proposed policy changes.

Sincerely,

REBECCA K. LIVELY

Defense Unicorns
