AI regulation — Beyond rules to adaptive regulation

LaSalle Browne
16 min read · Jan 29, 2024


AI regulation, created by the author in Midjourney

Quick Note: This is Part I of a two-part series on AI regulation.

Introduction

Welcome to the age of AI, or as I like to think of it, the age of hysteria. AI is an issue legislators barely paid attention to a year ago. What a difference a year makes. There is now a heated global debate among governments over whether to restrict, or outright outlaw, specific uses of artificial intelligence technologies. As artificial intelligence capabilities accelerate, regulators face an urgent challenge: how to establish effective oversight that keeps pace without frustrating further advancement. The pace and direction of the economic change AI brings will depend on who leads the charge — China, the European Commission, or Congress. Regulation might restrict how businesses use AI to communicate directly with customers, or it might shield specific industries. Does that inhibit innovation or blunt the impact of disruptive technologies? It's not quite that easy. Numerous parties are involved, each with unique motivations and incentives that compel them to actively advocate for regulation. Take the following quote from an HBR article on regulation:

Given the potential scale of this disruption — as well as issues such as privacy, bias, and even national security — it’s reasonable for lawmakers to take notice. Think of Goethe’s poem “The Sorcerer’s Apprentice,” animated in Disney’s classic movie Fantasia, where the sorcerer returns to his workshop to find that his apprentice has unleashed forces that have quickly gone out of control, threatening to destroy everything in sight until the magician restores order. Many of those concerned about AI’s possible unintended consequences, including developers such as Altman, are looking to lawmakers to fill the sorcerer’s role. See HBR article on AI Regulation here.

However, relying solely on rigid rules invites disruption, because AI systems learn and adapt at a pace that exceeds legislative processes. Fostering accountable development in a world where technology and society move at different speeds requires a more nuanced approach.

Constructing a regulatory system that is both dynamic and adaptive requires a foundational framework capable of navigating the complexities of rapid technological advancement. This is where the strategic concept of pace layering becomes instrumental. Pace layering offers a nuanced perspective, acknowledging that different components of our society and technology progress at varying speeds. It forms the bedrock of an AI regulatory approach that is not just about implementing rules but about fostering an ecosystem where process, feedback, and technology converge harmoniously. By focusing less on extensive laws and more on targeted statutes and ordinances that tackle emergent issues, governance can synchronize with AI's fluid nature while safeguarding slower-changing societal bedrocks from disruption.

Central to this approach is the recognition that process and feedback mechanisms are as vital as the technological solutions themselves. By emphasizing these aspects, we can create a regulatory environment that adapts in tandem with AI's rapid development, ensuring that innovation is not stifled but channeled responsibly. Complementing this with innovative solutions attuned to AI's learning nature, such as "black box" data recorders that enable transparent evaluation, offers dynamic oversight that respects the technology's momentum. A multi-layered approach keeps responsible advancement in step with society, rather than thwarting progress or enabling unchecked change that invites backlash. The idea is to set boundaries through regulation that influence, but do not corrupt or co-opt, the marketplace. This method offers a pragmatic balance, allowing us to harness AI's potential while ensuring that the pace of technological change does not eclipse the essential values and norms of society.

Regulation: Friend, Foe, or Something Else Entirely?

Truth be told, there are reasons both to argue for regulation and to argue against it. Should we unleash the genie in the hope it grants our wishes? Or should we put the genie back in the bottle because granting wishes sometimes gets a little messy or unfair? I'm not going to rehash the main points on each side. If readers want to dig deeper, here are some articles that capture the key points of each:

· Pro Regulation by Devansh: https://open.substack.com/pub/artificialintelligencemadesimple/p/how-to-actually-regulate-ai-thoughts?r=1fz9pv&utm_campaign=post&utm_medium=web

· Anti-regulation by Elad Gil: https://blog.eladgil.com/p/ai-regulation?r=1fz9pv&utm_campaign=post&utm_medium=web

Most of the arguments around regulation are binary: to regulate or not to regulate. I think there is another way. It might be a middle road, or possibly a counterintuitive one, depending on your perspective. It is based in part on Stewart Brand's concept of pace layering. Simply put, different elements of complex systems move at different paces. As a concept, it allows us to leverage existing systems and processes (the lower layers) to build a bridge to the newer, faster-changing "future" (higher) layers. In AI regulation, this means moving away from static rules that accumulate complexity over time and at scale, toward an adaptive regulatory system that can keep up with fast-moving changes without altering the core fundamentals. But first, let us dig a little deeper into the concept of pace layering and its application to AI regulation.

What is Pace Layering?

Pace layering is a concept developed by Stewart Brand that explains how different layers of a society change at different speeds. In the context of AI regulation, this framework becomes a critical tool for understanding and managing the complexities of technological innovation and societal adaptation.

At the outermost layer we find technology, specifically AI, evolving rapidly, driven by incessant innovation and market competition. Beneath this lies the layer of commerce, where business practices and economic policies must adapt to technological changes, though not as swiftly as technology itself. Moving inward, we encounter the layers of infrastructure, governance, culture, and finally nature, each moving at a progressively slower pace. Governance, particularly relevant to our discussion, evolves much more slowly than technology, leading to a lag in regulatory responses to new AI developments.

Implications for AI Regulation

This disparity in the rate of change presents a unique challenge for AI regulation. Rapid advancements in AI can outpace the legislative process, leaving a gap where new AI applications operate in a regulatory vacuum. On the other hand, when regulation does catch up, it risks being outdated, addressing yesterday's issues rather than today's, much less tomorrow's.

Pace layering in AI regulation calls for a dynamic approach. It requires regulations that are adaptable enough to respond quickly to technological changes, yet stable enough to maintain societal and ethical standards. This approach acknowledges that while rules and guidelines must evolve in response to the outer layers of rapid technological change, they must also be rooted in the slower-changing layers of cultural values and social norms. How do we do this?

Rules: Essential Yet Flawed

The debate around regulating artificial intelligence has largely focused on establishing ethics guidelines and rules to govern how AI systems are developed and used. While establishing principles of accountability and fairness is important, relying solely on rules risks missing the broader dynamics of how AI technologies learn and evolve over time.

At their core, rules are abstractions and simplifications of reality. They function as a conditional bounded probability space, representing a limited set of circumstances. Rules excel at making predictions as long as the situation they’re applied to fits within their specified boundaries. But, like any model, rules have limitations. When reality ventures beyond these constraints, rules inevitably fail.
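To make that abstraction concrete, here is a minimal, purely hypothetical sketch of a rule as a predicate with an explicitly declared domain; the rule, its thresholds, and its bounds are invented for illustration. Inside its bounds it predicts well; outside them, its answer carries no guarantee.

```python
# Purely hypothetical example: a rule as a predicate with an explicitly
# bounded domain. The rule, thresholds, and domain are invented.
def speed_rule(speed_mph: float) -> bool:
    """Rule: flag speeds above 65 mph. Written for ordinary highway driving."""
    domain = (0.0, 120.0)  # the circumstances the rule was designed for
    if not (domain[0] <= speed_mph <= domain[1]):
        # Outside its bounded space the rule is undefined: any answer it
        # gave here would carry no guarantee.
        raise ValueError(f"{speed_mph} mph is outside the rule's domain {domain}")
    return speed_mph > 65.0

print(speed_rule(70.0))  # True: within bounds, the rule predicts well
# speed_rule(500.0)      # raises: reality has left the rule's boundaries
```

The point of the explicit domain is the point of the paragraph above: the rule is not wrong, it is bounded, and the failure mode appears when reality steps outside those bounds.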

For instance, consider the early days of the internet. Legislation, initially, was either nonexistent or ill-fitted for the digital realm. Only as the online world matured and its implications became clearer did regulation evolve to provide a more adequate framework.

Feedback Loops: The Heart of Learning

AI systems, much like humans, learn from feedback. Without rules, or "predictions/forecasts," AI systems cannot get the feedback (positive or negative) that helps them improve. This feedback mechanism is foundational to their learning processes, and it is here that the crux of effective AI regulation lies.

Thinking about AI in terms of learning cycles offers a paradigm shift. Instead of just focusing on the output, regulators should consider the entire process: from input to learning and back to output. This emphasizes not only the ethics of AI operation but also acknowledges the importance of feedback in ensuring that AI systems evolve in socially beneficial ways.
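As a rough illustration of that cycle, the sketch below runs the input-to-output-to-feedback loop the text describes. The toy one-parameter model and all names are invented; this is not a regulatory proposal, just the loop regulators should keep in view.

```python
# Toy illustration of the learning cycle described above:
# input -> output -> feedback -> update.
def learning_cycle(examples, steps=100, lr=0.01):
    weight = 0.0  # toy one-parameter model
    for _ in range(steps):
        for x, target in examples:
            output = weight * x       # 1. the model produces an output
            error = target - output   # 2. feedback compares output to outcome
            weight += lr * error * x  # 3. feedback drives the next update
    return weight

# Regulating only step 1's outputs ignores steps 2 and 3, where the system
# actually changes.
model = learning_cycle([(1.0, 2.0), (2.0, 4.0)])
print(round(model, 3))  # converges toward 2.0
```

A regulator who only inspects outputs sees step 1; the system's behavior is made in steps 2 and 3.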

From Laws to Statutes & Ordinances

One of the main challenges in AI regulation is the disparity between the speed of technological advancement and the typically slower pace of legislative processes. Laws, by nature, are broad, overarching principles, whereas statutes, ordinances, and case law address specific situations, offering more granularity.

To bridge this temporal gap, we could shift the focus from crafting new, overarching laws to developing more responsive statutes and ordinances based on existing laws. This approach would allow for quicker adaptations, addressing specific issues as they emerge without waiting for a complete legislative overhaul.

For example, rather than waiting for a comprehensive law on autonomous vehicles, individual cities or regions could implement ordinances that cater to their specific needs and conditions, while still being aligned with a broader national or international framework.

Rethinking AI Regulation: Moving Beyond Rules Alone

If rules alone are not enough, what are we missing from our current approach to regulating AI? If rules limit progress on both AI and regulation, where do we look?

First, it is important to acknowledge this limitation in the context of AI, where systems are designed to learn from data, adapt to new environments, and make predictions even amid uncertainty. The very strengths of machine learning — flexibility and generalization — mean models will inevitably encounter situations that fall outside the constraints used to develop regulatory rules. At that point, rules risk becoming irrelevant or even counterproductive if they inhibit a system's ability to respond appropriately.

Rather than focusing exclusively on rules and ethics guidelines, AI regulation could benefit from additionally considering the learning cycle and feedback loops that are intrinsic to how AI technologies progress. Systems are designed not just to follow rules, but also to receive feedback on their predictions and decisions to continuously improve. An effective regulatory framework should acknowledge and enable this iterative process.

By embracing the inherently probabilistic and feedback-driven nature of machine learning, regulation has the potential for a nimbler, outcomes-focused approach compared to rigid rule-setting alone. The goal would still be accountability and protection of human values. But recognizing both the limitations of simplified models and the learning dynamics of AI could yield a regulatory structure better suited to foster responsible innovation at scale.

Some government regulation is inevitable. Here we are just laying the groundwork to remove regulatory complexity and prevent regulatory capture, the true enemies of safety and innovation. Moreover, I would like to stress that the removal or elimination of risk should not be the goal of any regulatory action. Risk assessment and adjustment are important feedback and learning mechanisms.

The Role of Risk and the Human Psyche, or Why We Shouldn't Try to Eliminate Feedback

Humans are quite terrible at performing risk assessment. They also struggle with thinking in probabilities. Think of probability as a mathematical way of capturing uncertainty. Remember from my earlier paper on trust that trust is a way humans capture uncertainty. Humans try to deal with uncertainty by removing it or engineering it away. Nassim Taleb explores why in detail in his seminal work, Antifragile. He identifies the key elements, such as domain dependence, the function of time, and the heuristics and biases we use, that shape how we make decisions, assess risk, and deal with uncertainty. He shows that the removal of risk does not always result in increased safety; often, the exact opposite is true. Depriving systems of stressors, the keys to hormetic adaptation, can weaken them rather than strengthen them, making them less safe in the long term. Optimizing and engineering out "known risks" just leaves us more vulnerable to unforeseen or unknowable ones: less frequent shocks, but bigger and more destabilizing ones when they occur. Taleb makes two other key contributions that relate to the regulation of AI and feedback more broadly. One is the very human desire to intervene, to act, even when it is not necessary, which he terms generalized iatrogenics: when interventions or their side effects do more harm than good. The second is the idea of skin in the game: if someone captures the upside, they should have exposure to the potential downside as well. Terms like risk shifting and regulatory capture largely describe violations of this idea, where a person or group gains upside without downside, or captures the upside for themselves while distributing the downside and risk to everyone else.

In his article "The Enemy Gate is Down," Michael Woudenberg discusses the dangers of this interventionist reflex. We tend to face situations that "present new and dynamic problems that are difficult to solve with traditional perspectives or paradigms of analysis. Solutions often fall victim to common pitfalls where the expected answers are more technology, more complexity, and the drive to do something new or different." He wasn't specifically talking about AI or AI regulation, but he might as well have been. He goes on to offer some words of wisdom that those seeking to regulate AI would be wise to heed. "Problems are not always unique or new," he advises. "Always look for the commonality of your problem to others to find proven solutions. The solutions to problems are not always technology." In short, you can't tackle complex problems by heaping more complexity on top of them in the form of new technologies or rules. We can leverage existing solutions to similar problems and apply them in new ways: innovate around process and tactics, not technical solutions. Often, once we innovate around the process, we can layer in the technology for further impact.

Given these facts about human nature, regulatory capture, competition, and the speed of change driven by AI, how can we rethink regulation? How can we leverage existing solutions in new and different ways?

Dynamic and Adaptive Regulation

A key to this new approach is leveraging known problems and solutions, applying them in new ways to address new challenges: for example, how to safely regulate a vital, fast-changing field like AI without stifling innovation. To set the stage, let's take a journey. Present-day air travel is one of the safest modes of travel. Statistics from the US Department of Transportation show that between 2007 and 2016 there were 11 fatalities per trillion miles of commercial air travel, starkly different from the 7,864 fatalities per trillion miles of highway travel (roughly 700 times the rate). You can check the statistics here: fatalities and miles of travel per mode of transport. The incremental improvement of air travel is a marvel of technical innovation. Yet it wasn't always that way. The key has been a unique public-private partnership (NTSB + industry + pilots), aided by a key and unique piece of technology — the black box.

Is it time for an AI NTSB?

AI Safety and Investigation team, created by the author in Midjourney

While it would be great to take credit for coming up with the idea, it has been proposed numerous times by several different individuals and organizations. The most comprehensive proposal I have come across so far is the one put forward by the Center for Security and Emerging Technology (CSET). CSET laid out a blueprint in its 2021 policy brief "AI Accidents: An Emerging Threat — What Could Happen and What to Do." A key element of the policy brief was the Artificial Intelligence Incident Database (AIID), a project housed at the Partnership on AI. I think they largely got this right, so if you would like to delve deeper, click on the links. What they left out was the key piece that makes the NTSB function — the black box.

What is the black box?

The black box is a critical component for flight safety and investigation in the airline industry. In aviation, a black box refers to two main components: the Flight Data Recorder (FDR) and the Cockpit Voice Recorder (CVR). Here’s a brief overview of their roles:

  1. The FDR records numerous flight parameters, such as altitude, airspeed, and engine performance. It also records flight control positions and many others. This data is crucial for understanding the aircraft’s performance and behavior during a flight.
  2. The CVR records conversations in the cockpit, including communication between pilots and with air traffic control. It also captures any ambient noises in the cockpit. This can provide context about the pilots’ decision-making processes. It can also reveal any issues they may have been discussing.

Key Functions of Black Boxes:

  • Accident Investigation: In the event of an aviation incident or accident, black boxes are invaluable for investigators. They help understand what happened and why. By analyzing the data, investigators can reconstruct the events leading up to the accident.
  • Safety Improvements: Black box data is crucial for identifying safety issues and improving aircraft design and aviation procedures. It has played a significant role in making air travel one of the safest modes of transportation.
  • Regulatory Compliance: Black boxes are mandatory in commercial aircraft. They are regulated based on their specifications and the data they must record.

As this example shows, good technology is not a solution by itself. You need to pair it with good processes, tactics, and strategy to drive the outcomes you want. If the process, tactics, and strategy are bad, even the best technology will yield bad outcomes. I like to think of technology as leverage in action or application. How can we apply this to AI?

AI Black Box

Applying the concept of an airplane black box to AI, with an AI Model Data Recorder (MDR) and an AI Interface Recorder (UXR), is an innovative and interdisciplinary approach. It addresses key challenges in AI related to accountability, transparency, and ethics, especially in critical applications. Let's explore how these components could function:

AI Model Data Recorder (MDR)

  1. Function: The MDR would record the data inputs, internal parameter changes, decision-making paths, and outputs of an AI model. This could include the data being processed, changes in the model’s weights during learning, and the model’s responses to different inputs.
  2. Utility: In case of a failure or unexpected outcome, the MDR data could be analyzed to understand what the model was ‘thinking’ and why it made certain decisions. This would be crucial for diagnosing issues related to model biases, data quality, or algorithmic errors.
  3. Parameters Tracked: It might track data like algorithmic changes, training data samples used at different stages, model parameter adjustments, computational efficiency, and system environment factors (see the sketch after this list).
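What might such a recorder look like in practice? Here is a minimal, hypothetical sketch; the class name, methods, and log format are illustrative assumptions, not an existing standard or API.

```python
import json
import time

# Hypothetical sketch of an MDR as an append-only event log wrapped around a
# model. All names and fields are invented for illustration.
class ModelDataRecorder:
    def __init__(self, path="mdr.log"):
        self.path = path

    def _log(self, kind, payload):
        record = {"ts": time.time(), "kind": kind, "payload": payload}
        with open(self.path, "a") as f:
            f.write(json.dumps(record) + "\n")  # append-only, replayable trail

    def record_inference(self, model_version, inputs, output):
        # What the model saw, which weights were live, and what it decided.
        self._log("inference", {"model_version": model_version,
                                "inputs": inputs, "output": output})

    def record_update(self, model_version, change_summary):
        # How the model changed: fine-tune batches, parameter adjustments, etc.
        self._log("update", {"model_version": model_version,
                             "change": change_summary})

mdr = ModelDataRecorder()
mdr.record_inference("v1.3", {"loan_amount": 25000}, {"approved": False})
mdr.record_update("v1.4", "fine-tuned on Q3 repayment data")
```

The essential design choice is the append-only log: like a flight data recorder, it is written continuously and read only after the fact, when investigators need to reconstruct what the model saw and did.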

AI Interface Recorder (UXR)

  1. Function: The UXR would record interactions between the AI system and its users, especially in high-stakes scenarios like medical diagnosis, financial decision-making, legal judgments, or military applications.
  2. Utility: This would help in understanding how users are interacting with the AI system, the decisions being made based on AI input, and the context in which these decisions are made.
  3. Data Captured: It might include user queries, AI responses, user reactions or adjustments to those responses, and any overriding decisions made by human operators (see the sketch below).
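A UXR could follow the same append-only pattern on the human side of the exchange. Again, this is a hypothetical sketch with invented names, capturing the query, the AI's recommendation, and any human override.

```python
import json
import time

# Equally hypothetical sketch of the UXR side: logging the human-AI exchange,
# including the case where a human operator overrides the AI.
def record_interaction(path, query, ai_response, operator_action):
    event = {
        "ts": time.time(),
        "query": query,                      # what the user asked
        "ai_response": ai_response,          # what the system recommended
        "operator_action": operator_action,  # accept / adjust / override
    }
    with open(path, "a") as f:
        f.write(json.dumps(event) + "\n")

# E.g., a clinician overriding an AI triage suggestion leaves a trace that
# an investigator could later replay.
record_interaction("uxr.log",
                   query="chest pain, age 54",
                   ai_response={"triage": "routine"},
                   operator_action={"override": "urgent"})
```

Much like a cockpit voice recorder, the value lies in the context: not just what the system said, but what the human did with it.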

Implementation Considerations

  1. Data Privacy and Security: Implementing such recorders must carefully consider data privacy and security, especially when recording sensitive user interactions or personal data.
  2. Regulatory and Ethical Frameworks: Clear guidelines and standards would be needed to govern when and how these recorders are used, accessed, and analyzed.
  3. Scope and Scalability: Deciding which applications are ‘critical’ and require such recording, and ensuring the system is scalable and doesn’t overwhelm storage and processing capabilities.
  4. Access and Analysis: Determining who has access to this data, under what circumstances, and how it is to be analyzed and used for improvements or accountability.

Impact on AI Development

  • Enhanced Transparency and Accountability: Such a system would greatly improve the transparency and accountability of AI systems in critical applications.
  • Informed Improvements: The insights gained from these recorders could be used to refine AI models, making them more reliable, fair, and aligned with ethical standards.
  • Public Trust and Acceptance: Implementing such measures could enhance public trust in AI, particularly in sensitive areas.

What else does it mean?

The concept of an AI ‘black box’ system, encompassing an AI Model Data Recorder (MDR) and an AI Interface Recorder (UXR), offers a dynamic and adaptable solution that aligns well with the evolving nature of AI technology and its applications. Here’s why it’s particularly effective:

Dynamic Adaptability

  • Flexibility to Evolve: As AI technologies and applications evolve, the parameters and types of interactions recorded by the MDR and UXR can be adjusted accordingly. This flexibility ensures that the system remains relevant and effective over time.
  • Responsive to Emerging Trends: This approach allows for quick adaptation to new developments in AI, whether they are technological advancements, emerging use cases, or societal changes.

Learning and Growth

  • Continuous Improvement: By providing detailed insights into AI performance and user interactions, these recorders can inform continuous improvements in AI systems, enhancing their effectiveness, fairness, and safety.
  • Informed Policy and Ethical Guidelines: The data gathered can guide policymakers and ethicists in developing more informed and relevant regulations and guidelines that evolve with the technology.

Encouraging Responsible AI Development

  • Promoting Transparency and Accountability: Such a system can play a crucial role in promoting transparency and accountability in AI, particularly in critical domains.
  • Building Public Trust: By demonstrating a commitment to monitoring and improving AI systems, this approach can help build public trust in AI technologies.

Practical Implementation

  • Non-Intrusive Oversight: This method provides a way to oversee and understand AI decision-making processes without necessarily hindering the AI’s development and capabilities.
  • Balancing Innovation and Safety: It strikes a balance between encouraging innovative AI development and ensuring the safety and reliability of AI applications.

Conclusion

In essence, understanding and applying the concept of pace layering in AI regulation provides a balanced, nuanced approach. It allows for the creation of a regulatory environment that is flexible and responsive to rapid AI advancements while being deeply anchored in the slower changing, but fundamentally important, societal and ethical values. This strategy ensures that as AI continues its rapid development, it does so within a framework that safeguards human interests and societal well-being.

In conclusion, the journey towards effective AI regulation is not just about establishing rules; it is about constructing a dynamic, adaptive system grounded in the strategic framework of pace layering. This concept provides a roadmap for aligning the rapid evolution of AI with the more deliberate pace of societal and cultural change. In this model, the process and feedback mechanisms are as crucial as the technological solutions themselves, ensuring that regulation evolves responsively with AI advancements.

Establishing guiding principles focused on iterative accountability and ownership rather than absolute laws, supplemented by innovative public-private partnerships and technical tools like AI recorders, cultivates balanced progress. Such an approach respects technology's momentum while safeguarding society, synchronizing what changes with what remains unchanged.

Principled progress rests not on tools alone but on the processes that sustain continual improvement. Open evaluation, transparency, and adaptation nurture responsible learning over unilateral control. Just as feedback loops drive AI development, so too must reflection infuse governance and oversight. In Part II, we will examine the role of friction in dynamic and adaptive AI regulation, along with several other potential solutions.


LaSalle Browne

Quantum thinker, entrepreneur, explorer, ever curious, always learning, traveler, lover of life. Opinions are my own and may change w/o notice