Is Built-in AI Regulation the Wave of the Future?

Researchers make the case for embedding compliance into the systems’ design

MIT Initiative on the Digital Economy

Aug 12, 2024

By Irving Wladawsky-Berger

We’ve become accustomed to a new era of smart, connected products with the rise of the Internet of Things (IoT), Big Data, and Artificial Intelligence. As the world’s digital and physical infrastructures converge, digital technologies are designed right into complex products — e.g., jet engines, power generators, medical equipment, and energy pipelines. Massive amounts of usage data can now be gathered over the internet, then stored and analyzed by sophisticated applications to help monitor the product and anticipate potential problems.

This is particularly important in industries where safety by design can have a significant economic impact and, in some cases, literally save lives. Safety-by-design features are now in widespread use in all kinds of mechanical, electronic, and other complex physical systems.

“What if we could similarly embed regulatory objectives directly into the technical design of AI systems?” asked Robert Mahari in his keynote on Regulation by Design at the recent 2024 MIT IDE Annual Conference. Mahari received a JD from Harvard Law School in 2022 and is now a PhD candidate in the MIT Media Lab research group led by professor Alex ‘Sandy’ Pentland.

“Compliance and regulation by design represent a risk-management paradigm that’s uniquely suited for AI,” Mahari said. “Intelligent technology design can proactively prevent failures and risks.”

It’s an intriguing concept that could address several thorny challenges.

AI systems are the product of a very complex supply chain: the large amounts of data needed to train AI models come from many different sources, and the models’ expected behavior is difficult to test, predict, and explain. As a result, complying with regulatory and ethical objectives is very hard for users of AI-based applications, who have little control over, or understanding of, how the AI system works.

Instead, as is the case with advanced engineering systems, Mahari proposes ways for AI systems to monitor how they are used, identify high-risk sessions, alert their developers and overseers, and possibly enforce compliance, along the lines of the sketch below.
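As a loose illustration of that kind of runtime monitoring, the sketch below scores each session for risk and raises an alert above a threshold. Everything here is an assumption made for illustration: the risk signals, the scoring rule, and the threshold are toy stand-ins, not anything Mahari proposed.

```python
# Illustrative only: a toy risk-monitoring hook for an AI application.
# The keyword list, scoring rule, and threshold are arbitrary assumptions.

RISK_KEYWORDS = {"medical diagnosis", "legal advice", "self-harm"}
ALERT_THRESHOLD = 0.8

def session_risk(prompt: str) -> float:
    """Toy risk score: fraction of high-risk signals present in the prompt."""
    hits = sum(kw in prompt.lower() for kw in RISK_KEYWORDS)
    return min(1.0, hits / 2)

def handle(prompt: str) -> None:
    risk = session_risk(prompt)
    if risk >= ALERT_THRESHOLD:
        # A real system would notify developers/overseers, log the
        # session for audit, or refuse to proceed.
        print(f"ALERT: high-risk session (score={risk:.2f})")

handle("Please give me legal advice and a medical diagnosis.")
```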

Three Examples

Mahari cited three concrete applications of Regulation by Design:

  1. Data Quality. It’s important to make sure that the data used to train an AI model is sufficiently representative and relevant for its intended purpose, and as free of errors as possible. In practice, training data is highly fragmented because it comes from many different sources, making it hard to anticipate the model’s actual behavior. How can we measure the overall quality of the training data? How can we establish benchmarks that eliminate or reduce the risk of biased output?
  2. Copyright infringement. In principle, a copyright violation can be established by showing that the allegedly infringing work is substantially similar to the original work on which it’s based. But substantial similarity is not a purely objective matter. Can an AI system take two works and quickly assess whether they’re substantially similar in the way a human would make such a subjective assessment? (A sketch of this idea follows the list.)
  3. Corporate policies. To increase employee productivity and the quality of their work, companies are increasingly deploying AI agents to assist employees in making job decisions. But, in the end, the company is responsible for the quality of the assistance generated by its AI agents. Can we find ways to make sure that the assistance provided by a firm’s AI tools is in accordance with its policies?
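As a loose illustration of the second example, the snippet below uses an off-the-shelf sentence-embedding model to score how similar two passages are. This is purely a sketch under stated assumptions: embedding cosine similarity is at best a crude proxy for the legal test of substantial similarity, and the model and threshold are arbitrary choices, not anything from Mahari’s work.

```python
# pip install sentence-transformers
from sentence_transformers import SentenceTransformer, util

# Illustrative assumption: a general-purpose embedding model stands in
# for whatever similarity assessor a real compliance system would use.
model = SentenceTransformer("all-MiniLM-L6-v2")

def similarity_score(work_a: str, work_b: str) -> float:
    """Cosine similarity between embeddings of two texts, in [-1, 1]."""
    emb = model.encode([work_a, work_b])
    return float(util.cos_sim(emb[0], emb[1]))

# Arbitrary threshold for flagging a pair for human legal review.
FLAG_THRESHOLD = 0.85

score = similarity_score(
    "The quick brown fox jumps over the lazy dog.",
    "A fast brown fox leaps over a sleepy dog.",
)
if score > FLAG_THRESHOLD:
    print(f"Flag for human review (similarity={score:.2f})")
```

Here the system only triages: anything above the threshold is routed to a human reviewer, rather than the model making the subjective legal judgment itself.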

Mahari’s keynote is based on a recent article, “Regulation by Design: A New Paradigm for Regulating AI systems,” co-authored with professor Pentland that appeared as a chapter in a book published in February of 2024, Digital Single Market and Artificial Intelligence: AI Act and Intellectual Property in the Digital Transition. [Read the IDE Research Brief summary of the article here.]

In the article’s introduction, Mahari and Pentland nicely explained Regulation by Design by comparing it to ex-ante and ex-post regulations.

“AI regulation, and for that matter most regulation, can be broadly categorized into two types: ex-ante and ex-post,” they wrote. “The EU’s recently passed AI Act exemplifies the ex-ante approach which hinges on assessing the risks that an AI system poses and identifying methods to manage these risks before the system reaches the market. By contrast, liability regimes — such as the one in the United States — are examples of ex-post regulation where the entry of an AI system onto the market is largely unregulated but providers of the system face liability if harms arise.”

Regulation by Design seeks to embed regulatory objectives directly into the technical design specifications.

“An easy way to understand how Regulation by Design differs from the other approaches is through the example of self-driving cars. Under an ex-ante regime, the risks associated with a self-driving car would be assessed before it is allowed to be sold as a product. This might involve testing the car in a controlled environment and comparing it to human drivers under various circumstances. Under a pure ex-post regime, the self-driving car could be released without requiring risk assessment and the car manufacturer would be held liable for any accidents that occur. Of course, this risk of liability may induce the manufacturer to conduct an independent risk assessment to minimize the risk of liability.”

“Under a Regulation by Design approach, the car manufacturer would be required to integrate regulatory objectives (e.g., minimizing accidents, avoiding congestion, and reducing emissions) into the design of the self-driving system.

“The goal of this approach is to leverage the ability of AI systems to optimize their performance within a set of constraints and to make important regulatory objectives part of the design process at an early stage.”
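To make “optimizing within a set of constraints” concrete, here is a minimal sketch in which hypothetical regulatory penalties are folded into a model’s training objective as weighted terms. The loss functions, weights, and toy gradient step are all illustrative assumptions, not anything specified in the article.

```python
import numpy as np

# Toy policy parameter (e.g., how aggressively the car drives).
theta = np.array([1.0])

def task_loss(theta):        # driving performance (illustrative)
    return (theta[0] - 2.0) ** 2

def accident_risk(theta):    # regulatory objective 1 (illustrative)
    return 0.5 * theta[0] ** 2

def emissions(theta):        # regulatory objective 2 (illustrative)
    return 0.1 * theta[0] ** 2

W_RISK, W_EMISSIONS, LR = 1.0, 0.5, 0.1

def regulated_loss(theta):
    """Composite objective: task performance plus weighted regulatory penalties."""
    return task_loss(theta) + W_RISK * accident_risk(theta) + W_EMISSIONS * emissions(theta)

# Simple finite-difference gradient descent on the composite objective.
for _ in range(100):
    eps = 1e-5
    grad = (regulated_loss(theta + eps) - regulated_loss(theta - eps)) / (2 * eps)
    theta -= LR * grad

print(f"theta balancing performance against regulatory penalties: {theta[0]:.3f}")
```

The design choice the quote points to is visible in the code: the regulatory objectives are part of the objective function from the start, rather than being checked after the system is built.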


Privacy is an early example of integrating regulatory objectives into the design of technical systems. In 2016, the EU adopted the General Data Protection Regulation (GDPR), which requires the implementation of technical and organizational measures to ensure that “only personal data which are necessary for each specific purpose of the processing are processed.”

“Privacy lends itself to Regulation by Design because it is measurable and auditable,” the authors note. There are a number of quantifiable privacy metrics, such as differential privacy, that can be integrated into IT systems. In addition, privacy is highly valued by consumers, so companies are incentivized to conduct and publish the results of privacy audits.
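As one concrete example of such a metric, the sketch below applies the standard Laplace mechanism to release a statistic with epsilon-differential privacy. The dataset, query, and epsilon value are illustrative assumptions; the Laplace mechanism itself is a well-established technique, not something mandated by GDPR or proposed in the article.

```python
import numpy as np

def laplace_mechanism(true_value, sensitivity, epsilon):
    """Release a noisy statistic satisfying epsilon-differential privacy."""
    noise = np.random.laplace(loc=0.0, scale=sensitivity / epsilon)
    return true_value + noise

# Toy dataset of ages, assumed bounded in [0, 100].
ages = np.array([34, 29, 41, 52, 38])

# Changing one record shifts the mean by at most 100 / n,
# so that is the sensitivity of the mean query.
private_mean = laplace_mechanism(ages.mean(), sensitivity=100 / len(ages), epsilon=1.0)
print(f"Noisy mean age: {private_mean:.1f}")
```

The appeal for Regulation by Design is that epsilon is a single auditable number: a regulator can set a maximum privacy loss, and a developer can demonstrate compliance with it.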

Preconditions Needed

The authors add that three key preconditions must be met for Regulation by Design to be effective:

  1. Consensus on specific objectives. To be effective, a technical system requires a precise understanding of the key objectives to be achieved. “Without consensus on the specific priorities regulators wish to achieve, it is not possible to define quantitative performance goals which AI systems can be optimized for.”
  2. Metrics to measure success. Regulation by Design treats metrics as the objectives to be achieved. But many regulatory objectives may not be concrete enough to be directly measurable. For example, GDPR does not concretely specify how companies should measure Privacy by Design. In such cases, the regulators can delegate the definition of the metrics to private institutions, and encourage the institutions to update their approaches as the technology evolves. “[M]etrics should be regularly re-evaluated to ensure that they continue to incentivize progress in the correct direction.”
  3. Auditing mechanisms. “Regulation by Design relies on technology developers to embed regulatory objectives into their products by optimizing them to meet certain metrics or criteria. Auditing mechanisms are necessary to ensure that technologies are optimized in this way and to surface instances when technologies fail to adequately meet the defined metrics.” Auditing mechanisms include: external auditors, like government bodies or private third parties; open-source technologies that allow anyone to gain insight into the design; and asking companies to responsibly audit themselves. (A toy self-audit check is sketched after this list.)
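Tying the second and third preconditions together, here is a minimal sketch of a self-audit check: measured metrics are compared against regulator-defined thresholds, and failures are surfaced. The metric names and thresholds are hypothetical stand-ins, not metrics defined by any actual regulation.

```python
# Hypothetical regulator-defined limits; names and values are illustrative.
REGULATORY_THRESHOLDS = {
    "demographic_parity_gap": 0.05,  # max allowed outcome gap between groups
    "privacy_epsilon": 1.0,          # max allowed differential-privacy loss
}

def audit(measured_metrics: dict) -> list:
    """Return a list of violations; an empty list means the audit passed."""
    return [
        f"{name}: measured {measured_metrics[name]:.3f} exceeds limit {limit}"
        for name, limit in REGULATORY_THRESHOLDS.items()
        if measured_metrics.get(name, float("inf")) > limit
    ]

# Example run with one metric out of bounds.
violations = audit({"demographic_parity_gap": 0.08, "privacy_epsilon": 0.9})
for v in violations:
    print("AUDIT FAILURE:", v)
```

The same check could be run by an external auditor or exposed through open-source code, matching the three auditing routes the authors list.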

“Embedding important priorities in technical designs has long been a gold standard for engineers,” conclude Mahari and Pentland.

“The adaptive nature of AI systems gives regulators an opportunity to take a page from the engineers’ book and to embed regulatory objectives into technical designs.

“Rather than regulating AI systems ex-post through liability, or laying out ex-ante requirements, we urge regulators to seize this opportunity to work with technologists to design AI systems that internalize regulatory priorities. … Regulation by Design places important goals front and center in the technical design process and prevents harms before they occur.”

This blog first appeared on July 25 here.

