Decoding AI Regulations: Striking the Right Balance for Preventing and Remediating Harms

Fernando Mourao
Published in SEEK blog
Sep 25, 2023 · 8 min read

In this blog post, Fernando Mourao, the Responsible AI Leader in Artificial Intelligence & Platform Services (AIPS) at SEEK, Melbourne, examines the obstacles that currently keep IT professionals out of the AI regulation discourse and offers a technologically oriented, objective viewpoint on this intricate subject. The text itself is an example of AI augmentation: ChatGPT (GPT-3.5) was used to enhance the author's original draft, resulting in a more engaging narrative.

Image credit: mikemacmarketing, CC BY 2.0, via Wikimedia Commons

In recent months, I have been deeply involved in the global debate on AI regulation. Even as a Responsible AI specialist with a background in Computer Science, the debate has occasionally appeared obscure, intricate, and contentious to me. My impression thus far is that this debate is falling short in one of its key objectives: inclusion. The extensive use of legal jargon, a strong business orientation, and the abundance of ethical and philosophical questions have, in some way, marginalized me from the conversation. Consequently, I’m concerned that individuals with similar technological backgrounds, such as Data Scientists, ML Engineers, and Data Analysts, who are eminently qualified to offer insights on the feasibility and effectiveness of desirable approaches, may not be sufficiently engaged in this discourse.

To achieve inclusion, it is imperative to make the debate accessible and appealing to diverse audiences, each of whom brings valuable and complementary perspectives to the table.

Hence, this post aims to provide a more technologically oriented and objective perspective on AI regulation, with the goal of involving more IT technical professionals in the discussion: those individuals actively engaged in developing and implementing AI solutions within our society. I intend to translate some of the key concepts I've identified into terms that resonate with this audience, giving them a genuine opportunity to share their insights and contribute. I don't intend to prescribe a specific path to follow or criticize past decisions. Instead, I offer a personal and simplified viewpoint on a vast and highly intricate subject.

Why should we all engage in the debate around AI legislation?

Perhaps the quote that best answers this question is from Carole Piovesan:

“We are debating the impacts of technology on our values to ensure our creations augment the kinds of society we want to live in, not subvert them.”

I truly embrace the idea of helping people take a humanistic view of technology. The physical world has proven to be deeply flawed and unwelcoming for marginalized groups. AI offers us a distinctive opportunity to construct a digital realm capable of blending with and reshaping the physical world, potentially ushering in profound transformations. These shifts may either exacerbate existing issues or serve as a means to address some of them. What this emerging digital-physical world comes to look like will largely hinge on the leadership, values, and beliefs we embrace at this pivotal moment.

What exactly are we debating?

We are currently engaged in a debate regarding the minimal and essential legislative prerequisites to ensure that AI-based technology contributes to the kind of society we aspire to live in. From a legislative standpoint, I believe that there are four fundamental requirements for any organization providing AI services to society:

  1. Legal Compliance: You must adhere to all relevant laws and regulations (i.e., general and sector-specific laws).
  2. Harm Prevention: You should make every effort to mitigate known risks and prevent harm.
  3. Effective Remediation: In the event that harm occurs, you must respond swiftly and effectively to remediate it.
  4. Accountability: You are responsible for the adverse consequences of your business operations and decision-making processes.

When framed in these terms, these requirements are both straightforward and reasonable. They are, in fact, foundational principles applicable to businesses across various industries. In this context, the primary objective of legislative tools is to ensure the enforcement of these fundamental prerequisites. I also call out a desirable characteristic of legislative answers to this problem that is often forgotten:

AI is a great tool that could assist humanity in improving ethical standards. Hence, legislation should also recognise and protect the potential of using AI to uplift ethical baselines and solve existing harms in our society.

While one may tweak definitions or add extra requirements, there’s nearly a consensus on the understanding of why and what we’re debating. However, significant disagreements arise when we delve into the question of HOW to transform these abstract requirements into practical guidelines and specifications for organizations. I believe professionals with a technological background can make the most significant contributions at this juncture in the debate.

How can we translate essential legislative prerequisites into tangible practices and specifications?

Numerous issues come to light when we dissect the aforementioned prerequisites and delve deeper into certain conditions. What exactly constitutes “every effort to mitigate risks”? What are the “known risks”? Who has the authority to determine what is deemed acceptable? For a legislative framework to become effective and beneficial for society, we must define such matters in a way that minimizes ambiguity and uncertainty in their interpretation.

In sectors like pharmaceuticals, healthcare, and insurance, among others, a wealth of knowledge has been amassed over decades to address these very questions and meet these essential legislative requirements. This accumulation of expertise has ultimately resulted in positive societal impact and progress within their respective domains. As a result, risk management has emerged as the gold-standard process for navigating this complexity effectively. For this reason, almost everyone suggests making risk management the starting point for fulfilling the same legislative requirements for AI.

Nonetheless, effectively implementing risk management in the realm of AI is far from straightforward. The challenge becomes evident in the absence of consensus among emerging regulatory frameworks worldwide. The inherent intricacies of AI, combined with its relatively young status as a field, its widespread utilization across nearly all sectors of the economy, and the historically limited application of risk management to AI, create significant hurdles when attempting to transfer knowledge directly from other domains into AI.

What makes risk management so different for AI?

My concise and simplified answer is: control of impacts. Risks and harms are inherent in all human activities and naturally accompany the use of technology to shape our environment. For instance, in the construction sector, we can readily identify the notably high risks linked to the use of tractors and heavy machinery.

So, why should legislators approach AI risks differently compared to the risks associated with tractors? The fundamental premise in our interaction with technology, up to the present day, has been that humans maintain substantial control over these tools. Adverse impacts arise either unintentionally or, occasionally, from conscious and irresponsible actions by human operators. Except for defects and faulty operation, humans possess the capability to manage the impact of technologies like tractors, even unconsciously assessing reasonable limits of their application.

However, AI possesses unique characteristics that challenge human control over its impact. First, AI operates autonomously, allowing it to extrapolate behaviours beyond its intended design. Second, AI lacks consciousness: it has no internal mechanism that gauges the appropriateness of its behaviours against values, ethical norms, and common sense. Consequently, the level of control we currently exercise over AI is significantly more limited than what we typically have in other domains.

While these characteristics have implications for all four fundamental legislative requirements, I’ll narrow the focus of this post to requirements #2 and #3. These are areas where technical experts can make substantial contributions, as requirements #1 and #4 extend beyond the typical boundaries of IT knowledge.

What makes prevention and remediation of harm challenging for AI?

The primary objective is always to prevent adverse events from occurring. Ultimately, what truly matters is the impact we have on individuals, organizations, communities, and society. However, if we concentrate solely on monitoring impacts and acting only after they occur, as stipulated in requirement #3, we pay the price of allowing negative outcomes to unfold. This is why the risk management framework became a preferred approach for AI regulation worldwide. Risk is the chance of harm occurring under defined circumstances, whereas harm is an adverse outcome. By focusing on establishing reliable risk management, we aim to build effective prevention.
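To make this distinction concrete for a technical audience, here is a minimal sketch of how a risk register entry might be represented in code. It assumes the common (but by no means universal) operationalisation of risk as likelihood multiplied by severity; the class, field names, scales, and example entries are purely illustrative and are not drawn from any specific framework or from SEEK's practice.

```python
from dataclasses import dataclass

@dataclass
class Risk:
    harm: str            # the adverse outcome that could occur
    circumstances: str   # the defined circumstances under which it could occur
    likelihood: float    # estimated chance of occurrence, 0.0 to 1.0
    severity: int        # estimated impact if it does occur, 1 (minor) to 5 (critical)

    def score(self) -> float:
        """Combine likelihood and severity into a single prioritisation score."""
        return self.likelihood * self.severity

# Prevention (requirement #2) works on entries like these before any harm exists;
# remediation (requirement #3) only begins once a listed harm has materialised.
register = [
    Risk("Qualified candidates ranked unfairly low", "resume-matching model", 0.10, 4),
    Risk("Irrelevant job recommendations at scale", "recommendation service", 0.30, 2),
]

for risk in sorted(register, key=lambda r: r.score(), reverse=True):
    print(f"{risk.score():.2f}  {risk.harm} ({risk.circumstances})")
```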

While both preventive and corrective strategies must coexist, prevention should take precedence. Achieving satisfactory levels of understanding and predictability that enable effective prevention and remediation for AI proves to be a considerable challenge in practice.

Figure 1 illustrates the key variables I've identified as the foundation for remediation and prevention requirements in the real world. I will use technical terminology to outline this simplified worldview. In general, controlling the impact of AI requires a delicate blend of qualitative and quantitative indicators across four primary dimensions: Applied Knowledge, Visibility of Decisions, Visibility of Impacts, and Behaviour Prediction.

Figure 1. Simplified worldview of requirements for ensuring effective prevention and remediation of AI-related harm.

Applied Knowledge refers to a substantial repository of reusable, factual, and specific insights into actual impacts observed over time. The business strategy the industry adopted years ago, inspired by Mark Zuckerberg's famous motto "Move fast and break things," did not prioritize risk management from a socio-technological perspective. Consequently, our current pool of applied knowledge regarding AI risks remains limited.

Visibility of Decisions pertains to the proper design, management, and documentation of intentions and decisions during the development of AI products. Viewed through this recent socio-technological lens, this aspect is still evolving. Notably, we have witnessed the emergence of various standards and frameworks related to AI decision-making, including ISO/IEC 23894, ISO/IEC 23053, and ISO/IEC FDIS 42001.

Visibility of Impacts encompasses efforts to standardize metrics, methods, and methodologies for defining reliable quantitative indicators to assess the impact and behaviour of AI at various stages of its lifecycle. The widespread use of AI across multiple domains, scenarios, and sectors makes it challenging to establish assessment strategies that apply to all potential AI deployments.
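As one concrete example of the kind of quantitative indicator this dimension calls for, the sketch below computes the gap in positive-outcome rates between groups. The function name, the binary outcome, and the toy data are my own illustrative assumptions; a real impact assessment would combine several such indicators, chosen for the specific deployment context.

```python
from collections import defaultdict

def outcome_rate_gap(records):
    """records: iterable of (group, outcome) pairs, where outcome is 0 or 1.
    Returns the gap between the highest and lowest positive-outcome rates,
    plus the per-group rates themselves."""
    totals, positives = defaultdict(int), defaultdict(int)
    for group, outcome in records:
        totals[group] += 1
        positives[group] += outcome
    rates = {group: positives[group] / totals[group] for group in totals}
    return max(rates.values()) - min(rates.values()), rates

# Toy data: did candidates from groups A and B receive a positive outcome (1) or not (0)?
gap, rates = outcome_rate_gap([("A", 1), ("A", 1), ("A", 0), ("B", 1), ("B", 0), ("B", 0)])
print(rates)  # {'A': 0.67, 'B': 0.33} (approximately)
print(gap)    # ~0.33: a candidate indicator that impacts differ across groups
```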

Finally, Behaviour Prediction refers to endeavours to establish ethical and reliable strategies for testing AI with humans and simulating unintended or undesigned behaviours, especially when adverse outcomes are anticipated.
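The sketch below illustrates one very simple form of behaviour prediction: probing a model with small, semantically irrelevant perturbations of its inputs and flagging cases where the decision flips. The helper names and the toy model are assumptions of mine, not part of any standard; a real strategy would use far richer simulations and human testing, but the shape of the exercise is similar.

```python
def perturbations(text: str):
    """Generate trivial variants that should not change a reasonable decision."""
    yield text.lower()
    yield text.upper()
    yield text + "  "             # trailing whitespace
    yield text.replace(",", "")   # punctuation removed

def flag_unstable_cases(model, inputs, threshold=0.5):
    """Return inputs whose accept/reject decision flips under any perturbation."""
    unstable = []
    for text in inputs:
        base_decision = model(text) >= threshold
        if any((model(v) >= threshold) != base_decision for v in perturbations(text)):
            unstable.append(text)
    return unstable

# Hypothetical stand-in scorer that is (deliberately) sensitive to letter case;
# in practice this would wrap an actual model or scoring endpoint.
def toy_model(text: str) -> float:
    return 0.9 if text.islower() else 0.2

print(flag_unstable_cases(toy_model, ["data analyst", "Registered Nurse"]))
# ['data analyst', 'Registered Nurse']: both decisions flip when the letter case changes.
```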

How to bridge the gap for technical IT experts in the AI regulation debate?

The conceptual framework outlined above can be a valuable resource for IT professionals seeking to develop well-informed perspectives on the execution of risk management in AI. The risk of implementing unfeasible or ineffective proposals arises when decisions regarding these dimensions are not aligned with a technical understanding of AI. For instance, an overemphasis on qualitative signals, which serve as proxies for actual impacts, or on quantitative indicators that introduce measurement uncertainty, can compromise the effectiveness of any legislative tool.

This conceptual framework also enables us to compare the main AI risk frameworks discussed worldwide. The EU AI Act aims to extensively minimize AI-related harms by prioritizing prevention. Consequently, it mandates specific requirements to ensure high visibility of decisions and impacts.

In contrast, the UK approach initially focuses on rapid remediation. It gradually enhances organizations’ capacity for prevention by standardizing applied knowledge and recommending monitoring standards, all while assisting organizations in improving the visibility of their decisions.

Meanwhile, the US AI Risk Management Framework (from NIST) seeks to strike a balance between prevention and remediation by promoting standardized recommendations to enhance the visibility of decisions. Presently, the US framework suggests processes, practices, and procedures to guide the design, development, and project management of AI in order to improve risk management and harm prevention.

So, what represents the most effective approach for striking the right balance between prevention and remediation for AI? From a technical standpoint, what should legislators prioritize and consider? I look forward to hearing your thoughts!
