Decoding Canada’s Directive on Automated Decision-Making

Henry Fraser
Automated Decision-Making and Society
9 min read · May 23, 2024

A blueprint for AI ‘guardrails’?

Jacqueline McIlroy, Sara Luck and Henry Fraser

As Australia considers the development of ‘mandatory guardrails’ for high-risk Artificial Intelligence (AI) systems, there are interesting lessons to learn from Canada’s regulation of government use of automated decision-making (ADM). Canada’s Directive on Automated Decision-Making (Directive) dictates how Canadian government institutions should go about developing and deploying ADM. The Directive focuses on processes that encourage fairness, accountability and transparency in government decision-making, rather than prohibiting particular use cases or outcomes.

It’s a good time to assess potential approaches to AI regulation in Australia. Last year’s Royal Commission into the Robodebt scheme revealed just how harmful government ADM can be. Government agencies desperately need guidance and clarity to avoid future tragedies. In January this year, the Australian government committed to considering and consulting on legislation to introduce ‘mandatory guardrails’ for high-risk AI systems. It appointed an Artificial Intelligence Expert Group in February 2024 to advise on these guardrails, but the group’s mandate lasts only until June 2024. That short timeframe means the Expert Group will probably need to build the guardrails from an existing blueprint, rather than trying to develop regulation from scratch.

Photo by Towfiqu barbhuiya on Unsplash

The government gave fairly extensive consideration to Canada’s Directive in its discussion paper on Safe and Responsible AI in Australia last year, which makes the Directive a strong contender. Its simplicity and clarity make it an appealing blueprint, as does its ‘risk-based’ approach that matches compliance burdens to the level of risk. It also fits the bill as a set of ‘guardrails’, although it does not use that term. So we’re going to write here about what ‘guardrails’ means (it’s not self-evident), and what the Canadian Directive reveals about the strengths and limitations of a ‘guardrails’ approach to AI regulation.

We’ll use the term ‘AI’ loosely to include data-driven and automated decision-making. Throughout its consultation process on AI regulation, the government has indicated that regulation of AI and ADM will be closely intertwined. Rather than getting hung up on definitions, and the fine distinctions between AI and ADM, we’ll work from the assumption that there is a lot of overlap between the two. Although the Directive applies narrowly to government use of ADM systems, and does not directly regulate any other actor in the supply chain, it is nonetheless an interesting blueprint for pithy and neatly communicated ‘guardrails’. Its lessons are likely to be relevant across the supply chain of a wide range of applications of AI and ADM, and especially to government use of AI and ADM.

What are guardrails?

Guardrails could mean different things. In some quarters (among AI developers and data scientists), ‘guardrails’ means technical controls built into the development process and training of an AI system. But the Australian government seems to have something broader in mind: frameworks, practices, processes and legal requirements that direct development and deployment, going beyond technical controls. One example might be the NSW AI Assurance Framework, a standardised process of impact assessment and documentation (and in some cases oversight) for the development and deployment of AI by the NSW government. ‘Guardrails’ evokes the image of a barrier along the edge of a road or path to stop serious falls or collisions; or perhaps of the railings at a bowling alley that beginners can use to stop balls from rolling into the gutters. Guardrails direct, protect and enable. They don’t convey the sense of a ban or prohibition, in the way that a metaphor like ‘red lines’ does.

Photo by Naomi August on Unsplash

Decoding the Directive: What is it?

The Directive fits the bill as a set of ‘mandatory guardrails’. The Directive is not legislation, but it does have binding effects. It is a ‘mandatory policy instrument’ that applies to automated or partly automated decision-making by most federal government institutions in Canada. It establishes processes to guide AI development and deployment, without drawing red lines to prohibit unacceptable outcomes or uses of AI. It takes a ‘risk-based’ approach, with a set of requirements that apply in a graduated way, depending on the level of risk posed by an automated decision-making system.

The Directive classifies risks in four ‘levels’ from lowest to highest:

● Level I decisions have little to no impact, with impacts that are reversible and brief.

● Level II decisions have moderate impacts, with impacts that are likely reversible and short-term.

● Level III decisions have high impacts, and often lead to impacts that can be difficult to reverse and are ongoing.

● Level IV decisions will likely have very high impacts, and often lead to impacts that are irreversible and perpetual.

Government institutions conduct an impact assessment, which serves both to document design and planning decisions and to help work out which risk level a system falls under. The results of these assessments are public: very different from the results of applying the NSW AI Assurance Framework.
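To make the graduated, risk-based structure concrete, here is a minimal sketch in Python of the basic logic: score an impact assessment, assign an impact level, then look up the obligations attached to that level. The score thresholds, requirement names and example score are illustrative assumptions of ours, not the Directive’s actual scoring rules or requirement tables.

```python
# Illustrative sketch only: the thresholds and requirement names below are
# hypothetical, loosely modelled on the Directive's graduated, risk-based
# structure; they are not taken from the Directive or its impact-assessment tool.

from enum import IntEnum


class ImpactLevel(IntEnum):
    """The Directive's four impact levels, lowest to highest."""
    LEVEL_I = 1
    LEVEL_II = 2
    LEVEL_III = 3
    LEVEL_IV = 4


def classify_impact(score_pct: float) -> ImpactLevel:
    """Map an impact-assessment score (as a percentage of the maximum
    possible score) to an impact level. Cut-offs here are illustrative."""
    if score_pct <= 25:
        return ImpactLevel.LEVEL_I
    if score_pct <= 50:
        return ImpactLevel.LEVEL_II
    if score_pct <= 75:
        return ImpactLevel.LEVEL_III
    return ImpactLevel.LEVEL_IV


# Hypothetical examples of obligations scaling with risk, in the spirit of the
# Directive's appendices (which set out the real requirements in tables).
REQUIREMENTS_BY_LEVEL = {
    ImpactLevel.LEVEL_I: ["plain-language notice of automation"],
    ImpactLevel.LEVEL_II: ["plain-language notice of automation",
                           "meaningful explanation of decisions"],
    ImpactLevel.LEVEL_III: ["plain-language notice of automation",
                            "meaningful explanation of decisions",
                            "peer review",
                            "human intervention in decisions"],
    ImpactLevel.LEVEL_IV: ["plain-language notice of automation",
                           "meaningful explanation of decisions",
                           "peer review",
                           "human intervention in decisions",
                           "senior executive sign-off"],
}


if __name__ == "__main__":
    level = classify_impact(62.0)          # e.g. a benefits-eligibility system
    print(level.name)                      # LEVEL_III
    print(REQUIREMENTS_BY_LEVEL[level])    # graduated obligations for that level
```

The design point this illustrates is that each step up in impact level adds obligations on top of the previous level’s, rather than switching to an entirely separate regime.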

One of the most appealing things about the Directive is that it is short, simple and very clear. It’s a pithy 15 pages (double spaced) — a fraction of the length of Europe’s AI Act, which runs to over 450 pages. It deals only with government use of ADM, and is not a ‘horizontal’ regulation for AI use generally, so it doesn’t have to cover so much ground. Still, the difference in style, and commitment to brevity, is stark. The Directive’s main substance is set out in plain English in two key provisions, supplemented by appendices at the end of the document. Clause 6 covers requirements for government automated decision-making. Clause 7 deals with consequences for non-compliance. The appendices, which are the best thing about the Directive, clarify (with tables!) how each requirement applies, depending on the level of risk posed by a system. The rest is made up of legal and administrative necessaries, done about as painlessly as possible. Junior lawyers could study the Directive as an example of elegant legal drafting.

So how does it work?

Let’s have a look at some of the ‘guardrails’ in the Directive, how they might work, and what their limitations might be. The Assistant Deputy Minister of the relevant government department, or their delegate, is responsible for meeting the requirements. Failures to meet requirements may have meaningful consequences for the responsible person. We’ll illustrate by imagining how the application of these requirements might have impacted the Robodebt system. Taking a leaf out of the Directive’s book, we’ll try to do it all in a table!

The Directive in practice.

Navigating boundaries: ‘guardrails’ vs ‘red lines’

The Directive, in steering government institutions towards fairer, more transparent and more accountable use of AI, certainly meets the ‘guardrails’ brief. The Directive’s requirements, as indicated in the analysis above, work together to guide, rather than prohibit, what can and can’t be done with AI. The Directive and its process-based approach have much to recommend them. The requirements are simple, and the Directive is clear and user-friendly. Though our legal systems are different, Canada and Australia share a common-law heritage, and there are many similarities in the structure of our government and public service. Australian government agencies don’t have a practice of issuing binding Directives of this kind, but a policy, a guideline or a regulation might achieve a similar effect.

A key point of attraction is that the Directive starts with the use of AI by government: an area where regulation is likely to be least controversial. Few would disagree, after Robodebt, that government use of AI and ADM must be more effectively managed and regulated. And, since the Directive only deals with government, it bypasses the need for a drawn-out legislative process (that may be one reason why it is such a neat and tidy document). It is telling that Canada’s draft bill on AI regulation for the private sector has faced considerable delays, and has not yet been able to pass through parliament, leaving Canada’s government to issue voluntary codes on private sector use of AI in the interim.

The other advantage of starting with rulemaking for government AI (rather than AI in general) is that it avoids difficult policy questions about balancing ‘AI safety’ against innovation. The Directive doesn’t risk chilling investment in AI because it is solely focused on government agencies. An approach based on the Directive is therefore likely to avoid pushback from business and private users of AI about restricting innovation due to ‘red tape’… at least for now.

In the meantime, the ‘guardrails’ for government could operate as a regulatory sandbox, permitting the government to learn from experience before regulating more broadly. Indeed, the Canadian government has reviewed and amended the Directive three times in three years, which shows the agility of using a policy instrument, rather than legislation, to test and iterate AI regulation. (It also showcases the Canadian government’s commitment to keeping regulation in step with technological developments — something that Australia would do well to emulate. The most recent public review of the Directive added the peer review mechanism described above, and references to generative AI accompanied by a guidance document.)

Photo by Roger Bradshaw on Unsplash

The question the Australian government, and perhaps the Expert Group on AI, will need to answer, though, is whether ‘guardrails’ are enough. Most of the Directive’s requirements rely on disclosure, explanation, oversight, testing and other similar mechanisms to achieve goals such as fairness, accountability and transparency. There is a venerable tradition of process-based regulation, and the kinds of processes contemplated by the Directive are likely to create strong pressure to develop and deploy AI and automated decision-making responsibly. It is easy to imagine the Directive having a meaningful effect by creating a series of overlapping nudges.

And yet, the Robodebt Royal Commission Report showed that the government continued to pursue Robodebt, long after it was apparent that it was unfair and probably illegal. Despite the cliché, sunlight is not necessarily the best disinfectant, and nudges might not change the course of a government agency that is heavily committed to its path and motivated by cost-savings.

There is another issue. Aren’t there some uses of AI (no matter how transparent, how well-overseen, how robustly tested) that we might not, as a nation, want to accept? If the object or outcome of a system is fundamentally harmful and unfair, knowing that the system was developed transparently and with good data is cold comfort. Or, to return to our ten-pin bowling metaphor: it is all very well to put up ‘guardrails’ to stop gutterballs, but maybe there are some balls that should not be launched down the alley in the first place. We have to decide whether to rely on process-based regulation, or to incorporate an approach more akin to ‘product-based’ regulation, with rules about the nature of the final artefact and not only the processes used to develop it.

This is where we confront the limits of a metaphor like guardrails. Is there enough flex in the idea of mandatory ‘guardrails’ to include a concept of prohibited uses of AI? Or do we need to bring ‘red lines’ into the conversation as well? Europe’s AI Act prohibits certain applications of AI, including various forms of real-time biometric identification, overbroad social scoring, and certain kinds of manipulation. It also imposes a positive risk management requirement for ‘high-risk’ AI systems: providers are not permitted to put these systems into use until risks have been evaluated and reduced to an acceptable level. In other words, it imposes product-based requirements on top of process-based ones.

Of course, deciding which risks from AI are acceptable, and which are not, is incredibly challenging. It is the kind of exercise that engages deep policy questions about rights, safety, efficiency, public interests, innovation, social justice and a whole range of other issues. Until those conversations run their course, the Directive is a great blueprint for a first step in AI regulation. Whatever its limitations, a simple, clear, process-based set of requirements for one of the highest-risk uses of AI (government decision-making) would be a huge win for safe and responsible AI in Australia. But it should be the beginning, and not the end, of Australia’s journey toward effective AI governance.
