Regulating AI in Canada

ICTC’s Submission to the Office of the Privacy Commissioner of Canada’s Public Consultation

Mairead Matthews
ICTC-CTIC
Jun 9, 2020 · 12 min read


Commercial organizations across Canada are introducing artificial intelligence to replace and/or supplement human decision-making and analysis, which can require vast amounts of personal information in order to return promising results. Read on to join the conversation on establishing meaningful control over the personal information used in this space.

Saffron Blaze, via Wikimedia Commons

From January to March 2020, the Office of the Privacy Commissioner of Canada (OPC) held a public consultation on the topic of artificial intelligence (AI) regulation. As an Officer of Parliament, the OPC oversees and enforces privacy law at the federal level, including both the Personal Information Protection and Electronic Documents Act (PIPEDA) and the Privacy Act. This consultation, however, focused solely on AI in the context of PIPEDA, Canada’s federal private sector privacy law. One practical aspect of the OPC’s work is to provide advice and recommendations to Parliament regarding legislative reform, or in other words, potential future changes to Canadian law. To guide this advice, the OPC sought public comments on eleven legislative reform proposals, nine of which ICTC provided feedback on. This blog summarizes ICTC’s responses to some of the OPC’s questions; ICTC’s entire submission can be found here.

In case you’re wondering what AI means in this context, ICTC defines AI as a multi-disciplinary subject, involving methodologies and techniques from various fundamental disciplines such as mathematics, engineering, natural science, computer science, and linguistics. Over the last few decades, AI has evolved into a number of technological sub-fields such as planning, natural language processing, speech processing, machine learning, vision recognition, neural networks, and robotics. Commercial organizations throughout Canada’s vertical industries are introducing AI to replace and/or supplement human decision making and analysis, which can require vast amounts of personal information in order to return promising results. For these reasons, AI is profoundly impacting the way we use personal information, both in terms of our policies and practices, and the types of activities we use personal information for.

Nonetheless, it is important to remain prudent in our approach to regulating AI. Overregulation could have serious ramifications for innovation in Canada, limiting the potential benefits AI has to offer and hampering current efforts to establish Canada as an international leader in AI. At the same time, an inadequate regulatory response would leave individuals without the explicit tools and levers needed to protect themselves and their personal information in the context of AI. Importantly, PIPEDA might not be the right venue for all of this work. We must continue to explore other methods to ensure respect for the rule of law, human rights, diversity, and democratic values in the context of AI in Canada.

Should AI be governed by the same rules as other forms of processing, or should certain rules be limited to AI due to its specific risks to privacy and, consequently, other human rights? If the latter, how should we define AI?

As with the federal government’s Directive on Automated Decision-Making and the GDPR, one of PIPEDA’s greatest strengths is that it is technology neutral: it doesn’t regulate specific kinds of technology but instead specific activities related to technology (e.g., the collection, use, and disclosure of personal information). This technology-neutral approach has enabled PIPEDA to stay relevant and largely effective through numerous revolutionary changes in tech to date, and it will be equally important going forward. Explicitly defining AI in PIPEDA could instead render Canada’s federal privacy legislation technology-specific; new innovations could in turn render the explicit definition in PIPEDA outdated, impractical, or ineffective. For one, the definition could end up being too vague, creating confusion for businesses, increasing the regulatory burden associated with other types of data use, and restricting innovation. Similarly, it could end up being too specific, rendering the regulation less effective and creating loopholes in the law.

Critically, not all AI has the same type or degree of impact, even when personal information is involved. Consider, for example, AI that uses client information to assist financial advisors in giving financial advice, and AI that uses customer information to make TV programming or film recommendations on a streaming platform. These applications carry different levels of risk and sensitivity, both with respect to the kind of personal information involved and the potential impacts on individuals; arguably, they shouldn’t be subject to the same level of regulation. A broad legal definition of AI would treat these activities as equally impactful, whereas legal definitions for particularly sensitive or high-risk activities linked to AI would allow for more granular regulation. Explicitly defining AI in privacy law could therefore undermine the accuracy and effectiveness of the resulting regulation. Legislative reform should instead maintain a technology-neutral approach by regulating specific activities linked to AI that feature novel forms of data processing, such as profiling or autonomous decision making.

If Canada were to provide individuals with a right to explanation and increased transparency when they interact with, or are subject to, automated processing, what should that right to an explanation entail?

At the most basic level, commercial organizations should be obligated to inform individuals, and individuals should have the right to be informed, when their personal information is used for automated or semi-automated decision making or to train an algorithm. This would give individuals the basic means to effectively exercise complementary rights, such as the right to an explanation, in more impactful or riskier contexts. What counts as automated and semi-automated decision making? Existing regulations such as the GDPR define automated decision making as “a decision that is made following the processing of personal data that has been conducted solely by automatic means, where no humans are involved in the decision-making process.” Semi-automated processing goes by many names in different legal texts, but similarly refers to human decisions that are substantially informed by automated advice or profiling.

In addition to the right to be informed when subject to automated or semi-automated processing, individuals should also be granted the right to an explanation, particularly where decision outcomes may be impactful or where highly sensitive personal information is used. This right could be universal, or it could depend on other factors, such as the sensitivity of the personal information being used, the type of decision being made, or the severity of decision outcomes for the individual.

With respect to implementation, regulators should strive to establish clear, practical requirements for commercial organizations; in other words, requirements that are not vague, confusing, or impossible to fulfill. Critically, not all types of AI lend themselves well to clear explanation, meaning not all organizations will be able to explain exactly why, how, and according to what variables their AI models make decisions. That said, at the very least, commercial organizations should be required to explain key attributes of their decision-making systems: whether a human is involved in decision making and how; whether it is possible to determine why, how, and according to what variables decisions are made; and what the key characteristics of the training data are, including potential biases.
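To make the “according to what variables” piece more concrete, the short sketch below shows one generic way an organization could report which inputs a model relies on, using permutation importance on a toy dataset. The feature names, data, and model here are hypothetical placeholders rather than anything from ICTC’s submission; the point is simply that even when a model cannot be fully explained, the relative weight of its inputs can usually be summarized.

```python
# Illustrative only: one generic way to report which inputs a model relies on.
# The feature names, data, and model below are hypothetical placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
features = ["income", "tenure_months", "num_products"]  # hypothetical inputs
X = rng.normal(size=(500, len(features)))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=500) > 0).astype(int)

model = LogisticRegression().fit(X, y)

# Estimate how much each input contributes to the model's predictions.
result = permutation_importance(model, X, y, n_repeats=20, random_state=0)
for name, score in sorted(zip(features, result.importances_mean),
                          key=lambda pair: -pair[1]):
    print(f"{name}: mean importance {score:.3f}")
```

A summary like this would not satisfy a full right to explanation on its own, but it illustrates the kind of key-attribute disclosure that most organizations could produce.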

Would enhanced transparency measures significantly improve privacy protections, or would more traditional measures suffice, such as audits and other enforcement actions of regulators?

While applicable, traditional transparency measures (such as audits) may not provide enough clarity, emphasis, or focus to give new rights practical effect. New measures that more robustly address AI could improve privacy protections and explicit accountability; however, additional transparency measures may also create regulatory burden for commercial organizations and governments. All new measures should therefore be assessed both in terms of their potential benefits and their regulatory burden. Consider, for example, privacy impact assessments (PIAs), which are a structured way for businesses to identify and manage privacy risks associated with new projects.

Under current regulations, the responsibility to assess privacy impacts falls on individuals. This has proven over-burdensome and impractical in today’s complex privacy landscape: an individual may have tens, if not hundreds, of privacy policies to read on any given day, and as individuals become less able and less likely to read privacy policies before providing consent, their consent becomes increasingly less meaningful. PIAs would address this by shifting the onus to assess privacy impacts onto commercial organizations rather than individuals. As a matter of best practice, PIAs should be clear, easy to understand, and publicly accessible; universal standards governing the layout, accessibility, and dissemination of PIAs could make them more consistent, similar to nutrition labelling today. To reduce regulatory and compliance burdens, the requirement for a PIA could also depend on other factors, such as the associated impact and risk.

Should Privacy by Design be a legal requirement under PIPEDA?

Privacy by Design (PbD) was developed in the 1990s by Ann Cavoukian, former Information and Privacy Commissioner of Ontario, and has since been adapted and incorporated into the GDPR. At a high level, PbD aims to ensure that privacy considerations are proactive rather than reactive, and that enhanced privacy is always the default setting. A second goal is to ensure that end-to-end privacy protections are embedded in the design of IT systems and that there is no resulting trade-off between privacy and functionality. Finally, PbD requires privacy measures, risks, and other considerations to be visible and transparent to users, and user privacy to be a top priority. Accountability is also a fundamental component of PbD: businesses should not just comply with privacy principles but also be able to demonstrate their compliance.

Legally underpinning PbD in federal privacy law may push more companies to proactively consider privacy and security in data collection; however, some argue that this would not necessarily ensure proper data protection. Companies could successfully argue that time and effort were spent considering and integrating privacy protections into their product or service even in situations where they fail to protect privacy effectively. Critically, the core challenge lies in testing or evaluating the efforts or processes by which organizations implement PbD. Is it possible to evaluate the adequacy of such efforts when designers cannot possibly anticipate all potential problems? Would creating such bureaucratic standards or tests prevent smaller companies from participating in the digital economy? Would an approved certification mechanism be a feasible way to demonstrate compliance with PbD requirements? These are all important considerations in deciding whether to legally mandate PbD.

Would it be feasible or desirable to create an obligation for manufacturers to test AI products and procedures for privacy and human rights impacts as a precondition of access to the market?

It would be ill-advised to create an obligation for manufacturers to test AI products and procedures for privacy and human rights impacts as a precondition of access to the Canadian market. Such an obligation would add bureaucratic burden for government and businesses and could have negative impacts on Canadian markets. It also may not be necessary to promote the development of privacy-conscious products. Consider, for example, Article 25 of the GDPR, which mandates data protection by design but is distinctly limited in application to data controllers (and to some extent, data processors). Though not explicitly designed to apply to technology manufacturers, in practice Article 25 places an indirect obligation on them to assess the privacy and human rights impacts of their products: data controllers are legally responsible for how they choose to process data and may be more inclined to choose suppliers that enable them to fully comply with the law.

Can the legal principles of purpose specification and data minimization work in an AI context and be designed for at the outset? If yes, would doing so limit potential societal benefits to be gained from using AI?

The legal principles of “purpose specification” and “data minimization” are in direct conflict with the underlying goals and ideology of big data and AI. Requiring these principles without restriction in the context of AI would likely limit the potential societal benefits to be gained from using AI. That said, with the appropriate default standards and exceptions, there may be a way to satisfy both objectives: “data minimization” and “purpose specification” could serve as the default standard, with certain exceptions permitted (such as those based on consent or on alternative grounds for processing beyond consent). For example, an individual could consent to their personal information being stored and used for other purposes similar to the original use, or for other specific kinds of purposes, such as health or environmental research.

If a new law were to add grounds for processing beyond consent, with privacy protective conditions, should it require organizations to seek to obtain consent in the first place, including through innovative models, before turning to other grounds?

Obtaining meaningful consent must be the default standard before “consentless” alternatives are permitted. Such a law would need to articulate which regulatory requirements companies would have to meet before being able to process data without meaningful consent (for example, public filing, individual notice, or justification). Critically, cost constraints cannot be an adequate justification for bypassing consent, and organizations should have to prove that the collective benefit to society and the individual outweighs the need for meaningful consent. In terms of framing new grounds for processing beyond consent, the first step should be to assess whether protected personal information is necessary for the AI purposes at all. Synthetic data and differential privacy should be mandatory considerations before exploring the need to process protected personal information.

Is it fair to consumers to create a system where, through the consent model, they would share the burden of authorizing AI versus one where the law would accept that consent is often not practical and other forms of protection must be found?

“Consent versus no consent” is a false dichotomy, and there are other meaningful consent models that can be explored beyond direct individual consent. As a comparison, in personal finance, individuals can employ accountants to represent them in tax preparation and financial matters. A similar model could be applied to consent by creating a personal data protection agent role that works on behalf of individuals. To ensure satisfactory protection of personal data, however, these agents should be industry certified (similar to Chartered Professional Accountants), required to stay up to date with developments in data privacy and consent, able to provide individuals with sound advice regarding consent and meaningful consent, and able to act as a proxy for individuals to grant and revoke consent.

What could be the role of de-identification or other comparable state of the art techniques in achieving both legitimate commercial interests and protection of privacy?

De-identification and other privacy techniques, such as synthetic data, should be viewed through the lens of risk management. These techniques are valid, but they are elements of a larger risk management plan, not solutions unto themselves. A risk management plan includes detailed scenarios that identify the likelihood of re-identification for individuals and groups of individuals, as well as the probable level of harm if data is re-identified. Importantly, risk management plans need to be transparent and accessible so that individuals can properly assess the risk of re-identification and meaningfully object to processing by the underlying AI. Similar to the EU model, a reformed PIPEDA should incorporate the need for a Data Protection Officer, certified in the proper use and monitoring of emerging industry-standard de-identification techniques.
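As a purely illustrative example of where such techniques sit within a risk management plan, the sketch below applies the basic Laplace mechanism from differential privacy to a simple aggregate count. The records, query, and epsilon value are hypothetical assumptions; in practice, these parameters would be chosen as part of the broader re-identification risk assessment described above.

```python
# A minimal sketch of one privacy technique mentioned above: releasing an
# aggregate statistic with Laplace noise (the basic differential privacy
# mechanism). The records, query, and epsilon value are hypothetical.
import numpy as np

def dp_count(values, threshold, epsilon=1.0, rng=None):
    """Noisy count of records above `threshold` (query sensitivity = 1)."""
    rng = rng or np.random.default_rng()
    true_count = int(np.sum(np.asarray(values) > threshold))
    noise = rng.laplace(loc=0.0, scale=1.0 / epsilon)  # Laplace mechanism
    return true_count + noise

incomes = [42_000, 58_500, 61_200, 39_900, 75_000]  # toy records
print(dp_count(incomes, threshold=50_000, epsilon=0.5))
```

Smaller epsilon values add more noise and lower the re-identification risk at the cost of accuracy, which is exactly the kind of trade-off a transparent risk management plan should document.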

(If rules were established to allow for flexible use of information that has been rendered de-identified), which PIPEDA principles would be subject to exceptions or relaxation?

Any exceptions or relaxation of the rules must be dependent on the likelihood of re-identification and the significance of potential harm if re-identification occurs. For exceptions or relaxation to occur, both the likelihood of re-identification and the potential harm of re-identification should be low. Under a low-likelihood-and-harm scenario, reduced or eliminated fines for re-identification breaches could be considered, with response focused on mitigation and correction rather than financial penalties.

Is data traceability necessary, in an AI context, to ensure compliance with principles of data accuracy, transparency, access and correction, and accountability? Would enhanced measures (such as record-keeping, third-party audits, and proactive inspections by the OPC) be effective means to ensure demonstrable accountability on the part of organizations?

We should, at a minimum, require AI decisions to be traceable to specific instances (in other words, versions) of algorithmic models and training data. This information should be kept in an immutable database and should be detailed enough to enable past decisions to be recreated as part of an audit or review process. If robustly designed and subject to constant review, “record-keeping, third-party audits, and proactive inspections by the OPC” would provide a good foundation for enforcement. In addition, requiring commercial organizations to appoint a Data Protection Officer (or similar) would provide a clear locus for accountability.
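As a hedged illustration of what such traceability might look like, the sketch below records each automated decision alongside a model version and a hash of the training-data snapshot, and chains entries together so that tampering is detectable. The field names and hash-chaining scheme are assumptions for illustration only, not a design proposed in ICTC’s submission.

```python
# A sketch of append-only record-keeping that ties each automated decision to
# a specific model version and training-data snapshot so it can be audited or
# recreated later. Field names and the hash-chaining scheme are illustrative.
import hashlib
import json
from datetime import datetime, timezone

class DecisionLog:
    def __init__(self):
        self._entries = []          # append-only, in-process store
        self._prev_hash = "0" * 64  # genesis value for hash chaining

    def record(self, decision_id, model_version, training_data_hash, inputs, outcome):
        entry = {
            "decision_id": decision_id,
            "model_version": model_version,            # e.g. a registry tag
            "training_data_hash": training_data_hash,  # digest of the snapshot
            "inputs": inputs,
            "outcome": outcome,
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "prev_hash": self._prev_hash,
        }
        payload = json.dumps(entry, sort_keys=True).encode()
        entry["entry_hash"] = hashlib.sha256(payload).hexdigest()
        self._prev_hash = entry["entry_hash"]
        self._entries.append(entry)
        return entry

log = DecisionLog()
log.record(
    decision_id="loan-00042",
    model_version="credit-model-v1.3.0",
    training_data_hash=hashlib.sha256(b"training-set-2020-03").hexdigest(),
    inputs={"income": 58_500, "tenure_months": 14},
    outcome="declined",
)
```

In a production setting the log would live in a write-once or cryptographically verifiable store rather than in memory, but the core idea is the same: every decision is linked to the exact model and data that produced it.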

Do you agree that in order for AI to be implemented in respect of privacy and human rights, organizations need to be subject to enforceable penalties for non-compliance with the law?

Yes; however, the enforceable penalties must be significant enough to act as a deterrent against non-compliance with the law. The current rush to implement and deploy AI is driven by profit and funding objectives. Without meaningful and enforceable penalties, some organizations may willfully ignore compliance with the law and treat symbolic sanctions as a cost of doing business.

We hope you enjoyed this abridged version of ICTC’s submission to the OPC. To read the full submission, please visit the document hosted here.


Mairead Matthews is Manager of Digital Policy at the Information and Communications Technology Council of Canada.