An ICTC Brief
Harnessing the Benefits of AI While Reducing the Harms
Submitted in response to the Office of the Privacy Commissioner’s Proposals for ensuring appropriate regulation of artificial intelligence, March 2020
The Office of the Privacy Commissioner (OPC) is correct that PIPEDA falls short in its application to artificial intelligence (AI). Commercial organizations throughout Canada’s vertical industries are introducing AI to replace and/or supplement human decision-making and analysis. At the same time, AI requires vast amounts of personal information to perform well and return promising results. For these reasons, AI is profoundly impacting the way we use personal information, both in terms of our policies and practices, and the types of activities we use personal information for.
Nonetheless, it is important to remain prudent in our approach to regulating AI. Overregulation would have serious ramifications for innovation in Canada, limiting the potential benefits AI has to offer and hampering current efforts to establish Canada as an international leader in AI. Conversely, an inadequate regulatory response would leave individuals without the explicit tools and levers needed to protect themselves and their personal information in the context of AI.
At the very least, ICTC proposes that we must clearly establish the following rights and obligations:
- A requirement for proactive and responsible disclosure around the use of automated and semi-automated decision-making systems, so that individuals may be aware of and understand the implications associated with the intended use of their data.
- The right to be informed when subject to automated and semi-automated decision-making.
- The right to access commercial organizations’ policies and practices regarding the use of personal information in automated and semi-automated decision-making.
- The right to request and access a privacy impact assessment, and a parallel requirement for commercial organizations to conduct privacy impact assessments for certain kinds of automated and semi-automated decision-making systems.
- The right to access specific information about automated and semi-automated decision-making systems, such as: the degree of human involvement in decision-making, the degree of decision traceability, and key characteristics of the training data, including potential biases.
- The right to have personal information forgotten, also known as the right to erasure. This is particularly important given that AI may collect and use inaccurate data, or even create data about individuals based on erroneous or biased algorithms.
As PIPEDA may not be the right venue to conduct all of this work, we must continue to explore other methods to ensure respect for the rule of law, human rights, diversity, and democratic values in the context of AI in Canada.
Researched and written by Rob Davidson (Manager, Data Analysis and Research), Kiera Schuller (Research and Policy Analyst), and Mairead Matthews (Research and Policy Analyst).