Overview | Responsible Innovation in Canada and Beyond
To read the full study, please see the main document.
This report examines the rapidly changing world of technology ethics and features recommendations for improving the social impacts of technology across industry, academia, government, and society.
The study includes:
- Case studies — successes and failures of efforts to improve social impacts (including a timeline of AI regulation in Canada and labour market implications of automation)
- Global efforts toward common principles for improving social impacts of technology
- Best practices for improving the social impact of technology from the perspective of designers, developers, policymakers, educators, users, and the general public
Myriad stakeholders shape the ethical landscape of technology. Designers, developers, policymakers, and investors have a direct impact on the technologies they commercialize. End-users also make small daily decisions, intentionally or unintentionally, about the ethical and safe use of technology — whether to enable location services on a new app, whether to set up two-factor authentication, whether to choose a carbon-free transportation alternative, etc.
Ethical technology in Canada currently lacks a holistic assessment that incorporates input from the general public, regulatory bodies, and private-sector participants across different technology domains.
This paper draws on in-depth interviews with individuals and organizations working at the intersection of technology and social outcomes, and synthesizes shared considerations, challenges, frameworks, and best practices for improving the social impact of technology.
Cited Reasons for Social Impacts Work
- Mitigating potential unintended consequences of new technology solutions
- Understanding new and possibly dangerous intersections between technology and human behaviour
- Avoiding entrenching existing inequalities and working to undo inequalities for the future
- Keeping pace with technological change
- Creating shared value and shared-equity solutions
- Diversifying technology-related decision-making and development
Global Efforts to Improve Social Impacts
Organizations around the world have proposed common principles for improving the social impact of technology. Several overarching frameworks (such as the European Union’s mandate for Responsible Research and Innovation) have established core principles for ethical technology development:
Anticipation — Consider the potential adverse effects a technology could have at all stages of its life (design, manufacture, distribution — including business-to-business distribution — use, and re-use) in an up-front assessment.
Inclusion and Diversity — Diverse voices in all stages of a technology’s lifecycle (investment, hiring, design, prototyping, assessment, deployment, and use) will improve the social outcomes of that technology.
Justice and Fairness — To ensure the success of diversity and inclusion, seek to mitigate issues that have disproportionately impacted or silenced particular groups.
Interdisciplinarity and Collaboration — Interdisciplinary collaboration improves technology’s outcomes. Include engineering, computer sciences, law, social sciences, and policy in the design and implementation of technology. Partnerships between sectors can improve regulation and deployment. International collaboration can facilitate effective regulatory responses to technologies.
Self-Awareness and Reflexivity — Consider one’s own position, perspective, and background, and how they affect decision-making. Listen carefully during consultations and anticipate others’ needs and perspectives.
Agency and Choice — “Opt-in” technology solutions that employ meaningful informed consent can provide individuals greater agency over the technologies they use.
Bringing Ethics into the Innovation Lifecycle
Pundits argue that widening the innovation lifecycle when creating a new technology could improve its social impacts. This means extending the concept of innovation beyond design, prototyping, and assessment.
Qualitative, Holistic, and Participatory Assessments
Technology Assessment (TA) explores the relationship between science, technology, and society. It brings together researchers from different disciplines such as business, economics, sociology, or biology.
TA has a significant presence in Europe, but North American interviewees also lead various TA activities, such as “Participatory TA,” a consultation method that involves a wide variety of stakeholders.
Certifications, Standards, and Quantifiable Assessments
Numerous “top-down” assessments include the following:
- Canadian Algorithmic Impact Assessment tool
- National Standard of Canada and CIO Strategy Council’s “Ethical design and use of automated decision systems”
- Life Cycle Assessment (ISO 14040)
Certifications, standards, and assessments provide important checklists for companies and regulators to ensure that their products are meeting social and environmental guidelines.
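To make the idea of a quantifiable assessment concrete, the sketch below shows how a questionnaire-based tool in the spirit of the Canadian Algorithmic Impact Assessment might map weighted answers to an impact level. The questions, weights, and thresholds here are purely illustrative assumptions, not the real tool’s content.

```python
# Hypothetical sketch of a questionnaire-style quantifiable assessment.
# The question ids, weights, and level thresholds are illustrative only;
# they are NOT taken from the actual Algorithmic Impact Assessment tool.

RISK_THRESHOLDS = [(0.25, "Level I"), (0.50, "Level II"),
                   (0.75, "Level III"), (1.00, "Level IV")]

def risk_level(answers: dict[str, int], weights: dict[str, int]) -> str:
    """Return an impact level from weighted questionnaire answers.

    `answers` maps a question id to the points scored on that question;
    `weights` maps the same ids to the maximum points available.
    """
    scored = sum(answers[q] for q in weights)
    maximum = sum(weights.values())
    ratio = scored / maximum
    for cutoff, level in RISK_THRESHOLDS:
        if ratio <= cutoff:
            return level
    return "Level IV"

# Example: a system with moderate rights impact and human oversight.
answers = {"affects_rights": 3, "automation_degree": 1, "data_sensitivity": 2}
weights = {"affects_rights": 4, "automation_degree": 4, "data_sensitivity": 4}
print(risk_level(answers, weights))  # → Level II
```

In real tools of this kind, the resulting level typically determines which mitigation requirements apply (e.g., peer review or human-in-the-loop oversight at higher levels).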
Pragmatic Ways to Improve the Social Impacts of Technology
The innovation lifecycle includes a wide variety of stakeholders:
- Regulators, investors, innovators, adopters, consumers, etc.
- Specific groups and individuals not directly involved in the technology’s production or adoption, such as academics, consultants, not-for-profits, etc.
- An innovation process that implicates land rights or privacy rights brings these rightsholders into the picture
When asked which stakeholders are responsible for the social impacts of tech, the vast majority of study interviewees resoundingly stated, “Everybody.”
However, complex networks of actors and the limitations of cross-border enforcement make a widely shared responsibility for technology’s social impacts challenging.
Therefore, it is important to identify tools and best practices that can ensure transparency and accountability through actionable steps.
Turning Responsibility into Action
Agenda-setting — Ethical technology is seeing increasing engagement by activists and implicated communities, including cyberactivism, consumer activism, lobbying, technology collectives, and ethical tech-focused hackathons
- Different types of collective pressure, both from dedicated activists and from the broader public, have had success in enacting reforms against the negative social impacts of technology
- An example is the progress in removing terrorist content from social media, especially after the Christchurch shootings in New Zealand
Digital technologies can result in negative social impacts, but these technologies are tools that can also be used for collective efforts to share information and coordinate organization for community causes.
Public engagement — Market researchers, regulators, designers, and other parties can engage the public to help design and assess a product and disseminate information about its safe and ethical use. Best practices include:
- Early/upstream engagement, establishing clear goals, including diverse and underrepresented voices, careful facilitation method selection, iteration, and a willingness to change a project based on feedback
Interviewees working to improve the social impacts of technology frequently referred to consultation and “upstream engagement” as essential components of anticipating and mitigating the negative social impacts of technology.
Policy and regulation — Either government-led or market-led measures to improve technology’s social impacts.
- Decisive, clear, and enforceable government-led regulation is critical
- Market-led responses need to guard against “ethics-washing”
Interviewees noted that government responses to technology are more reliable than market-led responses in times of resource constraint or crisis.
While innovators can have many competing priorities, mandatory policy and regulation can help clarify what is legally required, and what is a “nice to have.”
In 2018, the federal ethics committee tabled its report on Canada’s private sector privacy law, highlighting important issues related to AI ethics, including the implementation of federal government measures to improve algorithmic transparency in Canada.
In Canada, as in many jurisdictions around the world, policy formulation broadly follows a general, cyclical process spanning five stages, and policies can take considerable time before coming into effect.
Education and training — Proactive education and training is needed in the following categories:
- General public (cyber hygiene)
- Students (ethics in engineering)
- Industry (inclusion and diversity training)
- Government (familiarity with automated decision-making)
Research organizations also play a role by discussing ethical technology in conferences, journals, and grants.
There are indications that governments are increasingly well-prepared and informed in their discussions of technology-related issues (in contrast to previous efforts). Advisory committee members recognized the challenge of keeping the public sector up-to-date on the nuances of emerging technologies and recommended that governments maintain networks of subject-matter experts from a variety of sectors to help inform regulatory efforts.
Technology solutions for good — Privacy-enhancing technologies (PETs), open-source tools, and carbon-sequestering technologies all seek to solve existing problems while upholding common ethical principles, including consumer agency and choice.
The unprecedented impact of the COVID-19 pandemic has heightened Canada’s awareness of technology-related challenges, along with the importance of creating a robust, resilient, and just system for technology.
In the same way that policy and regulation can “raise the floor” of innovation by creating universal standards for innovators to comply with, companies, organizations, and individuals can “raise the ceiling” of innovation with new technology that addresses existing problems or enhances quality of life.