Social Failure & 21st Century Design

Applied ethics crucial to realizing the benefit of new innovations

Mairead Matthews
Jan 28, 2020

Canada Research Chair Jason Millar is an engineer and philosopher who studies social and ethical issues related to new innovations in technology. Below is an overview and discussion of some of his most recent work.

Following its product launch in 2013, Google Glass saw two years of poor sales before being officially shelved in 2015. Alongside other social and ethical considerations, critics were concerned about personal privacy — most notably, that Google Glass gave users the ability to seamlessly record private conversations and interactions with others, as well as the ability to employ facial recognition software.

In 1979, St. George’s Hospital Medical School designed a new computer program to screen medical school applicants. By 1988, St. George’s had been found guilty of racial and gender discrimination in its admissions process: based on historical data, sourced from a time when the school had openly discriminated against certain groups of applicants, the computer program had inadvertently been designed to reproduce discriminatory human biases.
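The mechanism at work in the St. George’s case can be illustrated with a toy sketch (the data, groups, and decision rule below are entirely hypothetical, not a reconstruction of the actual program): a screening rule “learned” from historically biased decisions will faithfully reproduce those biases.

```python
from collections import Counter

# Hypothetical historical admissions records: (score, group, past decision).
# Group "B" applicants were historically rejected regardless of score,
# so the record encodes discrimination rather than merit.
history = [
    (80, "A", "admit"), (75, "A", "admit"), (60, "A", "reject"),
    (80, "B", "reject"), (75, "B", "reject"), (60, "B", "reject"),
]

def train(history):
    """'Learn' a screening rule by majority vote per (high-score, group) bucket."""
    votes = {}
    for score, group, decision in history:
        key = (score >= 70, group)
        votes.setdefault(key, Counter())[decision] += 1
    return {key: counts.most_common(1)[0][0] for key, counts in votes.items()}

rule = train(history)

# Two equally qualified applicants receive different outcomes,
# because the rule has absorbed the bias in its training data.
print(rule[(True, "A")])  # admit
print(rule[(True, "B")])  # reject
```

No one wrote a discriminatory rule by hand; the disparity falls out of the historical data alone, which is precisely how such failures go unnoticed.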

March 2018 marked one of the most high-profile crashes involving an autonomous vehicle to date, when an Uber test vehicle struck and killed a pedestrian in Arizona. In November 2019, the US National Transportation Safety Board (NTSB) found that the collision had resulted from a series of decisions by Uber ATG, an organization which, according to the NTSB, had failed to make clear the abilities and limitations of its vehicles. Federal regulators have since been called upon to establish a formal review process before allowing companies to test automated vehicles on public roads.

In each of the cases above, individuals responsible for the design and deployment of new, innovative technologies failed to consider the full spectrum of social and ethical implications, including but not limited to justice, bias, fairness, interpretability, explainability, control, power, gender, privacy, discrimination, truth, and equality (Millar, 2019).

St. George’s Hospital Medical School failed to consider the ethical implications of using biased historical data in their admissions process; Uber ATG failed to establish clear lines of responsibility and accountability before testing near-driverless cars; and Google failed to consider personal privacy in designing Google Glass.

With both an engineering and ethics background, Canada Research Chair Jason Millar is uniquely positioned to perform cutting-edge research in this area. Studying the various ways designers and engineers tend to overlook the ethical and social considerations of their work, Millar has found ethical and social analysis crucial to realizing the benefit of many new innovations like machine learning algorithms, driverless cars, and robots.

Baked into the practice of engineering is an in-depth understanding of the various ways materials and mechanical systems in technology fail: corrosion, erosion, fatigue, and overload, just to name a few. In engineering, these breakdowns are referred to as failure modes, generally classified as either material or mechanical in nature. From this body of knowledge, engineers have been able to develop an effective list of tools, codes, standards, risk assessments, and other best practices aimed at preventing future material or mechanical failures in engineering and design.

Alarmingly, Millar has found existing approaches to ethical analysis to be somewhat out of step with new and emerging risks. That is, unlike with material and mechanical failure, there are no universally accepted tools, codes, standards, or risk assessments aimed at preventing social and ethical problems related to AI, automation, and autonomous robots (though there have been ample efforts to establish a common set of principles to guide decision making around autonomous and intelligent systems). In response, Millar has developed a thoughtful set of tools and techniques for engineers and designers to incorporate into their daily practice, three of which are explained below.

At the core of his research, Millar argues that in addition to being able to fail materially or mechanically, new technologies may also fail socially: social failure occurs when an artefact’s design conflicts with the accepted social norms of its users or environment to the extent that its intended use is prevented or diminished (Millar, 2019). In other words, products and tools may be designed in such a way that they transgress fundamental social norms and ethical expectations, ultimately causing their benefits to go unrealized. In line with this argument, Millar has begun compiling a list of common social failure modes for engineers and designers to use in creating tools, codes, standards, and risk assessments.

In hopes of establishing a practical way to conduct ethical analysis in engineering and design, Millar and his team at the University of Ottawa’s Canadian Robotics and Artificial Intelligence Ethical Design Lab are developing worksheets for designers and engineers to use in their daily practice. These worksheets are intended to guide engineers and designers through a process Millar calls value exploration. This process first seeks to identify the full range of stakeholders involved in the development of a given technology, along with their respective values, and then to explore any existing value tensions that may need to be addressed during the engineering and design process.

One common example of value tension occurs in the context of automated decision-making systems. While some stakeholders may value transparency and the ability to understand how the algorithms behind automated decision-making systems work, others may value intellectual property rights and the ability to keep valuable, proprietary information private. In this context, value maps and other kinds of worksheets may assist designers and engineers in identifying the right amount of transparency and IP protection needed for their products.
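To make the idea of value exploration concrete, here is a minimal sketch of a value map as a data structure (the stakeholders, values, and conflict pairs are illustrative assumptions, not drawn from Millar’s actual worksheets): stakeholders declare the values they hold, and the map surfaces pairs of stakeholders whose declared values are in known tension.

```python
# Hypothetical value map for an automated decision system.
stakeholder_values = {
    "affected individuals": {"transparency", "fairness", "privacy"},
    "vendor": {"intellectual property", "reliability"},
    "regulator": {"transparency", "accountability"},
}

# Pairs of values known to pull in opposite directions.
known_tensions = {frozenset({"transparency", "intellectual property"})}

def find_tensions(values, tensions):
    """List stakeholder pairs whose declared values are in known tension."""
    found = []
    names = list(values)
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            for va in values[a]:
                for vb in values[b]:
                    if frozenset({va, vb}) in tensions:
                        found.append((a, b, va, vb))
    return found

for a, b, va, vb in find_tensions(stakeholder_values, known_tensions):
    print(f"{a} ({va}) vs {b} ({vb})")
```

Even this toy version shows the point of the exercise: the transparency-versus-IP tension appears twice (vendor against both affected individuals and the regulator), and each surfaced pair is a design decision that must be resolved before deployment rather than after.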

Other tools developed by Millar are much more specific to their intended applications. For example, Millar developed a tool to evaluate automated ethical decision making in autonomous robots, such as autonomous vehicles, virtual assistants, or social robots. Millar sought to develop a tool that was user-centred and proportional in its approach, that acknowledged and accepted the psychology of user-robot relationships, that helped designers satisfy the principles contained in the human-robotics interaction (HRI) Code of Ethics, and that helped designers distinguish between acceptable and unacceptable design features (Millar, 2016). The result was an ethics evaluation tool for engineers, designers, and policymakers to use when evaluating automated, ethical decision-making systems.

In 2019, the Government of Canada developed its own tool, the Algorithmic Impact Assessment: a series of questions designed to help public service employees assess and mitigate the risks associated with deploying an automated decision system. Interestingly, Canada was the first country in the world to develop this kind of procedure.

As new technologies and new applications for existing technologies emerge over the coming years, it will be vital to continue to develop and perfect practical tools for ethical and social analysis in engineering and design.

Works Cited

Government of Canada. Algorithmic Impact Assessment. 2019.

Levin, Sam and Wong, Julia Carrie. Self-driving Uber kills Arizona woman in first fatal crash involving pedestrian. March 2018. The Guardian.

Lowry, Stella and Macpherson, Gordon. A blot on the profession. 1988. British Medical Journal.

Millar, Jason. An Ethics Evaluation Tool for Automating Ethical Decision-Making in Robots and Self-Driving Cars. 2016. Applied Artificial Intelligence — Vol. 30 Issue 8, p787–809.

Millar, Jason. Biography. University of Ottawa.

Millar, Jason. Data is people! Ethics Capacity-Building to Overcome Data-Agnosticism in AI. 2019. University of Ottawa.

Millar, Jason. Social Failure Modes in Technology — Implications for AI. March 2019. Centre for Ethics.

National Transportation Safety Board Office of Public Affairs. ‘Inadequate Safety Culture’ Contributed to Uber Automated Test Vehicle Crash — NTSB Calls for Federal Review Process for Automated Vehicle Testing on Public Roads. November 2019. US National Transportation Safety Board.

Naughton, John. The rebirth of Google Glass shows the merit of failure. July 2017. The Guardian.

Originally published at on January 28, 2020.

Digital Think Tank by ICTC

A future-focused, non-profit think tank for the digital economy.

Mairead Matthews

Written by

Mairead Matthews is a Research and Policy Analyst at the Information and Communications Technology Council of Canada.

The Digital Think Tank by ICTC is the research and policy arm of the Information and Communications Technology Council (ICTC).

