Ethical Innovation: A summary of a Whiteboard@SonicRim session
A group of twenty professionals from the fields of design and social sciences gathered at SonicRim’s San Francisco studio recently for an open discussion session about the need for tech companies to establish an “ethical foresight” practice to be mindful of the social impact and consequences of their innovations.
As technology spreads into the most intimate aspects of our lives, overlooking the possible consequences of our innovations can lead to dire social impact. Disruptive innovations can cause harm at mass scale, and quickly; our capacity to do harm can outstrip our capacity to mitigate it. Establishing an “ethical foresight” practice has become an imperative.
The group shared a belief that it is more prudent to avoid making poor ethical decisions than it is to attempt to fix them after the damage is done. We discussed some of the challenges involved.
Cultural challenges relate to beliefs and norms in the organization. In general, organizational culture rewards cooperation and compliance, and frowns upon constructive skepticism. Raising ethical red flags is often seen as irrelevant to the concerns of the team and the interests of the organization, namely the bottom line and the gratification of consumer desires. Consequently, people who bring up ethical considerations may be seen as making it harder for others to be efficient and productive.
Incentive structures conflict with ethical considerations. Foremost are pressures related to shipping products, which leave little time and energy for considering ethics. Organizations tend to reward work that appears to produce outputs efficiently; there are thus incentives not to question the decision-making process or to delay action. As a result, employees face a dilemma between thriving in the organization and acting as ethical watchdogs.
Finally, organizations and people underestimate their ethical responsibility.
To the extent that ethics is considered a job responsibility, it often refers to well-bounded and well-known ethical violations. We might consider these baseline ethics: not lying, not taking bribes, not acting for personal gain at the expense of others, and so on. Even in organizations with an explicitly articulated principle to do no harm, “harm” can be left undefined. The primary way the definition of harm expands appears to be the discovery of negative consequences after the fact. There is little foresight involved; few, if any, organizations actively explore potential negative consequences.
While many academics explore these issues [examples here, here, and here], their proposed fixes are not always appropriate to the contexts of practitioners in industry. Unless academia and industry cooperate explicitly, this is unlikely to improve.
Ethical violations have greater consequences in networked environments. Networks can spread harm to a much larger group of people. Organizations that provide networking platforms face the challenge of knowing how far and how fast the impact of their actions spreads, which makes it harder for them to ensure ethical behavior by people they don’t employ (their customers and users).
How can organizations begin to tackle these challenges? The group discussed a variety of practices and tactics, used or observed in use, for supporting ethics considerations:
- Educate people on what “ethical” means by seeking out and telling stories of the impacts of technologies
- Use peer pressure dynamics to normalize ethics discussions
- Prototype failure: tell imagined horror stories of things going wrong
- Hire more kinds of people
- Make it part of the mission to do more good, not just to avoid doing evil
- Recognize that figuring out ethical courses of action is difficult, and set expectations appropriately on how quickly ethical issues are resolved
- Build ethics into contracts: make doing the right thing (clearly defined) legally binding
- Applaud saying “no” for ethics reasons; incentivize and measure this behavior
- Create measures for ethical risks (whose consequences appear later) to supplement ROI measures (which apply to the near future)
- Create standards and metrics for profiling an organization’s commitment to ethical innovation
- Create an organizational ethics board that performs reviews and gives guidance
Domain of responsibility
The group discussed some ideas for expanding our sense of responsibility by expanding who we consider to be stakeholders. In addition to users and customers, whose well-being should we care about?
- Marginal populations
- People in other countries
- People in relationships with our customers and users
- Future humans
- Non-human beings
By bringing these kinds of people into view during our innovation processes, we have a chance to consider how our actions might affect them.
While our short discussion barely scratched the surface of these issues, it has sparked interest in continuing the conversation.
As next steps, SonicRim plans to create tools that help individuals and groups grapple with these issues; promising ideas include consequence-mapping canvases, metrics, and creative, structured imagination exercises such as failure prototyping.
Send us a note if you’re interested in partnering with us, and thank you for your attention.