Industry efforts aren’t about to disrupt tech’s ethics problem

Sara M. Watson
Berkman Klein Center Collection
Apr 26, 2019

Regrets? Big tech has a few.

Through frameworks like the Ethical Operating System (Ethical OS), the tech industry is now coming to terms with its social responsibility by outlining “how not to regret the things [they] will build.” Efforts like these are challenging tech’s overly optimistic, world-saving mentality, helping engineers imagine the unanticipated uses and unintended consequences of their products. But while toolkits sourced from industry may help mitigate future backlash, tech alone won’t solve the ethics crisis.

Funded by the Omidyar Network and developed with the Institute for the Future (IFTF), Ethical OS is an open-source resource. Targeted at technologists, engineers, product managers, CEOs, and even board members, the framework is designed as a conversation starter.

Using foresight analysis and future signals that point to risks and ethical dilemmas tech leaders are most worried about, Ethical OS identifies eight risk zones where “hard to anticipate and unwelcome consequences” are most likely. These include Addiction & The Dopamine Economy, Machine Ethics & Algorithmic Bias, Hateful & Criminal Actors, and the Surveillance State.

Risk Zones, Ethical OS

The first exercise walks through 14 near-future fictional scenarios informed by the risk zones and signals. These prompts are designed as a “warm up” to more imaginative thinking.

Picture a world where smart toilets subsidize free health insurance with waste data, delivery drones collect map and sensor data without the whole neighborhood’s permission, or colleges encourage students to obtain consent and share sexual preferences using a blockchain-based app.

If these scenarios sound deeply troubling, that’s the idea. These speculations help technologists break out of their “optimism bias.”

Jane McGonigal, director at IFTF’s Digital Intelligence Lab, writes in the materials, “When we say something was unimaginable, usually it means we failed to point our imagination in the right direction.” The dorm-room defense no longer holds up.

The second tool offers a checklist of questions that stem from the eight risk zones. Readers determine which risks are most relevant to their product or company.

For example, the “Surveillance State” checklist discusses potential risks of using technologies to monitor, track, and score citizens, even if the technology’s imagined use is seemingly innocuous, like social sharing. The checklist asks questions like “What could governments do with this data if they were granted access to it?”

Lastly, the toolkit presents a handful of future-proofing strategies to mitigate these identified risks. Aside from a few concrete suggestions, like flagging ethics questions in daily standups and product requirement documents, the rest of the strategies are vague provocations for more systemic change.

Ethical OS does important work synthesizing the swirling social concerns of the current moment. Speculative scenarios render the future in vivid detail, which takes ethics out of the esoteric and into the concrete.

Completing the exercises, an engineer could brainstorm hundreds of possible first-, second-, and third-order consequences of their technologies and catalog dozens of questions to keep posted on their cubicle walls and in their documentation. So how does that not lead to creative paralysis?

McGonigal told me that through facilitated workshops, she helps design an “action roadmap” to evaluate the probability of these risks as well as levers companies might use to address them. But action roadmaps aren’t included in the public download at this time.

ETHICAL OS, APPLIED

A few months after its August 2018 launch, I wondered whether Ethical OS was changing practice in the industry.

Mozilla’s director of Strategic Foresight, Miriam Avery, told me that while Ethical OS isn’t formally part of their product management process, she uses it to prompt product managers to specify long-term impacts in product requirement documents. She even inserts future scenarios into longer meetings to challenge mental models across teams. That informs Mozilla’s unique social enterprise mission to “safeguard the agency of users in very different market, social, and technical conditions than today.”

Big Tech is also exploring how it might integrate similar principles into established product development lifecycles. IFTF couldn’t name names, but they and other applied ethicists are engaged with some of the biggest tech firms, advising through workshops, brown bag lunches, and ongoing Slack channels that include product managers, engineers, and long-term planning strategists.

But can you imagine the reaction a Google engineer would get for asking, at a standup meeting, whether there is a “version of our product that is available to use if users don’t want to sign the user agreement”?

Ethical OS is better positioned to influence conversations in the early stages of development. It’s becoming integral to accelerator and incubator programs like Techstars, LAUNCH Accelerator, and Y Combinator Research. Ethical thinking can be part of the culture from a company’s founding, when questions confronting tech’s systemic issues, like exploitative business models that extract value from customer data, might actually make a difference.

Yoav Schlesinger, principal of Omidyar Network’s Tech and Society Solutions Lab, admits that it’s challenging to track adoption of an open-source framework like this. But interest and appetite within the industry are clear.

Paula Goldman, a former VP at Omidyar Network, oversaw the development of Ethical OS alongside Raina Kumra, who was an entrepreneur in residence at the time. In January 2019, Goldman joined Salesforce as the head of its Office of Ethical and Humane Use. Salesforce has shared publicly how it is integrating ethics into its specifications for when a product is considered “ready” and “done.”

The adaptable resource is already proving useful as an educational tool in classes at Harvard and Stanford. Ethical OS complements Omidyar’s other education efforts, like the Responsible Computer Science Challenge, which aims to better integrate ethics throughout computer science education.

McGonigal has also seen interest from consumer product companies exploring possibilities for the internet of things and augmented reality — names we might not think of as tech companies. Even city mayors are using Ethical OS to workshop near-future legislative challenges.

BUT IS IT ETHICS?

Early conversations about ethics in the tech community tend to get stuck circling around semantic distinctions. Schlesinger admits that Ethical OS might not fit an academic philosopher’s definition of ethics.

As he sees it, “ethical decision making is a robust process of discernment and deliberation. Ethics includes responsibility to individuals, communities, and society; transparency and accountability around the decisions that are made and the products that are built; and innovation that is fundamentally centered around human flourishing and well being.”

Amy Johnson, a Berkman Klein Fellow and scholar at the Center for Humanistic Inquiry at Amherst College, said that Ethical OS does little to address responsibility, reads more like “an endeavor to avoid liability, blame, and discomfort than to behave ethically,” and would be better dubbed “Risk OS.”

Having worked at Google for ten years and researched the philosophy and ethics of technology at the Oxford Internet Institute’s Digital Ethics Lab, James Williams has seen the industry’s ethical questions play out both in theory and in practice. He told me the risk-mitigation stance in Ethical OS is “representative of how ethics is viewed across industry.” But that’s a limited, legalistic approach, and it helps explain why technologists have long avoided these discussions. They imagine ethical inquiry as putting too many “constraints on action,” says Williams.

Ethical OS is written in jaunty, friendly language that’s palatable to engineers. It assumes good intent, saying “What if, in addition to fantasizing about how our tech will save the world, we spent some time dreading all the ways it might, possibly, perhaps, just maybe, screw everything up?”

Zachary Loeb, a PhD candidate in the history and sociology of science, argues the toolkit’s language “sneaks in an ethical argument that ‘technologists’ are on the side of ‘the good.’” If tech wants to adopt an ethical mindset, he argues, it will have to “grow out of this childish belief.”

Pointing to the toolkit’s stated aim of “better product development, faster deployment, and more impactful innovation,” Loeb argues that Ethical OS’s drafters aren’t “actually interested in changing the status quo but in preserving the power and independence of the tech companies.”

CAN TECH FIX ITSELF?

Known for philanthropic impact investing, the Omidyar Network is confronting the downsides of technology from within. Most notably, its portfolio includes the Center for Humane Technology, founded by former Googler Tristan Harris, who popularized the attention-economy-disrupting idea of “time well spent.”

eBay founder Pierre Omidyar isn’t the only one throwing money at tech’s ethics problem. Craigslist founder Craig Newmark and LinkedIn cofounder Reid Hoffman have invested in research centers, think tanks, and even investigative news organizations.

And the frameworks are proliferating. Ethical OS sits alongside an ever-expanding list of domain-specific resources on ethics, covering everything from algorithms and data science to virtual reality.

Some frameworks more thoroughly address how organizational practices might evolve. For example, the UK-based think tank Doteveryone (part of the Omidyar Network’s portfolio) just announced an approach designed to fit into agile development processes. And the Markkula Center for Applied Ethics (also Omidyar-funded) has developed a toolkit informed by input from product teams at Google.

Many of these resources are authored not only by industry but also by academics and policymakers. That’s important, because mandatory trainings, values statements, and chief ethics officers are “little more than ethics vaporware” without matching changes to incentive structures.

Markkula’s director of Internet Ethics, Irina Raicu, told me that when it comes to ethics, “industry self-regulation doesn’t work. We need new laws to help create not just restrictions but also new incentive systems.” She added that this remains “a key question, and not one that I’ve seen addressed successfully by any company so far.”

Jane McGonigal agrees: “The toolkit is definitely not meant just for an internal, proprietary rethink. The best foresight will happen when we’re bringing together civic, academic, and tech” communities into the conversation — “an ecosystem of actors.” The toolkit itself suggests finding subject experts and outside partners. And eventually the scarier scenarios touching fields like healthcare, employment, and credit will also fall into the purview of legislators and regulators.

Tech ethicist David Polgar thinks we could do with more public collaboration across the industry to share best practices from applying tools like Ethical OS. That’s why he started organizing events under the banner of All Tech Is Human. He hopes to encourage more case studies and workshops in the future.

What if you could be paid a bounty for identifying the social risks of new technologies, as the security industry does for finding bugs? What if speculative ethics scenarios were included in the industry’s infamous interview cases? Kathy Pham, a product leader and fellow at Harvard’s Berkman Klein Center, thinks that would “send a signal about what the company cares about.”

“If the town hall conversation changes, the lunchtime conversation changes. If in a code review someone asks, ‘when you made this assumption in your if/then statement, did you think about these other factors?’” That’s when, Pham argues, we’ll know these efforts are starting to have an impact.
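To make Pham’s example concrete, here is a hypothetical sketch, invented for illustration rather than drawn from Ethical OS or Pham, of the kind of if/then assumption a reviewer might flag:

```python
# Hypothetical pre-screening logic, invented purely to illustrate
# the review question Pham describes. Neither the scenario nor the
# field names come from Ethical OS.

def prescreen_applicant(applicant: dict) -> bool:
    """Decide whether a loan applicant advances to human review."""
    # Assumption: a short credit history means high risk. It also
    # describes many young people, recent immigrants, and people who
    # mostly use cash, all of whom this branch silently screens out.
    if applicant["credit_history_years"] < 3:
        return False
    # Assumption: frequent moves signal instability. The same pattern
    # fits renters in expensive cities and anyone leaving an unsafe
    # living situation.
    if applicant["years_at_address"] < 2:
        return False
    return True
```

Neither rule is obviously malicious. The point of the review question is that each branch encodes a judgment about people, which is exactly what the risk-zone checklists are meant to surface before the code ships.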

There’s no doubt the industry needs to shift away from a “move fast and break things” mindset toward something more like “move purposefully and fix things,” as the Omidyar Network team likes to quip. Ethical OS is itself a future signal of the industry’s growing effort to reflect and correct course. The industry just can’t build that future alone.
