The Challenge of Defining Responsible AI

Sean Singer
GAMMA — Part of BCG X
May 21, 2020


By Steven Mills, Maximiliano Santinelli and Sean Singer

This is the first in a two-part series exploring how companies can become leaders in Responsible AI.

Artificial intelligence offers tremendous opportunities to improve financial performance, the employee experience, and product and service quality for customers and citizens. But AI has also inflicted harm — including everything from gender and racial bias in credit, hiring, and recidivism algorithms to self-driving cars that crashed when they encountered unusual road conditions.

As artificial intelligence assumes a more central role in countless aspects of business and society, companies must find ways to develop and operate AI systems that integrate human empathy, creativity, and care while achieving transformative business impact. That means looking beyond business outcomes alone. Companies must also identify potential second- and third-order effects on fairness, safety, privacy, and society at large.

Responsible AI goes far beyond a narrow focus on AI algorithms. BCG GAMMA’s approach extends to every aspect of end-to-end AI systems, including data collection, governance, algorithms, and the business processes that AI systems influence. Responsible AI isn’t just about bias and fairness, although those issues are critically important. Companies must also ensure systems are safe and robust while keeping top of mind all the potential ways they can fail.

We are not alone in weighing these issues. Concern is growing both inside and outside boardrooms about the ethical values either embedded in or missing from AI:

· Eighty-two percent of Americans believe AI should be carefully managed.

· Two-thirds of internet users feel companies should have an AI code of ethics and review board.

· Executives at ninety percent of companies believe that AI has surfaced ethical challenges in recent years.

To address these concerns appropriately, organizations must have a clear understanding of what Responsible AI means. In the first installment of this two-part series, we offer a definition of Responsible AI that can help organizations understand the breadth of issues they must consider. In the second installment, we focus on the value organizations can realize by integrating Responsible AI into their operating models. Developing a clear view of both aspects is an important first step on an organization’s Responsible AI journey.

What Responsible AI Means … and What It Doesn’t Mean

We define Responsible AI as follows:

Responsible AI involves developing and operating artificial intelligence systems that integrate human empathy, creativity, and care to ensure they work in service to the greater good while achieving transformative business impact.

A clear understanding of what Responsible AI means should be accompanied by an awareness of what it doesn’t mean. Responsible AI starts with embedding accountability at every level of an organization and across every stage of an AI system’s life cycle. Progress toward Responsible AI will be limited if ownership resides with only one executive, division, or team.

Maintaining human control is also central to Responsible AI, because the greatest risks of AI system failure typically emerge when timely human intervention is absent. Reliance on traditional business metrics can likewise lead companies astray: Responsible AI demands that system performance be balanced with safety, security, and fairness rather than exclusively prioritizing business KPIs. Finally, meeting legal and regulatory requirements is only the bare minimum. Responsible AI must also account for the long-term societal impacts of AI systems, both intended and unintended.

How AI Systems Fail

Much of the concern about AI centers on prominent examples of bias and unfairness. AI systems have offered lower credit card limits to women than to men despite similar financial profiles, while digital ads have demonstrated racial bias in housing and mortgage offers.

These risks are critically important to mitigate, but a variety of other real-life failures merit serious attention as well. Chatbots lacking adequate human oversight have been easily tricked by users into making offensive and racist comments. Insufficiently tested recommendation engines have insensitively suggested that customers make repeat purchases of unusual items that most people buy only once, such as burial urns or toilets. An algorithm that recommends cancer treatments, trained on hypothetical rather than real patient data, has risked the harm of inaccurate recommendations. And firms have suffered reputational and financial damage after failing to unambiguously notify users that their data was being collected and might be resold in the future.

These different types of failures reflect flaws in overall system design, not merely in an algorithm. But whatever the source, the impact is measurable in more than the bottom line: costly employee disengagement and even reputation-damaging protests can fester when companies develop AI systems that are not in line with the organization’s values, as we will explore in the next article in this series.

The System, Not the Algorithm

Responsible AI means a new way of working, one that ultimately drives growth, creates value, and establishes the stakeholder trust necessary for long-term success. We believe a 10–20–70 rule governs success in implementing and scaling AI: in our experience, only 10% of AI success depends on specific algorithms and 20% on the particular data and technology chosen to implement them. The remaining 70% involves the large-scale transformation and associated change management that generate the greatest impact. This is why we believe Responsible AI is about more than just algorithms: it is about focusing on the entire AI system, end to end.

While our definition may sound formidable, we believe that delivering AI responsibly is achievable for all organizations. And implementation does not mean missing out on the business value AI can achieve. This is not an “either/or” issue, but rather a “both/and” opportunity in which Responsible AI can be achieved while still meeting — and exceeding — business objectives.

Next up in the series, we’ll discuss the upside that businesses can realize when they put Responsible AI principles into practice, as well as the risks to businesses that choose to ignore these concerns.
