Responsible AI: Leading by Example

BCG GAMMA editor
GAMMA — Part of BCG X
8 min read · Feb 3, 2021


Authors: Sylvain Duranton and Steven Mills

BCG is deeply committed to its role as a leader in Responsible AI. But this isn’t just talk. The guidance we provide to our clients reflects the Responsible AI principles and practices BCG adheres to internally.

BCG is in the business of helping companies solve their most challenging business problems. As we work with companies to find solutions, which increasingly involve artificial intelligence, we adhere to a clear set of values that are core to who we are as a company: delivering solutions with integrity, respecting individuals, and always being mindful of the social impact of what we do. These values are as applicable to our AI solutions as they are for any of our other client support. This is why we’ve made a commitment to ourselves and to our clients to develop and operate AI systems that integrate human empathy, creativity, and care to ensure that they work in the service of good while achieving transformative business impact.

At BCG GAMMA, we believe that competitive advantage resides at the intersection of data science, technology, people, and deep business expertise. This potential can be realized only when AI is woven into processes and ways of working — all done responsibly and with humans at the core. We knew, though, that doing so required more than just words. So when we began our journey to build a Responsible AI program inside BCG GAMMA, we set out to truly transform how we create AI solutions.

The “6+1” Responsible AI Principles

Our Responsible AI program is built on seven principles, which we describe as “6+1.” The “+1” refers to our commitment to design systems that put humans at the center of AI, empowering and preserving the authority and well-being of those who develop, deploy, and use these systems. This central principle binds together the other six principles.

Rather than devise an entirely new set of principles to guide our work, ours are consistent with existing principles and guidelines developed by IEEE, OECD, and other organizations. Since many of our clients already embrace the work of these organizations, our Responsible AI program is broadly applicable to the diversity of the clients we support. While we have embraced these organizations’ guidelines, we have also shaped our principles to align with our broader corporate values and our GAMMA purpose, making them uniquely BCG.

Walking the Responsible AI Walk Inside BCG GAMMA

Our next step was to translate our principles into action by transforming how we build and deploy AI systems. And thus, the BCG Responsible AI program was born.

The program we created had to be efficient and reflect the realities of the work we do. GAMMA delivers hundreds of fully scaled AI solutions a year, supporting every major industry vertical in more than 50 countries. Given the size and scope of our work, our Responsible AI implementation had to be low friction for our teams, scalable, and applicable across industries and cultural contexts. None of these practical considerations, however, could be at the expense of program effectiveness.

As we considered implementation, we knew that minimizing the number of new processes was important. This would simplify adoption, in keeping with our goal of being low friction for our teams. GAMMA had already invested significant resources in creating Delivery Excellence (DevEx), a set of processes and governance that ensures each team maintains the highest standards for software code and technical solutions. Rather than create a new, parallel process, we were able to integrate Responsible AI into the existing DevEx structure.

Project Assessment: Independence with Oversight

Given the broad range of AI solutions we provide and the number of projects we deliver, we wanted to create a structure that would allow teams to execute independently and largely self-administer — while adhering closely to our Responsible AI principles and policies. By giving our teams independence with oversight, we would enable them to move quickly within the context of the region or country and industry in which they operate.

To streamline the process and create consistency across teams, we created Rate.AI, a web-based project-assessment tool. This digital tool is structured as a series of yes/no questions, with every “no” response flagging a risk to be mitigated. In assessing a project, the team considers such factors as project maturity (early prototype in development environment vs. scaling enterprise wide), severity of potential harm (annoyance to individuals vs. potential physical harm), and scale of potential harm (tens of people vs. millions of people). This self-administered assessment is updated throughout the project, validated by Responsible AI experts, and reviewed by leaders on a regular basis.
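The question set inside Rate.AI is internal to BCG GAMMA, but the flagging logic described above — every “no” answer surfaces a risk to be mitigated — can be sketched in a few lines. The questions and risk labels below are hypothetical stand-ins, not the tool's actual content:

```python
# Illustrative sketch of a Rate.AI-style self-assessment.
# The real tool is internal; these questions and risk labels are hypothetical.
from dataclasses import dataclass


@dataclass
class Question:
    text: str
    risk_if_no: str  # risk flagged when the team answers "no"


QUESTIONS = [
    Question("Has the training data been checked for bias?", "data bias"),
    Question("Is potential harm limited to annoyance (no physical harm)?", "harm severity"),
    Question("Is the scale of potential harm limited to tens of people?", "harm scale"),
]


def assess(answers: dict) -> list:
    """Return the risks flagged by 'no' (or missing) answers."""
    return [q.risk_if_no for q in QUESTIONS if not answers.get(q.text, False)]


flags = assess({
    "Has the training data been checked for bias?": True,
    "Is potential harm limited to annoyance (no physical harm)?": False,
    "Is the scale of potential harm limited to tens of people?": True,
})
print(flags)  # ['harm severity']
```

Each flagged risk would then carry through the project reviews described below until it is mitigated.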

As part of the DevEx process, each AI project is subject to quality control reviews by an independent team that sits outside the project. This “X-Ray” team provides support and expertise to bring out the best in our teams and ensure high technical standards are maintained. To reinforce their role as a key team resource, we now provide X-Rays with specialized technical training in Responsible AI topics. They can help teams directly or, when necessary, flag issues that require further escalation, deeper expertise, or additional discussion.

Governance & Monitoring: Trust but Verify

The ability of our project teams to self-administer is important for scalability and agility, but it requires that a clearly defined system of checks and balances is in place to provide external oversight. At BCG GAMMA we have created a multidisciplinary Responsible AI committee, chaired by our Chief AI Ethics Officer. Committee members are drawn from leaders in the company’s AI practice itself (data scientists, developers, and engineers), legal affairs, marketing, and in-house experts in digital and technology ethics. Overall, the committee guides each team’s implementation of the Responsible AI program. It also serves as a source of expertise when issues emerge and must be discussed, such as when risks need to be evaluated and mitigation decisions made.

The committee reviews each team’s initial self-assessment. It then participates in ongoing project reviews and tracks each project over time to make certain that all risks are effectively mitigated. A committee member is assigned to each high-risk project to ensure clear communication between the committee and the team.

The overall program establishes clear accountability to Responsible AI principles and practices at all levels of the organization, from the Chief AI Ethics Officer, to the executive responsible for the project, to the project leaders and data scientists executing the technical work.

Tools and Training: Empower and Enable

It is very easy to tell teams to follow policies — for example, to make sure machine learning model outputs are fair or that a solution will not have an adverse effect on the environment. But it can be technically challenging for a team to actually make that happen. It is, therefore, incumbent on us to provide our teams with tools that enable them to follow these policies in a consistent manner across all projects.

To date, we have developed several software packages that support our teams in this regard. And, as part of our broader commitment to Responsible AI, we have released several of these as open source to the global data science community. The packages include:

Responsible AI Toolkit: Toolkit supporting common Responsible AI functions including identifying data bias, model bias, and proxy variables. The package ensures consistent approaches across projects and simplifies team implementation.

CodeCarbon: Developed in collaboration with Mila, a world-leading AI research institute in Montreal; Haverford College in Pennsylvania; and Comet, a meta machine learning platform. CodeCarbon helps developers understand the environmental footprint of AI software and optimize their compute to reduce the footprint.

GAMMA FACET: FACET was designed around the leading Python package scikit-learn to help human operators understand advanced machine learning models, then use the models to make decisions that save money, maximize yield, and retain customers.
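To make concrete the kind of bias check a package like the Responsible AI Toolkit automates, here is a minimal demographic-parity sketch. This is not the toolkit's actual API; the decision data and the tolerance threshold are hypothetical, and real projects would use richer metrics:

```python
# Illustrative demographic-parity check — the kind of model-bias test a
# Responsible AI toolkit automates. Not the actual toolkit API; the
# decisions and the 0.1 tolerance below are hypothetical.


def selection_rate(outcomes: list) -> float:
    """Fraction of positive (1) model decisions within a group."""
    return sum(outcomes) / len(outcomes)


def parity_gap(group_a: list, group_b: list) -> float:
    """Absolute difference in selection rates between two groups."""
    return abs(selection_rate(group_a) - selection_rate(group_b))


# Hypothetical model decisions (1 = approved) for two demographic groups.
group_a = [1, 1, 0, 1, 0, 1, 1, 0]  # selection rate 0.625
group_b = [1, 0, 0, 1, 0, 0, 1, 0]  # selection rate 0.375

gap = parity_gap(group_a, group_b)
print(f"demographic parity gap: {gap:.3f}")  # 0.250
if gap > 0.1:  # hypothetical tolerance
    print("flag: potential model bias to investigate")
```

A shared implementation of checks like this is what lets teams apply the same standard consistently across hundreds of projects, rather than each team hand-rolling its own.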

An important part of our implementation is to continually increase the Responsible AI literacy of our workforce. To that end, all BCG GAMMA new hires participate in an introduction to Responsible AI. We have also rolled out Responsible AI technical training to our entire worldwide GAMMA team. And we have created self-paced tutorials that team members can independently access to learn about specific technical aspects of Responsible AI (e.g., data bias, proxy features), and which they can refer to in the future.

The Impact of Responsible AI on BCG GAMMA Culture

Our commitment to Responsible AI has driven a true cultural change within BCG GAMMA. Discussions about mitigating project-level risk have sparked robust debate among leadership. We have seen conversations that begin at a very technical, granular level elevate into broader dialogues on ethical issues (e.g., Responsible AI within personalization). At the project level, teams proactively reach out to Responsible AI leadership to discuss potential risks and how to think about broader issues such as fairness and bias in machine learning. Data scientists are now much more apt to come forward and ask for technical training, just as team members are more willing to reach out for access to expertise and resources.

Across the organization, BCG GAMMA has deepened its commitment to developing AI solutions that deliver the greatest business impact while respecting individuals and working for the good of society. We truly believe AI can do tremendous good in the world, and we are deeply committed to making that happen. That is why we have created a program called SIGMA, through which we dedicate a portion of our capacity to social-impact work. From preventing childhood disease to tackling climate issues, our teams are actively focused on making the world a better place.

We are committed to delivering all our work in a responsible manner. Our actions have gone far beyond just talking about Responsible AI or creating a set of principles. We have truly transformed how we build AI solutions and hope that, in doing so, we have demonstrated a realistic path for others who want to achieve the same goal.

We will continue to be open about our progress to ensure that BCG GAMMA AI projects are conducted in a socially responsible manner, and we encourage others to do the same. Only then will we be able to learn and grow together as a community. None of us have all the answers, but together we can ensure that AI systems work in the service of good while fundamentally transforming business.
