Tracking Your Responsible AI Journey

GAMMA — Part of BCG X

--

By Sean Singer, Leesa Quinlan, and Steven Mills

As companies around the world increase their investments in AI, the risks of AI system failure grow in tandem. Organizations are beginning to recognize both the risks these failures pose and the upside of proactively addressing them. But there is no one-size-fits-all approach to implementing Responsible AI (RAI) programs, and organizations that commit to one must continually assess whether it is delivering meaningful results. Waiting until an AI failure occurs is the wrong strategy: it can harm users and expose companies to financial, legal, or reputational damage. Instead, companies must establish Key Performance Indicators (KPIs) that provide a common language to first track and guide implementation, and then to continuously assess program effectiveness.

KPIs play three important roles. First, and as with workplace safety, they can serve as leading indicators that measure an RAI program’s ability to prepare for, prevent, or predict potential risk. Just as a short response time to a factory worker’s hazard report demonstrates managers’ prioritization of safety, a product team’s ability to quickly engage with RAI experts or respond to risk assessments indicates an organizational commitment to RAI processes.[1]

Second, KPIs play an important role not just in ensuring that the program is working, but also in articulating to both internal and external audiences the value of RAI efforts. Building support for continued resources and investment is critical for ongoing program success, and KPIs provide a tool for demonstrating the return on organizational investment.

Finally, KPIs provide leaders with a clear view of how Responsible AI is evolving at all levels of an organization. Are leaders bought into the strategy? Are product teams utilizing the available tools? Are trainings effective? KPIs can show where progress is being made and where more resources and attention are needed. For example, the repeated failure of project reviews to identify risks and suggest potential mitigation strategies may indicate that the review process needs adjustment. As with all transformations, an effective RAI program will take shape in phases, with KPIs evolving in parallel as an organization's RAI capabilities mature. The education and training goals for creating a network of RAI subject matter experts (SMEs), for instance, will be different from those for training the entire workforce in Responsible AI.

Leveraging BCG’s experience in guiding organizations both on RAI journeys and in broader transformation efforts, we have identified five KPI categories that Responsible AI leaders should track: leadership commitment, program adoption, training & workforce, culture, and program effectiveness. Each of these categories addresses mutually reinforcing aspects of RAI implementation.

Figure 1: Five Responsible AI KPI categories

Leadership commitment: Responsible AI is, more than anything else, a cultural transformation that starts at the top. Senior leadership should emphasize the importance of Responsible AI in both word and deed. Whether by engaging the workforce in open forums, supporting and participating in RAI trainings, or recognizing internal RAI trailblazers, leaders must ensure that individuals at all levels of the organization feel fully supported in the adoption of RAI.

Program adoption: The more product teams integrate Responsible AI tools into their standard development lifecycle, the better. While tool utilization may be low at first, it should grow over time as RAI takes root. If not, leaders should identify and address implementation roadblocks. Once an organization has formalized its project review and monitoring processes, the use of KPIs to track the percentage of products undergoing review will be a strong indicator of adoption. However, because AI systems learn and change continuously throughout use, the risks of system failure do not vanish post-deployment. The proportion of systems being monitored for Responsible AI risks should therefore be measured on an ongoing basis to highlight process effectiveness across the entire product lifecycle.
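To make the adoption metrics above concrete, the sketch below shows one way they could be computed from a product inventory. This is a minimal illustration, not a BCG tool: the `AIProduct` record and the portfolio data are hypothetical, and a real organization would draw these fields from its project-review and monitoring systems.

```python
from dataclasses import dataclass

@dataclass
class AIProduct:
    name: str
    reviewed: bool   # has completed a formal RAI project review
    deployed: bool   # is live in production
    monitored: bool  # is under ongoing RAI risk monitoring

def adoption_kpis(products: list[AIProduct]) -> tuple[float, float]:
    """Return (percent of products reviewed, percent of deployed products monitored)."""
    total = len(products)
    deployed = [p for p in products if p.deployed]
    pct_reviewed = 100 * sum(p.reviewed for p in products) / total if total else 0.0
    pct_monitored = (100 * sum(p.monitored for p in deployed) / len(deployed)
                     if deployed else 0.0)
    return pct_reviewed, pct_monitored

# Hypothetical portfolio for illustration only
portfolio = [
    AIProduct("churn-model", reviewed=True, deployed=True, monitored=True),
    AIProduct("pricing-engine", reviewed=True, deployed=True, monitored=False),
    AIProduct("chat-assistant", reviewed=False, deployed=False, monitored=False),
    AIProduct("credit-scorer", reviewed=True, deployed=True, monitored=True),
]

reviewed_pct, monitored_pct = adoption_kpis(portfolio)
print(f"{reviewed_pct:.0f}% of products reviewed; "
      f"{monitored_pct:.0f}% of deployed products monitored")
```

Tracking both numbers over time, rather than as one-off snapshots, is what turns them into the leading indicators the article describes: a review percentage that plateaus while the monitoring percentage lags points to a post-deployment gap in the process.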

Training & workforce: The workforce — AI developers and users alike — plays a critical role in implementing Responsible AI, ensuring that systems are developed and used responsibly. Empowering employees to be effective change agents is critical for successful implementation and requires significant investment in bolstering RAI literacy among the workforce. Some organizations might focus on creating a cadre of RAI SMEs to support product teams, while others might prioritize developing, piloting, and scaling trainings for key functions. Eventually, all employees engaged in product development, testing, deployment, and use will need to own key RAI policies and processes. KPIs that track the share of the workforce with Responsible AI training can measure progress toward that goal.

Culture: Having Responsible AI principles is meaningless if the workforce doesn’t understand how to interpret or apply them. At the end of the day, RAI implementation is a cultural change in which each employee must be fully engaged. Surveys can be helpful in measuring the degree to which attitudes are shifting and the workforce is embracing Responsible AI. Communications such as newsletters, microsites, and blog posts can also help drive cultural change within an organization.

Program effectiveness: Having Responsible AI leadership, governance, and tools in place is a sound first step, but ultimately meaningless if the RAI program does not create tangible impact. To ensure that the program creates ROI, a senior RAI committee should meet regularly to provide guidance and oversight for high-risk use cases. Typically, a well-functioning program will flag RAI issues throughout the product lifecycle. If more issues are found post-deployment, there is good reason to question the RAI program's effectiveness.

The specific KPIs an organization chooses will vary based on internal structure, industry, and Responsible AI maturity. But the basic measurements needed to assess progress are consistent. And while a declaration of principles or policies is the first step of a company’s RAI journey, true success depends on a combination of top-down leadership and bottom-up commitment to execution. The development of an adaptable and agile RAI strategy relies on having a clear picture of how effectively an organization embraces new tools, processes, and concepts over time. This is exactly the picture the right Responsible AI KPIs can provide.

[1] For more on leading indicators, see "Using leading indicators to improve safety and health outcomes," OSHA (June 2019).
