Responsible AI in action, Part 1: Get started

Kate B
Data Science at Microsoft
Nov 28, 2023
Image generated with Bing Image Creator.

Why care about responsible AI?

There is a lot to be excited about with recent advances in AI (Artificial Intelligence) technology, but every day the media reports examples of where and how AI has gone wrong. As AI is integrated into more of our daily work and personal lives, its failures range from minor inconveniences, such as mistakenly canceled appointments, to more serious issues, such as job displacement and privacy compromises, and it may even compound existing social or economic inequities. All of us who design, develop, and deploy AI have a responsibility to confront the risks that the technology introduces.

To help, AI regulation is coming through multiple government and industry initiatives. The European Union AI Act is the first broad regulatory framework expected to become law. And with the recent Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence, the US government is also making a strong move toward regulating AI and protecting citizens against the potential harms from AI systems.

If your team uses AI APIs or AI systems, or designs, develops, or deploys AI, and is unsure how to get started with responsible AI, consider the concrete steps in this article and the ones that follow in this series. These articles are based on lessons learned from rolling out a responsible AI practice across an organization at Microsoft that owns multiple internal tools and external applications. The approach was grounded, as it is for other product teams within Microsoft, in the principles, standards, practices, and tools that have been in use internally for several years, including:

Aligned with this guidance, this series covers:

What are some specific challenges for responsible AI?

Responsible AI practice sounds promising. Who wouldn’t want their product to be responsible or trustworthy? But there are challenges.

AI systems are complex. They require a diversity of teams, skills, and tools to design, develop, and deploy responsibly. These are teams that may not traditionally collaborate across functional boundaries, which means there are gaps in knowledge and expectations about use cases, product behaviors, and downstream impact. Furthermore, teams may not share the same terminology: for example, the term model may have different connotations for a marketing executive versus a data scientist versus a cloud architect. For additional context on some of the team and organizational dynamics involved, Microsoft Research has published a framework that may be helpful: Responsible AI Maturity Model.

The potential risks and harms of AI systems are different from those of traditional software systems. AI systems have inherently more complex workflows; process and analyze massive amounts of data; and often rely on open-source packages and libraries that can be vulnerable. And AI systems are non-deterministic and may make mistakes even when they function well. The challenge of deploying responsible AI is compounded because the tools for identifying, measuring, and mitigating risks and harms are still evolving, require domain-level expertise, and struggle to keep pace with the rapid advancement of models and algorithms. Some helpful references for learning more about risk and security characteristics of AI systems include:

A third challenge for a responsible AI initiative can be executive support. To build a sustainable practice, leadership support is crucial because it ensures the necessary resources and priority over time to be successful. It signals to an organization that responsible AI is a core business commitment. And executive support can help drive the culture change needed to embed a responsible mindset into each stage of AI design, development, and deployment.

Ready to get started?

Five tips for preparing for responsible AI (RAI)

The suggestions provided below are based on RAI experience for an organization with more than 1700 employees and multiple product teams delivering both internal tools and external applications. As you review the recommendations, keep in mind that they should be adapted to fit your organization, circumstances, and product plans.

#1: Establish roles and responsibilities

Like most business initiatives, RAI needs to be supported with a people plan to help make sure awareness and accountability are integrated across roles and functions. A good first step is to formalize a basic set of roles to coordinate, communicate, and track progress. Table 1, below, charts how one organization implemented roles and responsibilities. Its experience can serve as a pattern for tailoring your own approach: adapt, modify, or combine responsibilities to fit your circumstances. Keep in mind:

  • Responsibilities were integrated into existing roles, except for the RAI Council.
  • For this organization, the RAI Council included experts in law, policy, ethics, engineering, and research.
  • Depending on organization and team size, not all roles may be needed.
  • A strong partnership developed across the separate roles as members worked through communication challenges, priorities, and expectations.
Table 1: RAI roles and responsibilities.

#2: Create an inventory of AI products and systems

Creating an inventory of your AI systems, products, and services is a starting point for planning RAI assessments. An inventory also benefits the business by helping document the scope and potential risk of the AI footprint. Here are some questions likely to surface when creating the inventory:

What are the criteria for the products and solutions to include in the AI inventory? Examples of criteria include:

  • Does the solution use or integrate generative AI, including LLMs and multimodal models?
  • Does the system use an upstream AI product or service?
  • Does the system or product include a Machine Learning model?

What are the criteria for prioritizing the risk? Examples of criteria include:

  • Is the product or service used for commercial purposes? Is it available externally to customers or partners?
  • Does the product use Generative AI (LLMs)? Is it embedded in downstream applications?
  • Are any of the use cases considered sensitive or restricted?

An AI inventory can be as simple as a spreadsheet with information such as the following:

Table 2: RAI inventory.
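To make this concrete, the inventory can also be captured as structured records rather than a spreadsheet. The sketch below is a minimal Python example; the field names, the sample entry, and the prioritization rules are illustrative assumptions drawn from the criteria above, not a prescribed schema, and should be adapted to your own products and risk criteria.

```python
# A minimal sketch of an AI inventory as structured records.
# Field names, the example entry, and the risk rules are illustrative assumptions.
from dataclasses import dataclass


@dataclass
class InventoryEntry:
    name: str                        # product, system, or service name
    owner: str                       # accountable team or contact
    uses_generative_ai: bool         # LLMs or multimodal models
    uses_upstream_ai_service: bool   # depends on an upstream AI product or service
    includes_ml_model: bool          # includes a machine learning model
    external_facing: bool            # available externally to customers or partners
    sensitive_use_case: bool         # flagged as sensitive or restricted

    def priority(self) -> str:
        """Rough prioritization: sensitive or external generative AI goes first."""
        if self.sensitive_use_case or (self.uses_generative_ai and self.external_facing):
            return "high"
        if self.uses_generative_ai or self.external_facing:
            return "medium"
        return "low"


inventory = [
    InventoryEntry(
        name="Support chat assistant",          # hypothetical example entry
        owner="Contoso customer experience team",
        uses_generative_ai=True,
        uses_upstream_ai_service=True,
        includes_ml_model=False,
        external_facing=True,
        sensitive_use_case=False,
    ),
]

for entry in inventory:
    print(entry.name, "->", entry.priority())
```

Even a simple structure like this makes it easier to sort the inventory by risk and decide which products should go through an RAI impact assessment first.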

#3: Prioritize learning about RAI practice

RAI practice is new. Many team members may not understand it and may feel overwhelmed by what they need to learn. But with the right resources and support, team members can be encouraged to start on an RAI learning path.

Be prepared for some team members to perceive AI as a technical, model-related problem, and therefore to see responsible AI challenges as the domain of the data science team. Team members may also feel the RAI process does not apply to them, with comments such as, “It’s not really AI if the model is not making decisions,” “It’s never clear to me what’s AI and what’s not,” or “Not my AI, so I should not have to be concerned about it.”

Another challenge for teams new to RAI is how to integrate it into their own software development process. As a team cycles through a few iterations with RAI and becomes familiar with principles, risks, and mitigations, RAI will slipstream into their existing development process. Here are some helpful resources for getting started:

#4: Select team members for an RAI impact assessment

At the heart of RAI practice is an impact assessment exercise. The purpose is (1) to identify gaps between system behavior and the organization’s responsible AI goals and principles, and (2) to put a mitigation plan in place to address the gaps. This practice is documented here:

Selecting team members is especially important for this exercise because the context for the assessment matters. Individuals with a variety of perspectives, experience, and disciplines need to be included. It could involve program managers, developers, data scientists, UI designers, UX researchers, content developers, security engineers, marketers, operational engineers, and stakeholders such as end users.

Questions are typically asked and discussed such as: What are the scenarios? What could go wrong? Who could be impacted? What can be done to reduce the risks and harm? Each individual may have a perspective to help identify use cases, risks, harms, and mitigations that other people might overlook. A broad reach across individuals and roles can also help create alignment and collective responsibility for RAI.
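One lightweight way to capture the outcome of this discussion is a simple record per scenario that mirrors the questions above. The structure below is a minimal sketch in Python; the field names and the example scenario are illustrative assumptions, not the format of any official assessment template.

```python
# A minimal sketch for recording impact assessment discussion outcomes.
# Field names and the example content are illustrative assumptions.
assessment_notes = [
    {
        "scenario": "Assistant summarizes customer support tickets",
        "what_could_go_wrong": [
            "Summary omits or misstates a critical customer issue",
            "Personal data from tickets appears in the summary",
        ],
        "who_could_be_impacted": ["customers", "support agents"],
        "mitigations": [
            "Human review before summaries are acted on",
            "Filter personal data from model inputs and outputs",
        ],
    },
]

# Surface any scenario that still lacks a mitigation plan for follow-up.
for note in assessment_notes:
    if not note["mitigations"]:
        print("Needs mitigation plan:", note["scenario"])
```

Keeping the notes in a consistent shape makes it easier to track gaps and follow up on mitigation plans after the assessment session.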

#5: Prepare a plan for an RAI impact assessment

AI product teams are under a lot of pressure to move quickly and iterate. Communicating a plan for an RAI impact assessment in advance may help minimize concern about potential schedule disruptions from new RAI requirements.

This is an example of the roadmap shared with product teams to help them plan for the assessment:

Table 3: Plan for RAI impact assessment.

Wrapping up

This is the first in a series of three articles that explore how an organization can implement responsible AI. The second article will focus on how to prepare for and complete a responsible AI impact assessment.

AI systems have the potential to affect many people directly and indirectly, in both positive and negative ways. Responsible AI can help teams build and deploy AI products in ways that minimize harm. If you have the talent and passion to develop AI solutions, please consider these recommendations and join us in the commitment to innovate responsibly.

Useful references

Here are additional resources that may help:

Acknowledgments

Special thanks to Stanley Lin and Kris Bock; this article would not have happened without their many hours of collaboration and content contributions.

Kate Baroni is on LinkedIn.

See the other articles in this series:
