The Mission of Responsible AI

Ruth Yakubu · Published in Microsoft Azure · 3 min read · Apr 3, 2023

AI innovation is advancing at a rapid pace. The breakthroughs are not coming only from large enterprises; startups and individuals are producing remarkable AI developments as well. Regardless of how large or small the source is, one fact remains the same: AI systems do not always function as intended and have the potential to cause harm. This is evident in news headlines and in the growing public scrutiny that follows when AI systems fall short of expectations. The question is: are there standards or guidelines to ensure that these systems are trustworthy and do not cause harm in society? As a result, there is increasing demand for government regulation of AI across industries. Common areas of concern are whether AI systems treat people fairly, respect people's security and privacy, and provide transparency. During the machine learning lifecycle, critical factors that affect a model's behavior often go unassessed during development, which can lead to undesirable outcomes. This affects not just society but also the reputation of the organizations and developers behind the AI systems. That is why Responsible AI is essential.

Microsoft has created Responsible AI principles to govern how machine learning models are designed, built, and tested. These core principles ensure that the AI systems being developed are fair, inclusive, reliable and safe, private and secure, transparent, and accountable. To put these principles into practice, the company has formed governance teams, comprising several compliance teams, to make sure Responsible AI best practices are evaluated, adopted, and implemented internally, from senior leadership down to engineering teams.

Implementing a Responsible AI strategy is a challenge many organizations struggle with. As a result, Microsoft has standardized its Responsible AI practices and made them available for other companies and machine learning professionals to adopt when designing, building, testing, or deploying their AI systems. For instance, customers and developers can use the Responsible AI impact assessment template to help identify an AI system's intended use; its data integrity; any adverse impact on people or organizations; and how it addresses the goals of each of the six core Responsible AI principles: Fairness, Inclusiveness, Reliability & Safety, Privacy & Security, Transparency, and Accountability. In addition, this fosters a practice in which AI developers take accountability and can provide transparency to end users about what the AI system does; how it should be used; and its limitations, restrictions, and known issues. This helps machine learning teams evaluate their development lifecycle to validate that they are not overlooking factors that could cause their AI solution to behave in unintended ways.
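To make one of these goals concrete, consider Fairness: before shipping a model, a team might compare its behavior across demographic groups. The sketch below uses the open-source Fairlearn library, one of the tools in this ecosystem; the tiny dataset, the "group" column, and the chosen metrics are illustrative assumptions on my part, not part of any official template.

```python
# pip install fairlearn scikit-learn pandas
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from fairlearn.metrics import MetricFrame, selection_rate

# Tiny synthetic dataset; "group" is the sensitive feature (illustrative only).
df = pd.DataFrame({
    "feature": [1, 2, 3, 4, 5, 6, 7, 8],
    "group":   ["A", "A", "A", "A", "B", "B", "B", "B"],
    "label":   [0, 0, 1, 1, 0, 1, 1, 1],
})
model = LogisticRegression().fit(df[["feature"]], df["label"])
pred = model.predict(df[["feature"]])

# Break accuracy and selection rate down by group to spot disparities.
mf = MetricFrame(
    metrics={"accuracy": accuracy_score, "selection_rate": selection_rate},
    y_true=df["label"],
    y_pred=pred,
    sensitive_features=df["group"],
)
print(mf.by_group)      # per-group metric values
print(mf.difference())  # largest gap between any two groups
```

A large gap in selection rate between groups is exactly the kind of finding an impact assessment would ask a team to document and address.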

Finally, the company has been a key contributor to research and open-source tools that empower developers and organizations to discover and mitigate issues that would keep models from behaving responsibly. For building machine learning models, data scientists and AI developers can use the Responsible AI dashboard available in Azure Machine Learning, which is built on leading open-source Responsible AI tools for debugging machine learning models (a minimal sketch of that workflow appears below). The company has also taken measures to ensure that Azure Cognitive Services cannot be used for harm if they fall into the wrong hands, so some of the services have limited or restricted access. Organizations must submit an application and be fully vetted before they can use selected AI services. This is to ensure that developers and organizations use the tools and services in a manner that does not threaten human rights, discriminate against certain groups by denying them life opportunities, or create a risk of physical or psychological injury.
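For readers who want to see what that dashboard workflow looks like in code, here is a minimal sketch using the open-source responsibleai and raiwidgets packages that underpin it. The trained model, the train/test DataFrames, and the "income" target column are placeholder assumptions; substitute your own model and data.

```python
# pip install responsibleai raiwidgets
from responsibleai import RAIInsights
from raiwidgets import ResponsibleAIDashboard

# Placeholders: `model` is any trained sklearn-style classifier, and
# `train_df` / `test_df` are pandas DataFrames containing the
# hypothetical "income" target column alongside the features.
rai_insights = RAIInsights(
    model=model,
    train=train_df,
    test=test_df,
    target_column="income",
    task_type="classification",
)

# Opt in to the analyses the dashboard should surface.
rai_insights.explainer.add()       # global and local feature importances
rai_insights.error_analysis.add()  # cohorts where the model errs most
rai_insights.compute()

# Launch the interactive dashboard in a notebook or local browser.
ResponsibleAIDashboard(rai_insights)
```

Each `add()` call opts in to one analysis, so teams can start with error analysis and explanations and layer on more as their debugging needs grow.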
