Pallabi Sarmah
Jan 10, 2022

Responsible Artificial Intelligence (AI), its value and uses…

Image source: https://go.coe.int/yhA2r

“AI is accessible and used by everyone everywhere. Responsible AI is necessary to resolve ambiguity about where responsibilities lie (Gillis, 2021). A responsible approach to AI in any organization will add value by doing the right thing for your customers, your business and the world (Sibo, 2021).”

1. Responsible AI: An Introduction

An increase in the use of artificial intelligence (AI) can raise several new risks. AI systems can behave unexpectedly in a production environment compared to how the original model behaved in training. Automation can also mean less manual detection and correction of errors. Gartner identifies Responsible AI as one of four trends driving near-term AI innovation (Gartner Newsroom, 2021). According to Svetlana Sicular, research vice president at Gartner, “increased trust, transparency, fairness and auditability of AI technologies continues to be of growing importance to a wide range of stakeholders”. Gartner experts predict that by 2023, all personnel hired for AI development and training work will have to understand and demonstrate expertise in responsible AI (Gartner Newsroom, 2021).

Responsible AI is a governance process that considers how an organization addresses the challenges around AI from both an ethical and a legal point of view (Gillis, 2021). Within an organization, data scientists, data engineers and software engineers are responsible for following a trustworthy AI standard when writing, developing, and deploying any algorithm for an AI model. The meaning and rules of Responsible AI can differ from organization to organization. Company values can offer a guideline for the appropriate use of AI, and deploying AI requires careful management to prevent damage not only to brand reputation but, more importantly, to individuals and society as a whole.

2. The Journey to Responsible AI: From Vision to Value

The journey to Responsible AI begins with AI ethics. An ethical platform for the responsible delivery of an AI project is one of the most important factors to consider. The SUM values (Respect, Connect and Protect) form the topmost pillar of the platform, followed by the FAST Track Principles: Fairness, Accountability, Sustainability, Transparency. Alongside these values and principles, the Process-Based Governance (PBG) Framework is the third building block that adds value to the AI framework (Leslie, 2019). Doing the right thing is good not only for the organization but also for its customers, its business, and the world. Let’s discuss why an organization should consider responsible AI to reduce risk in the output of a machine learning model. According to Andrew Ng, data is food for AI, which is why the AI ecosystem must become more data-centric. In “A Chat with Andrew on MLOps: From Model-centric to Data-centric AI”, he explains how inconsistent data labels can add bias to the training set of an ML model.

At each stage of the project lifecycle, an AI project team should continuously reflect, act, and justify as it puts the SUM values, the FAST Track Principles and the PBG Framework into practice.

A visual process diagram of three Building Blocks of a responsible AI project delivery ecosystem is given below:

Figure 1: Ethical platform for the responsible delivery of an AI project (Leslie, 2019)

The above-mentioned guide will assist an organization in safeguarding practices of responsible AI innovation. This implies that the ethical platform should be taken into consideration at every step of the design and implementation workflow of an AI system. An AI project team should continuously practise reflecting on, acting on, and justifying the SUM values, the FAST Track Principles and the PBG Framework (The Alan Turing Institute, 2019).

Figure 2: Putting the ethical platform in practice process cycle diagram

An organization identifies the scope of an AI project, considers the objectives, and sets expectations and acceptance criteria for a successful project. Once these steps are complete, the AI project lifecycle starts: gathering data, preparing and wrangling data, exploratory data analysis, building the model, training and testing the model, evaluating the model, deploying the model, and monitoring the model.

Figure 3: Lifecycle of an AI project

AI tools are creating value that improves people’s lives around the world, from business to healthcare, from the financial market to education, from the energy industry to space science, from farming to the automotive industry. This raises new questions for organizations about the best way to build fairness, accountability, sustainability, transparency, and security into these systems.

Why is responsible AI important?

A minor change in an input’s weight may drastically change the output of a machine learning model.
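To make this concrete, here is a small, hypothetical sketch (the weights, features and decision threshold below are all invented for illustration) showing how nudging a single weight of a simple linear model can flip a classification decision:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def predict(weights, features, threshold=0.5):
    """Score one input with a linear model and apply a decision threshold."""
    score = sigmoid(sum(w * x for w, x in zip(weights, features)))
    return score, score >= threshold

features    = [2.0, -1.0, 0.5]        # one made-up input row
w_original  = [0.30, 0.80, 0.40]
w_perturbed = [0.30, 0.95, 0.40]      # second weight nudged by just 0.15

score_a, approved_a = predict(w_original, features)   # score 0.50 -> approved
score_b, approved_b = predict(w_perturbed, features)  # score ~0.46 -> rejected
```

A weight change of 0.15 moves the score across the threshold, so the same input now receives the opposite decision, which is why seemingly small modelling choices deserve scrutiny.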

1. The data used to train a model should not be biased. Biases are part of human nature and shape our daily lives: we are all different, and so are our likes and dislikes. These biases also find their way into data. Using biased data produces a skewed or inaccurate machine learning (ML) model.

2. Each step of the model development process should be recorded in a way that cannot be altered by humans or other programs (Gillis, 2021). From data collection to data processing, and from analysis to modelling, an analyst, data scientist or data engineer may face different challenges and may unknowingly introduce bias into an ML model, its training set, or the analysis stage (Walch, 2021).

3. Machine learning bias can be introduced when a model is trained with incomplete or poor-quality data. Organizations should check the data used to train an AI or ML model. It is also the responsibility of data scientists to shape the data in a way that minimizes algorithmic and other ML biases. During the feature selection stage, features must be chosen carefully to ensure they do not encode any biases.

4. Biases can be added while labelling unstructured data. Labelling data is part of any ML project, and a lack of domain knowledge or miscommunication within the team can unknowingly introduce bias at the labelling stage.

5. An imbalanced dataset in a multiclass classification model is another challenge that can add bias to the model. An imbalanced multiclass dataset is one with more than two classes where the classes are not equally distributed; the sample sizes are imbalanced. For example, suppose you have a dataset for an image classification model with four classes: dog, tiger, lion and cat. The dataset contains 100 images of dogs, 10 images of tigers, 48 images of lions and 120 images of cats. Most classification datasets are somewhat imbalanced, and a small difference in ratio does not make much difference to the accuracy of the model. In this case, however, the tiger class is outnumbered 10:1 by the dog class, so re-sampling the dataset is a necessary step. Responsible measures include removing samples from over-represented classes and adding more samples to the under-represented classes.
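A minimal re-sampling sketch for the four-class example above (the class counts mirror the text; the balancing target of 50 per class and the file names are assumptions for illustration):

```python
import random

random.seed(0)  # make this sketch reproducible

# Class counts mirroring the example in the text: label -> number of images
counts = {"dog": 100, "tiger": 10, "lion": 48, "cat": 120}
dataset = [(f"{label}_{i}.jpg", label) for label, n in counts.items() for i in range(n)]

def resample_to_balance(samples, target_per_class):
    """Under-sample over-represented classes and over-sample (with replacement)
    under-represented ones so every class ends up with target_per_class items."""
    by_class = {}
    for item, label in samples:
        by_class.setdefault(label, []).append((item, label))
    balanced = []
    for label, items in by_class.items():
        if len(items) >= target_per_class:
            balanced.extend(random.sample(items, target_per_class))    # under-sample
        else:
            balanced.extend(random.choices(items, k=target_per_class))  # over-sample
    return balanced

balanced = resample_to_balance(dataset, target_per_class=50)  # 4 classes x 50 samples
```

In practice you would weigh over-sampling (which duplicates rare examples) against collecting more real data for the under-represented classes; duplication alone cannot add new information about tigers.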

Responsible AI should not be the responsibility of data analysts, data scientists, data engineers and machine learning engineers alone. Every stakeholder, including decision makers, business owners, members of an organization’s governing body and everyone else involved in the data-to-insight process, should be equally responsible for carrying out Responsible AI practices.

3. Responsible AI challenges can be solved with a tool-based approach

The classic view of AI-based technology as an inscrutable black box is largely a myth. Tool-based MLOps can help organizations execute a Responsible AI strategy (Merritt, 2020). A tool-based approach increases fairness, accountability, sustainability, and transparency at each stage.

It is not possible to discuss all market-leading responsible AI tools in detail in this article. However, I am going to cover some leading technology companies’ responsible AI approaches that can guide AI developers and organizational decision-makers in practising a Responsible AI culture.

3.1 Microsoft Responsible AI

Microsoft has developed its own responsible AI governance framework with help from its AI, Ethics, and Effects in Engineering and Research (AETHER) Committee, its Office of Responsible AI (ORA) and its Responsible AI Strategy in Engineering (RAISE) group. Microsoft’s responsible AI toolset can guide an AI-centric organization from early planning and collaboration through identifying errors and biases within an ML model, and helps organizations assess AI security risks, allowing developers to ensure that their algorithms meet the requirements of a reliable and trustworthy AI solution.

The HAX Workbook is an interactive tool that helps teams plan interaction scenarios while designing a user-facing AI system. It is useful in the early planning stage of an AI project.

The AI Fairness Checklist is a management tool that guides teams in following the principles for developing and deploying AI systems, and helps them understand organizational challenges and opportunities within a defined boundary of fairness in AI.

The Fairlearn toolkit, which integrates with Azure Machine Learning, delivers responsible AI capabilities with increased model transparency and reliability. It helps AI developers assess their system’s fairness and mitigate negative impacts on groups of people defined in terms of race, age, gender, or disability status. You can find example notebooks via this link to understand how the Fairlearn Python package empowers AI developers to measure fairness and mitigate unfairness issues.
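Fairlearn’s metrics revolve around comparing model behaviour across sensitive groups. As a rough, library-free sketch of one such metric (the predictions and group labels below are invented), the demographic parity difference compares positive-prediction rates between groups:

```python
def selection_rate(predictions):
    """Fraction of positive (1) predictions."""
    return sum(predictions) / len(predictions)

def demographic_parity_difference(predictions, groups):
    """Largest gap in selection rate between any two sensitive-attribute
    groups; 0.0 means the model selects every group at the same rate."""
    by_group = {}
    for pred, group in zip(predictions, groups):
        by_group.setdefault(group, []).append(pred)
    rates = [selection_rate(preds) for preds in by_group.values()]
    return max(rates) - min(rates)

# Invented binary predictions (1 = approved) and a sensitive-group column
preds  = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]
gap = demographic_parity_difference(preds, groups)  # group A: 0.6, group B: 0.4
```

The real library offers many more metrics and, crucially, mitigation algorithms; this sketch only shows the kind of disparity such tools are designed to surface.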

The InterpretML tool is an open-source package that brings machine learning model interpretability into one platform. Interpretability is useful for model debugging, feature engineering, detecting fairness issues, and human-AI cooperation. InterpretML can explain the entire behaviour of a machine learning model or its individual predictions. Through a visualization dashboard, you can interact with model explanations in a Jupyter notebook as well as in Azure Machine Learning studio.
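The intuition behind explaining an individual prediction can be sketched without any library: for a linear model, each feature’s contribution to the score is simply its weight times its value (the feature names, weights and input row below are hypothetical; InterpretML itself uses far more sophisticated techniques for non-linear models):

```python
def explain_linear_prediction(weights, features, feature_names):
    """For a linear model, each feature's contribution to the score is
    weight * value; ranking by absolute contribution shows which features
    drove the prediction for this single input."""
    contributions = {name: w * x
                     for name, w, x in zip(feature_names, weights, features)}
    return sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True)

# Hypothetical trained weights and one scaled input row
names   = ["income", "debt_ratio", "age"]
weights = [0.8, -1.2, 0.1]
sample  = [0.5, 0.7, 0.3]

for name, contribution in explain_linear_prediction(weights, sample, names):
    print(f"{name:>10}: {contribution:+.2f}")
```

Reading contributions per prediction, rather than only global feature importances, is what makes this style of explanation useful for debugging individual decisions.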

The Error Analysis tool helps practitioners gain a deeper understanding of machine learning model errors. It identifies cohorts with higher error rates, visualizes how errors are distributed, and helps diagnose root causes by diving deeper into the data and model.
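The core of error analysis, computing error rates per cohort and ranking the worst, can be sketched in plain Python (the labels, predictions and age-band cohort column below are made up for illustration):

```python
from collections import defaultdict

def error_rate_by_cohort(y_true, y_pred, cohorts):
    """Group evaluation records by a cohort label and return each cohort's
    misclassification rate, worst cohort first."""
    totals = defaultdict(int)
    errors = defaultdict(int)
    for truth, pred, cohort in zip(y_true, y_pred, cohorts):
        totals[cohort] += 1
        if truth != pred:
            errors[cohort] += 1
    rates = {cohort: errors[cohort] / totals[cohort] for cohort in totals}
    return sorted(rates.items(), key=lambda kv: kv[1], reverse=True)

# Invented evaluation data with an age-band cohort column
y_true  = [1, 0, 1, 1, 0, 1, 0, 1]
y_pred  = [1, 0, 0, 1, 0, 0, 1, 1]
cohorts = ["<30", "<30", "<30", "30+", "30+", "30+", "30+", "30+"]
worst_first = error_rate_by_cohort(y_true, y_pred, cohorts)  # "30+" has the higher rate
```

A model with good overall accuracy can still fail badly for one cohort; surfacing that gap is exactly what dedicated error-analysis tooling automates at scale.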

Counterfit is an open-source responsible AI tool created by Microsoft that helps organizations assess AI security risks. Details on deploying and setting up the Counterfit command-line tool can be found via this link.

3.2 Dataiku Responsible AI

According to the Gartner 2021 Magic Quadrant for Data Science and Machine Learning Platforms, Dataiku has been a leader for two consecutive years, 2020 and 2021. Dataiku’s responsible AI covers three main dimensions:

· Accountability, which ensures that models are designed in a way that is aligned with their purpose.

· Sustainability, which establishes the continuous reliability of AI-augmented processes in operation, from maintaining models to recreating processes and reusing work to increase efficiency.

· Governability, which centrally controls, manages and audits the whole enterprise AI effort.

Dataiku provides tool-based services for responsible AI. Its advanced monitoring and dedicated dashboards help administrators, IT and project managers keep track of global activity and of each dataset’s size and location per repository.

Dataiku’s enterprise-level security and data governance features organize all tasks into projects with built-in documentation.

3.3 IBM Responsible AI

IBM helps people and organizations adopt AI responsibly through a dedicated ethics board. The IBM AI Ethics Board is a central body that supports and manages responsible AI, ensuring that the foundational properties of AI ethics (explainability, fairness, robustness, transparency, and privacy) are built into any AI solution. IBM mainly focuses on the following practices to set its guidelines for responsible AI:

· A human-centered approach to trustworthy AI, implemented through transparent and explainable technology and AI systems.

· Data responsibility: handling data responsibly and ensuring ownership, privacy, security and trust.

· Client engagement is another approach adopted by IBM Watson. More than 40,000 IBM Watson clients engage with IBM’s deeply experienced data science and design teams, whose views and feedback help IBM experts deliver trustworthy AI services on hybrid cloud across a wide range of industries.

4. Conclusion: Data to Responsible AI

The importance of responsible AI is increasing as AI becomes part of everyday life worldwide. Responsible AI is rooted in ethics, which is why it varies from organization to organization. The three building blocks of Responsible AI, namely the SUM Values, the FAST Track Principles and the PBG Framework, should be taken into consideration in every AI-centric organization, and the AI team should reflect on, act on, and justify each principle throughout the whole AI lifecycle, from the data gathering stage to the ML model evaluation and monitoring stage. Responsible AI challenges can be addressed both with tool-based approaches and by following ethics-based principles. Tool-based approaches make it convenient to detect and overcome biases introduced at any point in the model-building process. Finally, it is important to understand that everyone involved in the process from data to AI insight is equally responsible. Responsible AI is not only data specialists’ responsibility; in reality, it has to be a collaborative effort from everyone, including decision-makers, organizations’ management bodies, customers and society.

References:

Gillis, A. S. (2021). Responsible AI. TechTarget.

Gartner Newsroom, Press Release. (2021, September 7). Gartner Identifies Four Trends Driving Near-Term Artificial Intelligence Innovation.

Leslie, D. (2019). Understanding artificial intelligence ethics and safety: A guide for the responsible design and implementation of AI systems in the public sector. The Alan Turing Institute. Retrieved from https://www.turing.ac.uk/.

Merritt, R. (2020, September 3). What Is MLOps? NVIDIA Blog. Retrieved from https://blogs.nvidia.com/blog/2020/09/03/what-is-mlops/

Sibo, I. S. (2021). Responsible AI: From Vision to Value. Slalom. Retrieved from https://www.slalom.com/insight/responsible-ai-value

The Alan Turing Institute. (2019).

Walch, K. (2021, May 6). How to detect bias in existing AI algorithms. TechTarget. Retrieved from https://www.techtarget.com/searchenterpriseai/feature/How-to-detect-bias-in-existing-AI-algorithms

A Chat with Andrew on MLOps: From Model-centric to Data-centric AI — YouTube

https://blogs.gartner.com/avivah-litan/2021/01/21/top-5-priorities-for-managing-ai-risk-within-gartners-most-framework/

Responsible AI Resources — Microsoft AI

https://www.microsoft.com/en-us/ai/our-approach?activetab=pivot1%3aprimaryr5

https://content.dataiku.com/dataiku-datasheet/gartner-mq-21

https://www.ibm.com/artificial-intelligence/ethics

https://www.ibm.com/watson/trustworthy-ai

Artificial Intelligence Is Here to Stay: Let’s Make Sure It’s Responsible (reworked.co)

https://go.coe.int/yhA2r

Pallabi Sarmah

Data and AI Managing Consultant/Machine Learning and Innovation/Data and AI strategy/Responsible AI