The opportunity to apply responsible AI (Part 2): Guidelines, Data Science tools, legal initiatives, and tips.

--

By Jesús Templado, Director at Bedrock

Intro

In the first part of this article we discussed the potential harm and risks of some Artificial Intelligence applications, even as the technology demonstrates immense potential across many industries. We concluded that the ability to explain each algorithm's behaviour and its decision-making patterns is now key to assessing the effectiveness of AI-powered systems.

In this second part we provide tips, tools and techniques to tackle this challenge. We also look at promising initiatives around responsible AI in the EU and worldwide, and we close by arguing that responsible AI is an opportunity rather than a burden for organisations.

Technical guidelines and best practices

As professionals who operate in this field and who can be held accountable for what we develop, we should always ask ourselves two key questions:

  1. What does it take for this algorithm to work?
  2. How could this algorithm fail, and for whom?

Moreover, those developing the algorithms should ensure that the data used to train the model is bias-free, and that they are not leaking any of their own biases into it. Here are a couple of tips to minimise bias:

  • Any dataset used should represent the ideal state rather than the current one: randomly sampled data may carry biases because we live in an unequal world. We must therefore proactively ensure that the data represents everyone fairly.
  • The evaluation phase should include a thorough testing stage across social groups (segmented by gender, age, ethnicity, income, etc.) whenever population samples are included in the development of the model or the outcome may affect people; see the sketch after this list.
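
To illustrate the kind of group-level testing described above, here is a minimal sketch in Python. It compares a model's positive-outcome rate across a sensitive attribute; the column names and data are illustrative, not a real pipeline.

```python
# A minimal sketch of a group-level fairness check: compare the rate of
# positive outcomes across a sensitive attribute. Data is illustrative.
import pandas as pd

results = pd.DataFrame({
    "gender": ["F", "M", "F", "M", "F", "M"],
    "prediction": [1, 1, 0, 1, 0, 1],  # e.g. 1 = loan approved
})

# Selection rate per group: large gaps flag potential disparate impact.
rates = results.groupby("gender")["prediction"].mean()
print(rates)

# A common rule of thumb (the "four-fifths rule"): the lowest group rate
# should be at least 0.8x the highest.
ratio = rates.min() / rates.max()
print(f"Disparate impact ratio: {ratio:.2f}")
```

The same check can be repeated for age bands, ethnicity or income brackets, and the fairness toolkits covered in the next section formalise these metrics.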

The tools Data Scientists have

There are several tools and techniques that professionals in our field use when they need to explain complex ML models:

  • SHAP (SHapley Additive exPlanations): Its technical definition is based on the Shapley value: the average marginal contribution of a feature value across all possible coalitions. In plain English, it breaks the final prediction down into the contribution of each attribute by considering the model's output over all possible combinations of inputs (see the sketch after this list).
  • IBM’s AIX360 or AI Fairness 360: An open-source library that provides one of the most complete stacks to simplify the interpretability of machine learning programs and allows the sharing of the reasoning of models on different dimensions of explanations along with standard explainability metrics. It was developed by IBM Research to examine, report, and mitigate discrimination across the full AI application lifecycle. It is likely that we will see some of the ideas behind this toolkit being incorporated into mainstream deep learning frameworks and platforms.
  • What-If Tool: A platform from Google to visually probe the behaviour of trained machine learning models with minimal coding requirements.
  • DEON: A command-line tool that generates a relatively simple ethics checklist for responsible data science.
  • Model Cards: Proposed by Google Research, Model Cards document whether the intent of a given model matches its original use case. They help stakeholders understand the conditions under which a model performs reliably and can safely be deployed.
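
As a flavour of how these tools are used in practice, below is a minimal SHAP sketch applied to a hypothetical loan-approval model. The dataset, features and model are illustrative assumptions, not a recommendation for real credit scoring.

```python
# A minimal SHAP sketch: break a tree model's predictions into additive
# per-feature contributions. Dataset and features are illustrative.
import pandas as pd
import shap
from sklearn.ensemble import GradientBoostingClassifier

X = pd.DataFrame({
    "income": [32_000, 54_000, 41_000, 75_000, 28_000, 62_000],
    "age": [29, 45, 38, 52, 24, 41],
    "loan_amount": [120_000, 90_000, 150_000, 60_000, 140_000, 80_000],
})
y = [0, 1, 0, 1, 0, 1]  # 1 = mortgage approved

model = GradientBoostingClassifier().fit(X, y)

# TreeExplainer computes Shapley values efficiently for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)

# Each row now carries one additive contribution per feature; the base
# value plus these contributions reconstructs the model's raw output,
# which is exactly what lets us explain a single rejected application.
print(dict(zip(X.columns, shap_values[0])))
```

An explanation of this kind is precisely what a regulator would expect when asking why a specific application was rejected.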

The AI greenfield requires strict boundaries

AI represents a huge opportunity for society and corporations, but modelling processes should be regulated to ensure that new applications and analytical mechanisms always ease and improve everyone's life. There is no comprehensive legal framework that tackles this major issue, sets boundaries or provides bespoke guidelines. Likewise, there is no international consensus that would allow consistent regulation, auditing or review of what is right and wrong in AI; in fact, there is often not even consensus within individual countries.

Specific frameworks such as Illinois' Biometric Information Privacy Act (BIPA) in the US are a good start. BIPA has been a necessary pain for tech giants, as it forbids the collection of biometric data such as facial recognition images, iris scans or fingerprints without explicit consent.

There are ambitious initiatives, such as OdiseIA, that shed some light on what to do across industries and aim to build a plan for measuring the social and ethical impact of AI. But this is not nearly enough: international institutions urgently need to establish global consistency. If a predictive model recommends rejecting a mortgage, can the responsible data science and engineering team detail the logical process and explain to a regulator why it was rejected? Can the lead data scientist prove that the model is reliable within an acceptable range of fairness? Can they prove that the algorithm is not biased?

The AI development process must somehow be regulated, establishing global best practices as well as a mandatory legal framework around this science. Regulating the modelling process can mean several things: from hiring an internal compliance team that supports data and AI specialists, to commissioning some form of external audit for every algorithm created or implemented.

AI could be regulated in much the same way that the European Medicines Agency (EMA) follows specific protocols in the EU to verify the safety and efficacy of drugs and to monitor their adverse effects.

Emerging legal initiatives: Europe leading the way

On 8th April 2019, the EU High-Level Expert Group on Artificial Intelligence proactively published the Ethics Guidelines for Trustworthy AI, applicable to model development. They established that AI should always be designed to be:

  1. Lawful: Respecting applicable laws and regulations.
  2. Ethical: Respecting human ethical principles.
  3. Robust: Both from a technical and a social perspective.

The Algorithmic Accountability Act, introduced in the US Congress in 2019, is another legal initiative that aimed to set a framework for the development of algorithmic decision-making systems; it has also served as a reference for other countries, public institutions and governments.

Fast forward to the present day: on 21st April 2021 the European Commission proposed new rules and actions with the ambition of turning Europe into the global hub for trustworthy AI, combining the first-ever legal framework on AI with a new Coordinated Plan with Member States. The plan aims to guarantee the safety and fundamental rights of people and businesses while strengthening AI uptake, investment and innovation across Europe. The new rules will be applied in the same way across all European countries following a risk-based approach, and a European Artificial Intelligence Board will facilitate implementation and drive the development of AI standards.

The opportunity in regulation

Governance in AI, such as that which the EU is driving, should not be considered an evil. If performed well, AI regulation will level the playing field, will create a sense of certainty, will establish and strengthen trust and will promote competition. Moreover, governance would allow us to legally frame the boundaries of acceptable risks and benefits of AI monetisation while ensuring that any project is set up for success.

“AI regulation will level the playing field, will create a sense of certainty, will establish and strengthen trust and will promote competition.”

Regulation actually opens a new market for consultancies that help other companies and organisations manage and audit algorithmic risks. Cathy O'Neil, a mathematician and the author of Weapons of Math Destruction, a book that highlights the risk of algorithmic bias in dozens of contexts, heads O'Neil Risk Consulting & Algorithmic Auditing (ORCAA), a company set up to help organisations identify and correct potential biases in the algorithms they use.

Being able to count on an international legislator or auditor would also allow those achieving an "audited player" label to project a positive brand image while remaining competitive. To use an analogy from drug development: modern society relies on medicines prescribed by doctors because there is an inherent trust in their qualifications, and because doctors trust the compulsory clinical-trial process that every drug goes through before hitting the market.

Final thoughts

Simply put, AI has no future without us humans. Systems collecting data typically have no way to validate what they collect or the context in which it is recorded. Data has no intuition, strategic thinking or instincts. Technological advancements are shaping the evolution of our society, but each and every one of us is responsible for paying close attention to how AI, as one of these main advancements, is used for the benefit of the greater good.

If you and your organisation want to be ahead of the game, don't wait for regulation to come to you; take proactive steps before any imposed regulatory shifts:

  • It must be well understood that data is everything. Scientists strive to ensure the quality of any data set used to validate a hypothesis and go to great lengths to eliminate unknown factors that could alter their experiments. Controlled environments are the essence of well-designed analytical modelling.
  • Design, adapt and improve your processes to establish an internal auditing framework: something like a minimum viable checklist that allows your team to work on fair AI while others are still trying to squeeze an extra 1% of accuracy from an ML model (see the sketch after this list). Being exposed to the risk of deploying a biased algorithm that may harm your customers, your scientific reputation and your P&L is not appealing.
  • Design and build repositories to document all newly created internal governance and regulatory processes, so that all work is accessible and can be fully disclosed to auditors or regulators when needed, increasing external trust in your scientific work.
  • Maintaining teams that are diverse in backgrounds, demographics and skills is important for avoiding unwanted bias. Women and people of colour remain under-represented in the STEM world, yet they may be the first to notice bias issues if they are part of the core modelling and development team.
  • Be a promoter of and an activist for change in the field. Ensure that your communications team and technical leaders take part in AI ethics associations or similar debates. This will allow your organisation to rightly be considered a force for change.
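
As a concrete (and deliberately simplistic) illustration of the minimum viable checklist mentioned above, a team could gate every model release on a handful of explicit sign-offs. All item names here are hypothetical:

```python
# A hypothetical sketch of a "minimum viable checklist" gate: block a
# model release until every responsible-AI item has been signed off.
PRE_DEPLOYMENT_CHECKLIST = {
    "training_data_representativeness_reviewed": True,
    "performance_evaluated_per_demographic_group": True,
    "explainability_report_generated": True,   # e.g. a SHAP summary
    "model_card_published": False,
    "compliance_sign_off_obtained": False,
}

def ready_to_deploy(checklist: dict) -> bool:
    """Return True only if every checklist item is complete."""
    pending = [item for item, done in checklist.items() if not done]
    for item in pending:
        print(f"Blocked: '{item}' is not complete.")
    return not pending

ready_to_deploy(PRE_DEPLOYMENT_CHECKLIST)  # prints the two blocking items
```

Tools such as DEON can generate a fuller starting checklist to adapt to your own processes.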

All of these are strategic AI mechanisms that we use at Bedrock to ensure the legal and fair use of data. The greatest risk for you and your business lies not only in ignoring the potential of AI, but also in not knowing how to navigate it with fairness, transparency, interpretability and explainability.

Responsible AI, in the form of internal control, governance and regulation, should not be perceived as a technical gateway process or as a burden on your board of directors, but as a potential competitive advantage: a value-added investment that is still unknown to many. An organisation that successfully acts on its commitment to ethical AI is poised to become a thought leader in this field.

--

Jesus Templado González
Bedrock — Human Intelligence

I advise companies on how to leverage DataTech solutions (Rompante.eu) and I write easy-to-digest articles on Data Science & AI and its business applications