How can we trust AI?

Kenny Wong
Verbz.ai
Dec 14, 2019

In my last post, I discussed how a lack of trust in AI-driven solutions is an obstacle to adoption. So how do we build trust? We trust someone when we have a relationship with them and understand how they think, how they react and how they have decided in the past. Trust develops where there is a bond of obligation and expectation between both parties, and where each party is aware of the other's values, culture and morality.

Trust is the social contract that allows our communities to interact and function in the face of uncertainty.

Rachel Botsman talks about trustworthiness distilling down to four traits:

  1. Reliability — are you dependable to do as you say?
  2. Competency — are you able to do as you say?
  3. Integrity — do you say what you mean and mean what you say?
  4. Empathy — do you care how your choices affect others?

We can apply this way of thinking to inanimate objects and even services. We trust a tool like a hammer to perform its designed purpose because it was recommended by a trusted source such as a friend or a review site, or because we are reassured we have recourse to the hardware store we bought it from. Understanding how something works and being able to predict how it will behave is vital to how comfortable we are relying on it for our needs.

Translating these values and principles to apply to AI has been the subject of extensive ongoing research (See Further Reading below). In short, much like the traits we look for in an individual, we want AI to be Responsible, Fair, Stable and Trustworthy.

How do we apply these principles to design Responsible AI? To build trust in AI we need to address the underlying concerns and incorporate them into the design itself. Excuse the rambling notes that follow; they are as much a trail of crumbs tracking my own journey of discovery through this rapidly changing landscape.

Five aspects to building trust in AI

Drawing on the thinking of PwC, EY, the European Commission and IBM, here are five aspects needed to build trust in AI:

  1. Explainability — the ability to understand the reasoning behind each individual prediction.
  2. Lawful — cognizant and respectful of the relevant laws and regulations.
  3. Transparency — the level of mathematical certainty behind predictions.
  4. Robustness — how stable and resistant to tampering both the model and its datasets are.
  5. Ethical — beyond managing bias in the model; abiding by social norms and values.

Explainability & Interpretability

DARPA sees explainable AI as the next wave of AI development and has initiated the Explainable AI (XAI) program to promote it. The program seeks to create a suite of machine learning techniques that:

  • Produce more explainable models, while maintaining a high level of learning performance (prediction accuracy); and
  • Enable human users to understand, appropriately trust, and effectively manage the emerging generation of artificially intelligent partners.

For businesses, the benefits of explainable AI are both Accountability and “Audit-ability”. Automated decisions can be reviewed for the line of reasoning and the weightings used, making it possible to refine models and anticipate potential gaps.
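As a rough sketch of what “audit-ability” could look like in practice, the snippet below bundles one automated decision with the evidence needed to review it later. The field names and the credit-risk example are hypothetical, not a standard schema.

    # A minimal, hypothetical audit record for one automated decision.
    import json
    from datetime import datetime, timezone

    def audit_record(model_version, inputs, prediction, explanation):
        """Bundle a decision with the evidence needed to review it later."""
        return {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "model_version": model_version,   # which model made the call
            "inputs": inputs,                 # the features the model saw
            "prediction": prediction,         # what the model decided
            "explanation": explanation,       # e.g. per-feature contributions
        }

    record = audit_record(
        model_version="credit-risk-v1.3",                     # hypothetical name
        inputs={"income": 52000, "utilisation": 0.41},
        prediction="approve",
        explanation={"income": +0.32, "utilisation": -0.11},
    )
    print(json.dumps(record, indent=2))

Keeping the model version in each record is what makes later review meaningful once models are retrained.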

For data scientists and designers, explainable models give insight into which models work best for a given use case and how to refine them.

For regulators charged with consumer protection, being able to understand the models builds trust that results are equitable and transparent. The UK’s Information Commissioner’s Office has initiated consultations on regulation mandating explainability.

Being accurate is not enough to build trust. Being explainable involves factors such as:

  • Breaking down the process of how the algorithm reached decisions.
  • Alerting users to be mindful of the Unknown Unknowns of the algorithm.
  • Understanding the factors that determine the outcomes.
  • Being clear on how the algorithm is learning via the rules and patterns being built.

Consider the level of explainability at each stage of the modelling process. Bahador Khaleghi has a good series of articles on this:

Pre-modelling

  • What are the relevant features of the data set?
  • Summarise and visualise the characteristics of the data set. What are its prototype (representative) and criticism (atypical) examples?
  • Balance the amount of feature engineering: engineered features may improve accuracy but can reduce explainability.
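A minimal sketch of such a pre-modelling pass, assuming a small tabular dataset with a binary target (the column names are invented for illustration):

    # Pre-modelling: summarise the data and take a crude first look at
    # feature relevance. Column names here are illustrative only.
    import pandas as pd

    df = pd.DataFrame({
        "age":    [23, 45, 31, 52, 40, 36],
        "income": [32000, 81000, 45000, 98000, 60000, 52000],
        "target": [0, 1, 0, 1, 1, 0],
    })

    # Summarise the characteristics of the data set.
    print(df.describe())

    # Correlation with the target as a first, imperfect proxy for relevance.
    print(df.corr()["target"].drop("target").sort_values())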

Model development

  • Explainability is best incorporated by design at the outset. The choice of model needs to fit the underlying data so that it is performant while remaining understandable to the intended audience.
  • Be mindful of the tradeoff between performance and explainability, or consider using one of the newer models that attempt to deliver both.
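As an illustration of that tradeoff, the sketch below (using scikit-learn and one of its bundled datasets purely for convenience) compares a shallow decision tree, whose reasoning can be printed and read end to end, against a gradient boosting ensemble that is much harder to narrate:

    # Performance vs explainability: a shallow tree we can read end to end,
    # versus a boosted ensemble we generally cannot.
    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier, export_text

    X, y = load_breast_cancer(return_X_y=True, as_frame=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_train, y_train)
    boost = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

    print("shallow tree accuracy:      ", tree.score(X_test, y_test))
    print("gradient boosting accuracy: ", boost.score(X_test, y_test))

    # The tree's full decision logic can be printed and followed by a human.
    print(export_text(tree, feature_names=list(X.columns)))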

Post-modelling

  • The majority of research in Explainable AI targets models that have already been developed with predictive performance in mind, explaining them after the fact.
  • With numerous approaches being developed, Khaleghi proposes assessing them around their common structure, organized around four key aspects:
  1. Target, what is to be explained about the model;
  2. Drivers, what is causing the thing you want explained;
  3. Explanation family, how the explanation information about the drivers causing the target is communicated to the user;
  4. Estimator, the computational process of actually obtaining the explanation.
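As one concrete post-modelling example of this structure: the target is the trained model’s behaviour, the drivers are its input features, a ranked list is the explanation family, and permutation importance is the estimator. The sketch below uses scikit-learn’s permutation_importance as one of many possible post-hoc estimators; it is not Khaleghi’s own tooling.

    # Post-modelling explanation: treat the trained model as a black box and
    # measure how much shuffling each feature degrades its performance.
    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.inspection import permutation_importance
    from sklearn.model_selection import train_test_split

    X, y = load_breast_cancer(return_X_y=True, as_frame=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

    result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
    ranked = sorted(zip(X.columns, result.importances_mean), key=lambda t: -t[1])
    for name, score in ranked[:5]:
        print(f"{name}: {score:.3f}")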

However, as DARPA explains, there is currently a tradeoff: the more explainable the system, the simpler the model risks being, and performance may be compromised. Different AI applications will require varying levels of accountability and transparency; a fashion recommendation engine and a cancer detection program demand quite different standards. Explainable models such as decision trees, regression algorithms and lists of business rules, whilst not simple, are inherently easier to follow.

Until newly developed models deliver outcomes that are both performant and explainable, explainability will come at a price. There are hybrid models such as BagNets that wrap explainability around black-box AI, whilst joint prediction-and-explanation approaches such as the Teaching Explanations for Decisions (TED) framework marry each prediction with its underlying rationale.

Helpfully, there are myriad tools to assist with explainability.
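LIME is one such tool (installed separately with pip install lime): it explains an individual prediction by fitting a simple surrogate model around that one sample. The sketch below is indicative only, and the exact API may differ between versions:

    # A per-prediction explanation with LIME: which features pushed this one
    # sample towards its predicted class.
    from lime.lime_tabular import LimeTabularExplainer
    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import RandomForestClassifier

    data = load_breast_cancer()
    model = RandomForestClassifier(random_state=0).fit(data.data, data.target)

    explainer = LimeTabularExplainer(
        data.data,
        feature_names=list(data.feature_names),
        class_names=list(data.target_names),
        mode="classification",
    )

    explanation = explainer.explain_instance(
        data.data[0], model.predict_proba, num_features=5
    )
    print(explanation.as_list())   # (feature condition, weight) pairs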

Lawful

Abiding by the same set of laws and regulations as the customer seems an obvious requirement. Creating and accepting a set of best practices and standards for AI design and operation would be proactive and would foster collaboration between AI designers and users, without the heavy hand of regulatory oversight, which has yet to reach any form of consensus.

Striving to be lawful as an AI service extends to data privacy and governance. Since data is helping the AI make better decisions, the providers of that data need clarity on how it is handled, stored, used and exposed. Europe’s General Data Protection Regulation (GDPR) provides a framework protecting basic rights to one’s data and its portability.

It’s an interesting side note that the initial pushback by large tech corporations against AI regulation has been replaced in recent years by calls for government oversight (Amazon, Facebook, IBM and Microsoft on facial recognition, for example). I question whether the aim is truly altruistic and visionary, or whether the underlying strategy is to avoid heavy-handed outright bans and to fortify the marketplace against new entrants. Onerous regulation and compliance requirements are easier for well-funded incumbents to manage than for disruptive startups trying to break into the market, especially if the incumbents have a seat at the table during drafting. For example, in 2018, Alphabet spent over USD$21M lobbying the Federal government on Internet-related issues. This Bloomberg article shows Google’s peers are not far behind. If regulation isn’t crafted evenhandedly, avoiding an industry-centric stance, it risks stifling innovation and the widespread benefits AI could bring.

Transparency

The appropriate level of transparency will depend on how critical the application is. The aim is not to expose lines of code, which might tick the box on transparency yet leave customers none the wiser without the training data.

Consistent with the desire for transparency in building trust in AI is the presence of Human Agency and Oversight. AI operations should empower human decision making and support fundamental human rights. AI design needs to consider the benefits of having a human in the loop or in command, even if this affects absolute performance.
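A minimal sketch of what a human in the loop can mean at the decision point: predictions the model is unsure about are routed to a person rather than actioned automatically. The 0.9 threshold is an arbitrary illustration, not a recommendation:

    # Route low-confidence predictions to a human reviewer.
    import numpy as np

    def decide(probabilities, threshold=0.9):
        """Return ('auto', label) when confident, ('review', None) otherwise."""
        label = int(np.argmax(probabilities))
        confidence = float(np.max(probabilities))
        if confidence >= threshold:
            return "auto", label
        return "review", None   # escalate to a human operator

    print(decide([0.97, 0.03]))   # confident -> handled automatically
    print(decide([0.55, 0.45]))   # uncertain -> sent for human review

Where the threshold sits is itself a design decision, trading throughput against human oversight.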

Harvard Business Review discussed a study where too much transparency led to adverse outcomes in exam assessments. The study highlights some concerns about transparency:

  • Algorithms are proprietary and intellectual property form part of the competitive advantage so businesses will be reluctant to reveal them.
  • Limiting exposure to regulators alone shifts the security burden onto them, but this may become unacceptable and untenable as AI use cases extend into more and more of consumer decision making. We don’t want regulatory oversight of, and access to, all parts of our lives.
  • Technical transparency can be gamed by both opportunistic attacks (e.g. seeking loopholes for advantage) and adversarial attacks (e.g. aiming to degrade the service).

Robustness

Software communities identify and expose vulnerabilities and shortcomings in traditional software services on an ongoing basis, and fixes are proposed and deployed to limit the impact on user confidence in those systems. AI services need similar levels of reliability to address customers’ commercial and security concerns. Adversarial attacks can arise from different vectors. For example, training data could be compromised with doctored samples that weight the model towards certain outcomes, or with noise that degrades the fidelity of results. In the speech recognition space that Verbz is exploring, the spectre of fake voices is driving research into protections such as digitally watermarking the neural networks themselves.
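As a crude illustration of guarding the training data itself, the sketch below flags samples that sit implausibly far from the rest of the data before any model is fitted. Real poisoning defences are far more sophisticated; the synthetic data and the three-sigma cutoff are purely for demonstration:

    # Screen training data for implausible samples before fitting a model.
    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(0.0, 1.0, size=(200, 5))   # ordinary samples
    X[0] = 15.0                                # a planted, doctored sample

    centroid = X.mean(axis=0)
    distances = np.linalg.norm(X - centroid, axis=1)
    cutoff = distances.mean() + 3 * distances.std()

    suspects = np.where(distances > cutoff)[0]
    print("samples to review before training:", suspects)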

Firms like IBM and PWC offer frameworks to build a solution with Responsible AI in mind.

Ethical

The Australian Government recently released its aspirational AI Ethics Principles, which cover much of what we have discussed. They are best digested in conjunction with international consensus work such as the OECD’s Principles on AI, adopted by 42 countries in May 2019, and the G20’s human-centric AI Principles, adopted in June 2019.

In brief, they are:

  • Human, social and environmental wellbeing — Throughout their lifecycle, AI systems should benefit individuals, society and the environment.
  • Human-centred values — Throughout their lifecycle, AI systems should respect human rights, diversity, and the autonomy of individuals.
  • Fairness — Throughout their lifecycle, AI systems should be inclusive and accessible, and should not involve or result in unfair discrimination against individuals, communities or groups.
  • Privacy protection and security — Throughout their lifecycle, AI systems should respect and uphold privacy rights and data protection, and ensure the security of data.
  • Reliability and safety — Throughout their lifecycle, AI systems should reliably operate in accordance with their intended purpose.
  • Transparency and explainability — There should be transparency and responsible disclosure to ensure people know when they are being significantly impacted by an AI system, and can find out when an AI system is engaging with them.
  • Contestability — When an AI system significantly impacts a person, community, group or environment, there should be a timely process to allow people to challenge the use or output of the AI system.
  • Accountability — Those responsible for the different phases of the AI system lifecycle should be identifiable and accountable for the outcomes of the AI systems, and human oversight of AI systems should be enabled.

Ethical AI design will need to operate within acceptable social norms and values to sustain human engagement and acceptance. This Scientific American article captures the dilemma of balancing power with responsibility:

“Consistency is indispensable to ethics and integrity. Our decisions must adhere to a standard higher than statistical accuracy; for centuries, the shared virtues of mutual trust, harm reduction, fairness and equitability have proved to be essential cornerstones for the survival of any system of reasoning. Without internal logical consistency, AI systems lack robustness and accountability — two critical measures for engendering trust in a society. By creating a rift between moral sentiment and logical reasoning, the inscrutability of data-driven decisions forecloses the ability to engage critically with decision-making processes.”

AI is easily capable of making decisions based on longer time frames and larger datasets than humans consider. This misalignment of time frames and contexts can lead to perceived unfairness or bias in AI driven decisions that affect its human clientele.
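One simple way to monitor for that kind of bias is to compare favourable-outcome rates across groups (demographic parity). The group labels and decisions in the sketch below are invented, and real fairness auditing draws on several complementary metrics:

    # Demographic parity: compare favourable-outcome rates between two groups.
    import numpy as np

    decisions = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])   # 1 = favourable outcome
    group = np.array(["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"])

    rate_a = decisions[group == "a"].mean()
    rate_b = decisions[group == "b"].mean()

    print(f"group a rate: {rate_a:.2f}, group b rate: {rate_b:.2f}")
    print(f"demographic parity difference: {abs(rate_a - rate_b):.2f}")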

The West has not been alone in considering the ethical implications of AI design. In May 2019, a group of Chinese institutions and corporations led by the Beijing Academy of Artificial Intelligence (BAAI) released their own set of AI principles.

Conclusion

In covering the five aspects to building trust in AI, it becomes clear that the process is an ongoing one. AI design must be purposeful towards both business objectives and stakeholder expectations. As community demands and acceptance change, the governance of AI will need to be agile to maintain system integrity. Adapting to the ever-changing threat of adversarial attack will be as necessary as monitoring for unintended bias, along with the capacity to act swiftly to retain commercial relevance and reputation.

Further Reading — AI Design Principles

Originally published at http://blog.verbz.ai on December 14, 2019.

Kenny Wong
Startups Helper | Optimist | Co-founder at Verbz.ai where we help you get more done.