AI Trust & Transparency: a new crop of startups emerging to form an AI Trust & Transparency stack

Jennifer Jordan
Sep 28, 2019 · 7 min read

Headlines are full of the ways in which our use of data and automated decision-making algorithms has run amok, from blindly optimizing into unintended consequences, to extrapolating for populations not represented in the data, to replicating and even amplifying existing biases:

· YouTube’s algorithm makes it easy for pedophiles to find more videos of children — June 2019

· Human Genomics Research Has A Diversity Problem — March 2019

· Women’s Pain Is Different from Men’s — The Drugs Could be Too — March 2019

· Amazon’s controversial facial recognition software can’t tell the difference between men and women or recognize dark-skinned females, MIT study finds — January 2019

· Police use of Amazon’s face-recognition service draws privacy warnings — May 2018

· Will Using Artificial Intelligence to Make Loans Trade One Kind of Bias for Another? — March 2017

· Amazon Doesn’t Consider the Race of Its Customers. Should it? — April 2016

· Google’s algorithm shows prestigious jobs to men, not women — July 2015

· Racist Camera! No, I did not blink. I’m just Asian! — May 2009

At the same time, the push to regulate this technology is heating up. California's Consumer Privacy Act is set to go into effect, multiple cities and a few states have adopted legislation limiting the use of facial recognition technology, and the "Algorithmic Accountability Act" proposed in the U.S. Congress would require that companies be able to explain the technologies' decisions to consumers.

Corporations are increasingly aware of the risks. In PwC's 2017 Pulse Survey, 75% of CEOs indicated that potential for bias and lack of transparency were impeding their adoption of AI applications. In 2018 Microsoft and Google became the first publicly traded US companies to include an explicit risk factor related to AI in their filings with the SEC. Each warned that they would use AI applications in their businesses and that the use of these applications could result in unexpected outcomes or decisions that the companies could not explain. Still, a year later, according to Boston Consulting Group, only 38% of CEOs report having a clearly designed strategy for AI implementation.

The risks of implementing AI technologies are at once material and ethical. Wells Fargo was fined $175 million for pricing mortgage loans to people of color higher than loans to white borrowers with similar credit profiles, because the optimization model used for credit scoring weighted zip code more heavily than the individual's credit history. Two auto insurance companies were fined $40 million for a similar mistake in which zip code outweighed driving record, resulting in people of color being charged higher rates for auto insurance.

How do we provide companies with the data, tools, and methodology they need to adopt predictive and prescriptive technologies with higher assurance of quality, transparency, and accountability? Major tech companies and consultancies are just beginning to respond to the challenge. In the past year, IBM, Microsoft, and now Google have all introduced Python developer tools that take some of the commonly applied techniques for assessing bias in data sets, evaluating fairness, and explaining models, and make them available as open source. However, using them requires some degree of expert knowledge. In addition, these open source kits address only one portion of the problem: fairness and interpretability.
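To give a sense of what these toolkits automate, here is a minimal sketch of one of the simplest checks, a demographic parity comparison of outcomes across groups, written in plain pandas rather than in any particular vendor's API; the DataFrame and column names are assumptions for illustration only.

```python
import pandas as pd

def demographic_parity_report(df: pd.DataFrame,
                              group_col: str,
                              outcome_col: str) -> pd.Series:
    """Selection (e.g. approval) rate per group, plus the gap between the
    highest and lowest rates. A large gap is a signal to investigate
    further, not proof of unfairness on its own."""
    rates = df.groupby(group_col)[outcome_col].mean()
    gap = rates.max() - rates.min()
    print(rates.to_string())
    print(f"Demographic parity gap: {gap:.3f}")
    return rates

# Hypothetical usage on a model's loan-approval predictions:
# scores = pd.DataFrame({"group": applicant_groups, "approved": model_predictions})
# demographic_parity_report(scores, group_col="group", outcome_col="approved")
```

Open source toolkits wrap checks like this one, and many more sophisticated ones, but someone still has to know which metric to apply, to which data, and at which stage of the workflow.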

To address the risks inherent at each stage of the workflow for developing, deploying, and using AI commercially, we need to invest in a new crop of solutions from startups that will create a dynamic and continually evolving "stack" of technology for trust and transparency, similar to the stack that we've seen develop in cyber security.

We expect this stack will include tools for managing data privacy, permissions, and provenance; evaluating training data for bias; mitigating bias, assessing fairness, and increasing interpretability during model building and tuning; tracking hardware and model combinations; and providing an audit trail of the entire process, along with monitoring, analysis, management, and remediation capabilities after deployment.
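To make the audit trail layer of that stack a little more concrete, here is a hypothetical sketch of a single audit record tying together data provenance, model version, hardware, and stated intent; the schema and field names are our own assumptions, not any particular startup's product.

```python
import datetime
import json
from dataclasses import asdict, dataclass, field

@dataclass
class AuditRecord:
    """One entry in a hypothetical trust-and-transparency audit trail."""
    stage: str                # e.g. "data-collection", "training", "deployment"
    dataset_fingerprint: str  # e.g. a SHA-256 hash of the data used at this stage
    model_version: str
    hardware_spec: str        # the sensor/hardware the model expects, if relevant
    stated_intent: str        # the intended use captured before development began
    timestamp: str = field(
        default_factory=lambda: datetime.datetime.utcnow().isoformat())

def log_audit_record(record: AuditRecord, path: str = "audit_trail.jsonl") -> None:
    """Append the record to an append-only JSON-lines audit log."""
    with open(path, "a") as f:
        f.write(json.dumps(asdict(record)) + "\n")

# Example of logging a training step (all values are illustrative):
# log_audit_record(AuditRecord(
#     stage="training", dataset_fingerprint="sha256:<digest>", model_version="v1.2",
#     hardware_spec="ultrasound rev A, 320x240", stated_intent="prenatal risk screening"))
```

The real products in this layer will need far richer lineage, access control, and tamper resistance, but even a record this simple captures information that, as we will see below, is often lost today.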

Figure 1 below shows the typical workflow for development of artificial intelligence models. It is important to note that each step is iterative and that, even before beginning, teams should think through and capture intent, expected outcomes, and potential biases and ramifications.

Figure 1:

AI Trust & Transparency Applications are engaged throughout AI Development

[Diagram: AI Development Flow with AI Trust & Transparency Tools]

We may not be able to anticipate "unknown unknowns," but a thorough process with a diverse team can help ensure we are starting with the right questions, have good data, and are doing our best to anticipate the "known unknowns."

Industries such as financial services, insurance, healthcare, and pharmaceuticals, already heavily regulated, have compliance practices around data privacy, decision explanation, and audit trail. Companies in these industries, and European companies subject to Article 22 of the GDPR privacy regulation, are required to be able to respond should a customer request an "explanation" of an automated decision about them. But widespread deployment of models into the wild demands that we think beyond "explanation" and into the complexity of managing and maintaining models and the potential impact of their decisions.
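As a rough illustration of what producing such an "explanation" can look like in practice (a generic sketch, not a statement of what Article 22 requires), one model-agnostic approach is to re-score an individual decision with each feature replaced by a neutral baseline value and report which features moved the score most; every name below is hypothetical.

```python
import numpy as np

def per_decision_explanation(score_fn, x, baseline, feature_names):
    """Rough, model-agnostic 'reason codes' for a single decision: replace
    each feature with a baseline value (e.g. the training mean) and report
    how much the model's score changes. A crude cousin of more principled
    attribution methods, shown here purely as an illustration."""
    base_score = score_fn(x)
    contributions = {}
    for i, name in enumerate(feature_names):
        x_masked = np.array(x, dtype=float)  # copy so the original is untouched
        x_masked[i] = baseline[i]
        contributions[name] = base_score - score_fn(x_masked)
    # Sort so the features that moved the score the most come first.
    return sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True)

# Hypothetical usage with any function that scores a single example:
# reasons = per_decision_explanation(model_score, applicant_features,
#                                    training_means, ["income", "credit_history", "zip_code"])
```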

Michael I. Jordan's April 19, 2018 Medium article, Artificial Intelligence — The Revolution Hasn't Happened Yet, provides a striking illustration of the complexity we face. He tells the story of visiting the obstetrician for an ultrasound when his wife was pregnant. The doctor looked at the machine's output and told them they were at high risk of having a baby with Down's Syndrome. She recommended an amniocentesis, a procedure with a 1/300 risk of damaging the fetus.

Michael had an advantage: he is a professor of data science and a pioneer of artificial intelligence at UC Berkeley. He dug into the model and the hardware, and what he discovered was shocking but all too common. The machine used for their ultrasound had a much higher image resolution than the machine that produced the images on which the Down's Syndrome prediction model was originally developed. In other words, the device manufacturer had upgraded the hardware to a higher-resolution camera, but the model had not been revisited to confirm it remained compatible with the upgrade.

The model and the hardware had become unsynchronized in the wild, to potentially catastrophic effect for multiple families, who might have received a false positive assessment of the likelihood their baby would have Down's Syndrome and been sent for an unnecessary and very high-risk procedure. Not only does this create real material risk for the physician and the medical device manufacturer, it creates an enormous ethical impact for the families on the receiving end of the model's output.
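A small sketch of the kind of post-deployment guard that could catch this class of failure: record the input specification the model was validated against, and flag any input that does not match it before a prediction is trusted. The specification values and function names here are assumptions for illustration only.

```python
import warnings

# Hypothetical specification captured when the model was trained and validated.
TRAINING_SPEC = {"image_width": 320, "image_height": 240, "sensor": "ultrasound rev A"}

def check_input_compatibility(image, sensor_id: str) -> bool:
    """Return True if the incoming image (a NumPy-style array) matches the
    resolution and sensor the model was validated on; otherwise warn so the
    prediction can be withheld or routed for human review."""
    height, width = image.shape[:2]
    ok = (width == TRAINING_SPEC["image_width"]
          and height == TRAINING_SPEC["image_height"]
          and sensor_id == TRAINING_SPEC["sensor"])
    if not ok:
        warnings.warn(
            f"Input ({width}x{height}, {sensor_id}) does not match the spec the model "
            f"was validated on ({TRAINING_SPEC}); the prediction may be unreliable.")
    return ok

# Hypothetical usage before scoring a scan:
# if check_input_compatibility(scan, device.sensor_id):
#     risk = model.predict(scan)
```

A check this simple is no substitute for revalidating the model after a hardware change, but it would at least have surfaced the mismatch instead of letting it fail silently.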

Managing these highly pervasive predictive systems in the wild will become increasingly complex when we introduce autonomy and enable multiple systems to interact with each other. It’s not difficult to imagine a similar unhinging happening with a robot on an assembly line, a drone, or even your autonomous automobile.

As a result, trust and transparency in AI is not likely to be solved simply with open source developer tools for assessing bias, fairness, or interpretability. Renewed research investment to advance artificial intelligence, the proliferation of AI companies and projects, and widespread commercial adoption will create an environment of layered complexity (to some extent we are already there), requiring a stack of tools to emerge, much like the one we have seen emerge for cyber security, to develop, audit, monitor, analyze, and manage these applications from inception to implementation.

In Figure 2 below, we see that in cyber security a deep, rich stack of technology now exists to address threats at the application, network, endpoint, and cloud layers. These tools developed over time to cover the full spectrum of what is needed to assess risk, detect and monitor threats, remediate them, and provide accountability.

Figure 2:

Cyber Security Technology Stack

[Chart: Cyber Security Technology Stack — Manelo Manjon, "A framework to help make sense of Cyber Security Tools," Network World, June 4, 2015]

The stack and the startups for Trust and Transparency in AI are emerging now to address the material risks inherent for enterprises adopting predictive applications. Venture capitalists need to lead the way in funding the strongest teams with the strongest technology they can find to address this first wave of Trust and Transparency solutions. In Figure 3 are just a few of the contenders we have identified within the stack:

Figure 3:

AI Trust and Transparency Stack Startups

[Chart: AI Development Workflow with AI Trust & Transparency startups]

Over the next several months, we will release a series of brief video interviews with entrepreneurs building the AI Trust and Transparency stack. We’ll talk with them to find out more about what their customers are doing today to manage AI — what’s working and what’s not. And, we’ll learn how their solutions are helping to solve the challenge. Join us for the series, Conversations at the Intersection of AI & Ethics: Data and Tools for AI Trust and Transparency.
