Announcing Our Investment in Arthur

Kelley Mak
Work-Bench
Dec 11, 2019

Bringing Trust and Transparency To ML

We’re excited to announce our investment in Arthur’s $3.3M seed round with Index Ventures and Homebrew.

Innovations in machine learning and AI algorithms, such as neural networks and deep learning, promise better performance and efficiency. However, the very mechanics that make this possible come at the expense of explainability and control. As organizations rely on machines to make more and more decisions, the ability to unpack the reasoning behind those judgments is more necessary than ever. Arthur is the first production AI monitoring platform that gives enterprises the power to detect and protect against model issues before they become financial or reputational liabilities.

The Problem

What’s clear from our quarterly corporate peer Machine Learning Roundtables and executive dinners is that all Fortune 1000 companies, regardless of industry, are on a mission to better use data to drive their businesses. These organizations range widely in ML maturity. Some are early in their journey, with a centralized data science group that supports the company and helps define use cases to highlight the power of data to senior leadership. Others have moved toward a distributed model, with data science and ML talent embedded in product organizations, multiple models in production, and a focus on optimizing their AI stack. Despite the varying levels of sophistication, one of the most common concerns across these companies is building and operating trusted AI.

The use of algorithmic models isn’t new; they have been powering critical business operations for years. However, innovation in AI is growing at an incredible rate, and enterprises are keen to employ the latest and greatest technologies. Unfortunately, the black-box nature of these novel techniques inhibits their adoption. We’ve heard from many data leaders who are eager to capture the efficiencies of deep learning, yet can’t deploy the technology because its decisioning can’t be explained. Millions of dollars in revenue are on the line if anything goes wrong.

Enterprises are understandably cautious in their AI operations. We’ve already seen too many horror stories of bias in AI in critical domains, whether criminal justice, hiring, or financial inclusion. We’ve also seen discriminatory decisions hurt brands’ reputations, recently exemplified in the outrage over the Apple Card’s biased credit decisioning by Goldman Sachs. Adversarial threats, such as evasion and poisoning, raise the risk of things going awry and have teams stepping back to rethink the security of their models. At the same time, regulators are putting pressure on businesses by requiring better transparency and disclosure — pressure that is beginning to reach not just heavily regulated industries but any business subject to GDPR.

The Product

Arthur gives enterprises the confidence to deploy and run machine learning and AI at scale. Their centralized platform provides robust monitoring and auditing of models in production, so teams can proactively detect issues and perform forensic investigations across their AI stack. Customers like the US Air Force and Harvard are using Arthur today to detect and respond to production performance issues, bring transparency and clarity to model decisions, and collect and measure inference data to catch instances of unwanted outcomes.

The Team

Few teams are better positioned to solve this problem than Arthur’s founders, who are central figures in the conversation around trusted AI. Adam Wenchel, Arthur’s CEO, and Priscilla Alexander, VP of Engineering, saw these risks early while building the Machine Learning group at Capital One, where they pioneered the firm’s Explainable AI efforts. They are joined by Liz O’Sullivan, who spearheads commercial operations and has been a leading advocate for ethics and fairness in AI, and John Dickerson, a machine learning faculty member at the University of Maryland whose work applies stochastic optimization and machine learning to practical problems, including highly regulated and scrutinized use cases such as kidney exchanges.

Congratulations to the entire Arthur team!

Check out more press coverage in TechCrunch and Wired.
