From Data To Model Risk: The Enterprise AI/DI Risk Management Challenge (Part 2)

by Kamesh Raghavendra, Chief Product Officer, The Hive

We are moving artificial intelligence (AI) systems from the lab, where we can control for a limited set of variables, to potentially massive-scale implementations, where variables will propagate and multiply. We must have an AI engineering discipline that can help predict and adjust for those variables.

From single-link to multi-link AI/DI decisions

Early enterprise AI implementations are still quite limited and tend to focus on single-link predictions such as:

  • “What’s the chance that this customer will churn?”
  • “What is the predicted lifetime revenue from this customer?”
  • “Which clause in this regulation has been changed since we last reviewed it?”
  • “Where are the logos appearing in this video?”

In a typical enterprise use case, these single-link AI models provide insight that human decision-makers use to determine the best next step. The decision space (the inputs and the outcomes that result from them) is well-defined and tightly limited.
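
To make "single-link" concrete, here is a minimal, illustrative sketch of the first question in the list above: a model that answers only "What's the chance that this customer will churn?" and feeds that single score into one well-defined next-step rule. The features, synthetic data, and threshold are hypothetical and assume scikit-learn; they are not drawn from the article.

```python
# Single-link prediction sketch: one model, one question, one narrow decision space.
# Feature names, synthetic data, and the 0.5 threshold are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical customer features: monthly usage hours, tenure in months,
# support tickets filed last quarter.
X = np.column_stack([
    rng.gamma(2.0, 10.0, 1000),
    rng.integers(1, 60, 1000),
    rng.poisson(1.5, 1000),
])
# Synthetic labels: churn is more likely with low usage, short tenure, many tickets.
logits = -0.05 * X[:, 0] - 0.03 * X[:, 1] + 0.6 * X[:, 2] + 1.0
y = (rng.random(1000) < 1 / (1 + np.exp(-logits))).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# The single-link output: churn probability for one customer.
churn_prob = model.predict_proba(X_test[:1])[0, 1]

# The decision space stays tightly limited: one score, one human-reviewed rule.
next_step = "offer retention discount" if churn_prob > 0.5 else "no action"
print(f"churn probability: {churn_prob:.2f} -> suggested next step: {next_step}")
```

The point of the sketch is the shape of the system, not the model choice: the AI produces one prediction, and a human-owned rule maps that prediction onto a small, pre-agreed set of outcomes.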