Insights from our AI-based decision making roundtable v.1

Mike Reiner
Oct 21, 2020 · 5 min read

DataSeries | VRT131020

In October, DataSeries, an OpenOcean-led initiative, hosted a virtual roundtable on “AI-based decision making” together with Jennifer L. Schenker, the founder of The Innovator. The discussion also led to an article that you can find here: AI-Decision Making: State Of Play And What’s Next



The majority of corporations are not ready for sophisticated AI implementations:

  • The main cause is poor data management and infrastructure
  • Corporations should spend about 80% of their time putting in place the digital foundations for AI implementations

Data (schema) standards and structure needed

  • “If there is one thing that we could do to save hundreds of billions of dollars every year, it’s to start with the standardisation of a data schema,” says Vishal Chatrath. When the mobile Internet was created, a lot of effort went into standardisation, resulting in the Global System for Mobile Communications (GSM). “We need the equivalent of a GSM for AI.”

Hybrid systems often work better

  • Not only combining machine learning approaches with traditional systems, but also keeping the human in the loop to make better decisions.
  • Example: Secondmind achieved a 35% efficiency improvement by combining Gaussian-process-based probabilistic modelling with decision-making machine learning libraries. The technology suite is adept at quantifying uncertainty, identifying operational trade-offs and explaining outcomes from sparse, low-volume data, capabilities that meet business decision-making demands where other machine learning techniques, such as deep learning, struggle.
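The escalation logic behind such a human-in-the-loop setup can be sketched in a few lines. This is not Secondmind's implementation; the function name and thresholds are invented, and `mean`/`std` stand in for a probabilistic model's predictive distribution (for instance, from a Gaussian process):

```python
# Hypothetical sketch: route decisions based on model uncertainty.
# A probabilistic model (e.g. a Gaussian process) supplies a predictive
# mean and standard deviation; only confident predictions are automated.

def route_decision(mean, std, threshold=0.5, std_limit=0.2):
    """Automate only when the model is confident; otherwise escalate."""
    if std > std_limit:
        return "escalate-to-human"   # uncertainty too high to automate
    return "approve" if mean >= threshold else "reject"

print(route_decision(0.9, 0.05))  # confident and positive -> approve
print(route_decision(0.4, 0.05))  # confident and negative -> reject
print(route_decision(0.6, 0.40))  # uncertain -> human review
```

The design point is that the machine handles the high-confidence bulk of decisions while humans review exactly the cases where the model admits it does not know.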


Merging Rules-Based Systems And Machine Learning

  • When you are dealing with multi-modal messy data and more complex problems it is generally agreed that machine learning is the better approach. But it is far from perfect. If new regulations come into play or the rules of the past no longer apply, ML, which has been trained on historical data, has no clue what to do — or even to recognize that the context has changed.
    Example: PayPal now screens all of its transactions with a combination of ML fraud detection and explicit business rules that catch some very specific security concerns
  • Mathematical optimization, a logic-based programming approach that has been around for decades in operations research, can deal with complex logistical issues such as rescheduling, and it can be combined with AI-based predictions
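A toy illustration of the rules-plus-ML pattern (not PayPal's actual system; every field name and threshold here is invented): hard business rules run first and override the learned model, so a regulation change can be encoded immediately without retraining.

```python
# Hypothetical sketch: explicit rules take precedence over an ML score.
# All rules, thresholds and field names are invented for illustration.

def flag_transaction(tx, ml_score):
    # Hard rules catch specific known risks regardless of the ML score.
    if tx["amount"] > 10_000 and tx["country"] in {"XX"}:
        return ("block", "rule: high-value transfer to a flagged country")
    if tx["card_age_days"] < 1 and tx["amount"] > 1_000:
        return ("review", "rule: large charge on a brand-new card")
    # Otherwise fall back to the learned model.
    if ml_score > 0.9:
        return ("block", "ml: high fraud score")
    return ("allow", "ml: score below threshold")

tx = {"amount": 2_000, "country": "GB", "card_age_days": 0}
print(flag_transaction(tx, 0.1))  # rule fires despite a low ML score
```

Because each rule returns its own reason string, the hybrid system also stays explainable in a way a bare ML score is not.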
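The optimization-based rescheduling idea can be shown at toy scale with exhaustive search: pick the job order that minimizes total lateness. Real operations-research systems use MILP or constraint solvers rather than brute force, and the job data below is invented.

```python
# Toy rescheduling by exhaustive search over job orders.
# Real systems use MILP/constraint solvers; this only works for tiny inputs.
from itertools import permutations

def best_schedule(jobs):
    """jobs: list of (name, duration, due_date). Minimize total lateness."""
    def total_lateness(order):
        t, late = 0, 0
        for _name, duration, due in order:
            t += duration                 # job finishes at time t
            late += max(0, t - due)       # penalty if past its due date
        return late
    return min(permutations(jobs), key=total_lateness)

jobs = [("A", 3, 4), ("B", 2, 2), ("C", 1, 6)]
print([name for name, *_ in best_schedule(jobs)])  # -> ['B', 'A', 'C']
```

An AI-based prediction (say, of job durations) would feed the `duration` inputs, while the optimizer itself stays a transparent, logic-based component.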

Knowledge Graphs

  • Knowledge graphs provide another way to start emulating the implicit functions of the human mind: they combine the computing power of machines with the ability to represent meaning by putting data into context, similar to the way humans connect pieces of information to reach a conclusion. They are used in the Alexa and Siri voice assistants and in Google Search, and they are starting to be applied in industries such as pharmaceuticals, chemicals R&D, and oil and gas
  • Example: an accelerated, better way of doing R&D. By leveraging a data pipeline that covers both internal and external sources (e.g. patents), researchers can start building a hypothesis from a much bigger and broader input than any human could possibly read or hold in mind. To help the researcher even further, AI could be used to infer new knowledge from what is already in the graph.
  • Opportunity for explainable AI: in some, but not all, cases, adding knowledge graphs to a combination of ML and rules-based systems can help companies explain why an AI made a particular decision, helping resolve serious social, legal, and ethical concerns. By looking at the knowledge graph and the rules, you can explain a decision such as why a loan was turned down: the applicant's credit history was bad, their revenue was insufficient, and so forth
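A minimal sketch of that loan example, with an invented knowledge graph of subject-predicate-object triples and an invented denial rule, shows how the explanation falls directly out of the graph:

```python
# Hypothetical sketch: a knowledge graph as subject-predicate-object
# triples, plus one rule that yields a human-readable explanation.
# The graph contents and the denial rule are invented for illustration.

graph = {
    ("alice", "has_credit_rating", "poor"),
    ("alice", "has_annual_revenue", "low"),
    ("bob",   "has_credit_rating", "good"),
}

def explain_loan_decision(person):
    # Rule: deny if any negative fact is attached to the applicant.
    reasons = sorted(f"{pred.replace('_', ' ')} is {obj}"
                     for subj, pred, obj in graph
                     if subj == person and obj in {"poor", "low"})
    return "denied: " + " and ".join(reasons) if reasons else "approved"

print(explain_loan_decision("alice"))
print(explain_loan_decision("bob"))
```

Because the decision is read off explicit facts and an explicit rule, the explanation is a by-product of the representation rather than a post-hoc rationalization.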

Decision Intelligence

  • New types of approaches, including those from the social sciences, may need to be introduced into ML models. That’s where decision intelligence comes in. The term, which made it into Gartner’s 2020 hype cycle, refers to an emerging engineering discipline that augments data science with theory from social science, decision theory and managerial science, aiming to provide a framework of best practices for organizational decision-making and a process for applying machine learning at scale. Gartner has developed a Decision Intelligence Model to help business executives identify and accommodate uncertainty factors and evaluate the contributing decision-modeling techniques.

Transformer-based Sequence Models

  • According to Reza Khorshidi, transformer-based neural sequence models, which have driven tremendous advances in natural language processing, have the best opportunity for success. If these models can be tweaked to accommodate the multimodal nature of data, “it will have the ability to go beyond language, beyond health, beyond finance and pretty much cover every real-world data generating process,” he says, helping to transform business as we know it.
  • Example: Electronic health records are sequences of mixed-type data such as diagnoses, medications, measurements, interventions and more that happen in irregular intervals and are routinely collected by health systems.
  • The breakthrough is tied to the ability to build complete electronic health records that include what health systems have been routinely collecting, as well as the social, economic, environmental and lifestyle data that have been shown to be important (and predictive) for health outcomes. A sequence model’s ability to learn the key patterns and dependencies underlying such complex sequences will let health systems anticipate things before they happen and intervene when needed. “This can enable better, cheaper, faster processes, and ultimately pave the way towards redesigning and re-imagining the system,” Khorshidi says. The same approach could also be applied to other sectors, such as finance and retail
  • Dealing with sequential data that is mixed-type, multimodal and arrives at irregular intervals gives machines the ability to handle virtually any sort of data
  • “If not just medicine but other industry sectors want to move to a world of high-dimensional data in which the data is inputted in a messy way, there are not many solutions out there,” says Khorshidi. “You need to settle on some sort of feature engineering or settle for models that can deal with data as they arrive sequentially. And that’s why I think transformer-based architectures have higher odds of success.”
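The preprocessing step that makes such mixed-type, irregularly timed records digestible for a transformer can be sketched as a simple tokenization: each event becomes a discrete token, and the irregular gaps between events become tokens too. The vocabulary and gap-bucketing scheme below are invented for illustration.

```python
# Hypothetical sketch: turn mixed-type, irregularly timed health events
# into a flat token sequence, the usual first step before a transformer.
# Token vocabulary and gap buckets are invented for illustration.

def tokenize_events(events):
    """events: list of (days_since_previous_event, kind, value) tuples."""
    tokens = []
    for gap, kind, value in events:
        # Encode irregular timing as coarse gap-bucket tokens.
        if gap > 0:
            tokens.append("GAP_LONG" if gap > 30 else "GAP_SHORT")
        tokens.append(f"{kind.upper()}:{value}")
    return tokens

record = [(0, "diagnosis", "E11"),
          (7, "medication", "metformin"),
          (90, "measurement", "hba1c_high")]
print(tokenize_events(record))
```

Once diagnoses, medications and measurements share one token space, a standard transformer can model the whole record the way a language model models text.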

“Automation is the lowest hanging fruit. We should use AI to reimagine the power of the existing base of employees and strengthen businesses by doing things differently.”