Don’t Blame the ‘Brains’ for AI Bottlenecks — It Could Be the Lack of ‘Hands’

Artificial Intelligence (AI) is popping up everywhere. Intelligent systems are being used in more and more places and are quickly becoming smarter. However, AI development still faces a significant bottleneck, caused not by the intelligence, the ‘brains’, of the systems, but by the fact that AI also needs ‘hands’ to get things done.

Jouko Ahvenainen
Prifina
Feb 22, 2021


Photo source: Wikipedia.

AI has become a very popular buzzword over the last five years. Most company management groups and boards want to see some level of AI development in their organizations. Unfortunately, expectations and actual use cases are not always in line. Yet the biggest problem is not a lack of sophisticated machine learning (ML) and AI models that analyze data, handle tasks, and make decisions.

Let’s take a simplified AI task. A system collects data, analyzes that data, and uses it to draw conclusions and make decisions. The results are sent along for operative use. If a system is built to work around AI, like a self-driving car, the capability to analyze the data and make decisions — the “brain” of the system — can create a bottleneck. But most systems are different.

Take automating insurance claim processing with AI, for instance. This procedure follows the same steps as our simplified AI task above, but here the interactions with other systems are much more complex:

  1. A policyholder fills out a claim, probably a web form but possibly a paper form. They may also submit supporting documents, such as receipts, a police report, or a medical report. Converting all of these documents to a digital format may require OCR (Optical Character Recognition) and NLP (Natural Language Processing).
  2. The insurance company may collect data from other sources. For example, it can assess a person’s insurance history using information from a national database, credit rating data, criminal records, and data from other similar incidents. All of this data can be used to verify that information in the claim makes sense, is in line with other data sources, is within a statistical margin of expected behavior, and is not fraudulent.
  3. Then the system analyzes the data and makes a decision. The decision can be to pay a certain sum, not pay, or send the case onward for further investigation.
  4. When a decision has been made, the system must then send a letter or email to the policyholder, store the decision and all documents, start the payment process, and inform third parties, like the national insurance database, a health care provider, other parties in the incident, and police.
  5. After this, the policyholder might not be happy with the decision and can appeal, which triggers a new round of the process.
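The five steps above can be sketched as a simple pipeline. All function names, field names, and the decision rule below are hypothetical, invented purely to illustrate how small the “brain” (step 3) is compared with the surrounding “hands”:

```python
# Hypothetical sketch of the claim-handling pipeline described above.
# In a real system, each stage would call OCR/NLP services, external
# databases, payment systems, and messaging systems.

def digitize(documents):
    """Step 1: convert forms and attachments into structured data."""
    return {"claimed_amount": documents.get("claimed_amount", 0)}

def enrich(claim):
    """Step 2: pull history, credit, and incident data from external sources."""
    claim["history_flags"] = 0  # would come from national databases, etc.
    return claim

def decide(claim):
    """Step 3: the 'brain' - pay, or escalate for manual investigation."""
    if claim["history_flags"] > 0:
        return "investigate"
    return "pay" if claim["claimed_amount"] <= 5000 else "investigate"

def act(decision):
    """Steps 4-5: notify the policyholder, archive documents, start payment."""
    follow_up = "payment" if decision == "pay" else "case-review"
    return f"notify+archive+{follow_up}"

claim = enrich(digitize({"claimed_amount": 1200}))
print(act(decide(claim)))  # notify+archive+payment
```

Only `decide` contains any decision-making; everything else is plumbing, which is exactly where most of the real implementation effort goes.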

In this example, we can see that data analysis and decision-making are a small part of the overall process. Other requirements take up most of the work, especially collecting data from several sources, formatting it, entering decision data into other systems, and triggering actions in different systems. What makes this even more complex is that the data typically comes in many different formats and may contain inaccurate or incomplete information. Even a data value of “null” needs explicit handling; “null” is different from “zero” and, depending on the data set, may or may not carry meaning. Many handlers are needed.
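The null-versus-zero distinction is easy to get wrong in code. A minimal Python sketch, with a hypothetical deductible field: a deductible of 0 is valid data (no deductible), while `None` means the field was never filled in and the claim cannot be computed automatically:

```python
# "null" (missing) and 0 (a real value) must be handled differently.

def payable_amount(claimed, deductible):
    if deductible is None:
        # Missing data: do not silently treat null as zero;
        # route the claim to manual handling instead.
        raise ValueError("deductible missing - route claim to manual handling")
    return max(claimed - deductible, 0)

print(payable_amount(1000, 0))    # 1000: a zero deductible is valid data
print(payable_amount(1000, 200))  # 800
# payable_amount(1000, None) raises, instead of quietly paying too much
```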

One of my own companies implemented this kind of system several years ago. Although it was a digitally advanced insurance company and environment, there was still a lot of work to do. A typical rule of thumb in the data business is that 60% to 80% of the work relates to pre-processing data. This is the reality when you try to implement AI in any enterprise that already has many existing systems, some of which can be quite old-fashioned. Just think of SAP, NetSuite, and links to banking systems.

We can even imagine a more modern solution that collects data from various wearable devices (Apple Watch, Fitbit, Withings, Garmin, Oura, etc.) in one place and converts it into a format on which you could build ML/AI solutions. Just collecting all that data presents significant challenges. Open APIs are still not universal, and even where an API exists and is well structured, the quality of the data it returns can vary from one source to another.
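The normalization problem looks roughly like this in practice: each vendor returns the same underlying measurement in a different shape, and a “hands” layer maps them all onto one schema. The payloads and field names below are invented for illustration; real vendor APIs differ in field names, units, and timestamp formats:

```python
# Hypothetical sketch: normalizing step counts from two differently-shaped
# API payloads into one common schema.

def normalize(source, payload):
    if source == "vendor_a":
        # e.g. {"steps": 8500, "day": "2021-02-22"}
        return {"date": payload["day"], "steps": payload["steps"]}
    if source == "vendor_b":
        # e.g. {"activity": {"stepCount": "9100"}, "date": "2021-02-22"}
        # note: the count arrives as a string and needs conversion
        return {"date": payload["date"],
                "steps": int(payload["activity"]["stepCount"])}
    raise ValueError(f"unknown source: {source}")

records = [
    normalize("vendor_a", {"steps": 8500, "day": "2021-02-22"}),
    normalize("vendor_b", {"activity": {"stepCount": "9100"},
                           "date": "2021-02-22"}),
]
print(records)
```

Every new device adds another branch like these, which is why this layer, not the model on top of it, is where the engineering effort accumulates.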

A term I’ve come to like is ‘AI hands’. It refers to tools that collect data from many old and new systems, bring it into a common format in one place, and then make the processing results available for operative use in other systems. Companies often forget or ignore the development of ‘hands’, perhaps because it is fancier to talk about the latest innovations for the ‘brains’. As always, great thinking is rarely enough; we must collect and organize information before we can execute our grand visions, whatever they may be.

In practice, these AI ‘hands’ are like software robots (RPA) that can work with different systems and devices. They include additional software components (e.g., OCR, NLP, data cleaning, APIs) to acquire data and trigger actions (e.g., send emails, start a payment, start a delivery). Other useful tools include webhooks that can trigger background tasks in a serverless environment, such as verifying data or running NLP. These kinds of tools allow us to work with a vast number of different systems and formats.
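One way to picture this is as a small dispatcher that routes incoming webhook events to background handlers. The event names and handler bodies below are invented; in a real system each handler would queue OCR, data verification, or a payment process:

```python
# Toy event dispatcher: each incoming webhook event type is routed to a
# registered handler, the way serverless platforms map events to functions.

HANDLERS = {}

def handler(event_type):
    """Register a function as the handler for one event type."""
    def register(fn):
        HANDLERS[event_type] = fn
        return fn
    return register

@handler("claim.submitted")
def run_ocr(event):
    return f"OCR queued for claim {event['claim_id']}"

@handler("claim.approved")
def start_payment(event):
    return f"payment started for claim {event['claim_id']}"

def dispatch(event):
    fn = HANDLERS.get(event["type"])
    if fn is None:
        return "no handler - log and ignore"
    return fn(event)

print(dispatch({"type": "claim.submitted", "claim_id": "A-17"}))
# OCR queued for claim A-17
```

Adding a new ‘hand’ is then just registering another handler, without touching the ‘brain’ that makes the decisions.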

Using open source is often the best way to support a variety of needs, from small and rare systems to major ones. There are so many data formats, and so much unformatted data, that no single company can cover them all in its proprietary systems; here, open source is the only realistic option. Both the ‘hands’ and the ‘brains’ should be built on commonly used, widely available programming languages (such as Python) so that they can work together and take advantage of open source components.

To advance the use of AI and ML, we need more and better ‘hands’ for AI, and management groups must invest in these ‘hands’ if they want to implement and utilize AI. It is the same with consumer services: someone must offer solutions where data is available in a usable format, along with tools that generate results in real use cases. In last year’s Gartner Hype Cycle, many AI solutions sat at the peak of the hype curve. AI ‘hands’ are needed to turn that hype into productivity.

Connect With Us and Stay in Touch

Prifina allows you, as an individual, to bring your data from different devices and services into one place under your control. Then, you can take that data and power different applications that give you daily value, such as insights or recommendations, without sharing it with anyone.

You can follow us on Twitter, Medium, LinkedIn, and Facebook or listen to our podcast. Join our Facebook group Liberty. Equality. Data. where we share notes about Prifina’s progress. You can also explore our Github channel.

The article was first published on Disruptive Asia.
