Four Human Strengths and AI Weaknesses

Doomsday scenarios about artificial intelligence (AI) replacing workers across multiple industries have become increasingly common. For those who are curious about AI but don’t necessarily have a deep understanding of it, the fears are easy to understand: terms like big data, artificial intelligence, and machine learning are often thrown around in the media in the same breath as mass automation, job replacement, and tech revolution. Here at EruditeAI, we want to offer some reassurance that human beings still have a place in the workforce.

We strongly believe that AI has an important role in augmenting humans: when humans and machines pair up, they can achieve more than either one can separately. Many people remember, or have heard about, Deep Blue, the artificial intelligence system that beat chess grandmaster Garry Kasparov in 1997. What most people don’t know is that after that match, Kasparov came up with the idea of Advanced Chess, which became an early example of humans being strengthened and enhanced by AI. This approach is called Intelligence Augmentation (IA).

Advanced Chess is an example of humans and machines each bringing their unique strengths to the table to solve a problem. AI excels in a lot of domains, but here at EruditeAI, we are focused on areas where humans can be supported by the technology. For our first article, we’re opening a discussion on some areas that AI still struggles with, to help ease the anxiety around the AI Doomsday Singularity.

Today, most AI techniques revolve around the mathematical optimization of a cost function; the brain does nothing of the sort. We are more prone to mimicking and reusing the safe, proven approach that has worked before, which has both drawbacks and benefits.

Humans are the master of conversational skills

Understanding human language has been one of the main goals of AI development since computers were first theorized. There have been many ups and downs along the way, but most researchers agree that natural language is complex, and that AI still has a long way to go to fully understand it.

In the past few decades, significant improvements have been made in two of the large subdomains of AI in language: Natural Language Processing (NLP) and Natural Language Generation (NLG). In translation, for example, with real-time translation earpieces now being promised, we may not be too far from the ubiquitous universal translator seen in Star Trek. Likewise, AI systems have been shown writing news articles with minimal assistance and even generating whole movie scripts, tackling genuinely complicated natural language problems.

However, one aspect that needs significant improvement is conversation. Although AI systems can be programmed to understand concepts expressed in natural language, it will take time before they develop cohesive, coherent language of their own to communicate with humans. The average chatbot today requires a lot of what we call “hard-coding”, where engineers pre-program the bot to look for particular words or phrases and to reply with pre-selected responses that may be filled in with relevant details. While our ability to hard-code may become more sophisticated (e.g. a travel-agent chatbot may come to understand that “vacation” and “holiday” mean the same thing without a human having to tell it), chatbots are still not smart enough to do much beyond what the humans who programmed them taught them. Amazon’s Alexa Prize offers $2.5 million to the developers of an AI capable of holding a coherent and engaging twenty-minute conversation with a human. This is a key example of a big player acknowledging the problem with conversation and inviting researchers to tackle it in earnest. In the meantime, it’s clear that when it comes to real, deep conversation and communication, humans are still the undisputed champions.
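To make “hard-coding” concrete, here is a minimal sketch of how such a chatbot works under the hood. The rules, synonyms, and responses are all hypothetical; the point is that every behavior was written in by a human, and anything outside the rule table falls through to a canned apology.

```python
# A toy "hard-coded" chatbot: keyword matching with canned responses.
# Every rule below was authored by a human; the bot cannot respond
# to anything its programmers did not anticipate.

SYNONYMS = {"holiday": "vacation"}  # hand-written synonym table

RULES = {
    "vacation": "Great! Where would you like to travel?",
    "price": "Our packages start at $499.",
}

FALLBACK = "Sorry, I didn't understand that."

def normalize(word):
    """Lowercase a word, strip punctuation, and map synonyms."""
    word = word.lower().strip(".,!?")
    return SYNONYMS.get(word, word)

def reply(message):
    """Return the response for the first recognized keyword, if any."""
    for word in message.split():
        keyword = normalize(word)
        if keyword in RULES:
            return RULES[keyword]
    return FALLBACK

print(reply("I want to book a holiday!"))   # matched via the synonym table
print(reply("Tell me about the weather"))   # no rule matches -> fallback
```

Note that the bot only “understands” holiday/vacation because a human put that pair into `SYNONYMS`; it has no model of meaning at all.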

Humans can draw conclusions from small amounts of data

AI systems excel at processing large amounts of data. Cognitive science suggests that humans can hold only about 7 ± 2 items in working memory, so when trying to find trends in billions of data points, we need mathematical tools, with AI systems being a prime example. A recent article demonstrated that AI techniques could be applied to selecting startup companies for a venture capital (VC) portfolio; the result would have been the second-best-performing fund of all time. In other words, an AI outperformed VCs at decision-making.

When it comes to learning from small amounts of data, however, AI systems consistently underperform compared to humans. Most of the prominent AI techniques in use today rely on having a lot of data. In a new environment, or with a small dataset to learn from, it’s easy for AI systems to get confused or tricked, while humans adapt more easily, using generalization and prior life experience as a reference point. New research on one-shot learning and meta-learning may yield powerful tools for building effective AI systems with less data.

As an example, a machine learning classifier may learn to distinguish cats from dogs in pictures: after seeing hundreds, or ideally thousands, of examples of each, it learns which parts of a photo differentiate the two. That same algorithm, however, may have a hard time distinguishing lions from wolves, despite the fact that they look similar to cats and dogs. This is mostly because the classifier has learned to look for very specific features that indicate whether an image shows a cat or a dog, and it may not find those features in pictures of lions and wolves. Until modern AI systems develop a greater capacity for transfer learning, humans are very unlikely to be replaced in situations where data is scarce.
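This failure to transfer can be illustrated with a deliberately simplified sketch. All the numbers below are invented: we stand in for real images with a single made-up feature score, train a threshold classifier on “cats” and “dogs”, and then apply it to “lions”, whose feature values are shifted. The learned boundary no longer lines up with the species boundary.

```python
# Toy illustration of a classifier failing to transfer to a shifted domain.
# The 1-D "feature" (imagine an ear-pointiness score) is entirely synthetic.
import random

random.seed(0)

# Training domain: cats cluster near 0.2, dogs near 0.8.
cats = [random.gauss(0.2, 0.05) for _ in range(100)]
dogs = [random.gauss(0.8, 0.05) for _ in range(100)]

# "Learn" a decision boundary: the midpoint between the class means.
threshold = (sum(cats) / len(cats) + sum(dogs) / len(dogs)) / 2

def classify(x):
    return "cat-like" if x < threshold else "dog-like"

# In-domain, the classifier is near-perfect.
acc = sum(classify(x) == "cat-like" for x in cats) + \
      sum(classify(x) == "dog-like" for x in dogs)
print(f"cats/dogs accuracy: {acc / 200:.0%}")

# New domain: lions should be "cat-like", but their feature values
# sit near 0.55, on the wrong side of the learned threshold.
lions = [random.gauss(0.55, 0.05) for _ in range(100)]
hits = sum(classify(x) == "cat-like" for x in lions)
print(f"lions classified as cat-like: {hits / 100:.0%}")
```

A human shown a lion for the first time would lean on a lifetime of prior knowledge about big cats; the toy classifier above has only its threshold.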

Humans have creativity and imagination that is not always supported by data

AI models have become great at understanding complex systems and making predictions or classifications about new situations, given lots of training data, but they struggle to interpret the implications of their decisions (a capability called look-ahead). The machine learning models used in AI cannot understand things that lie outside the world of their input data, and as such, they have a hard time making decisions that take the big picture into account.

An example of a company entering a market full of uncertainty is Death Wish Coffee: they had a hunch that a segment of the market would enjoy very strong coffee despite the health risk, and with that assumption and the right branding, they were very successful. In such a scenario, AI might have helped them sell more of their current product to their current customers, but it could not have targeted a new segment whose potential it had no way of evaluating. Humans handle such scenarios much better, albeit with a high risk of failure.

Despite the progress researchers are making in helping AI systems make more complex decisions, it will take time before AI can fill in for humans who are asked to do something different from what they were hired for. AI cannot yet anticipate future outcomes in situations with a high level of uncertainty; for humans, that calls on creativity and imaginative thinking.

Humans have general intelligence — AI has narrow intelligence

AI systems tend to have a hard time readjusting their purpose to account for a changing situation. Technology companies do this all the time; they call it “pivoting”. Given that AI is not adept at generalizing, it makes sense that a system would struggle to reorient itself to solve an entirely different problem. Humans have been doing this since we learned to control fire, so we can feel quite confident in our ability here. AI is improving in this respect: reinforcement learning is a key research area helping AI learn to solve more complex problems by integrating information from external factors that influence the solution.

For example, suppose a new suntan lotion is launching in a store next week. Even though there is a large amount of relevant data on customer demand, a giant storm is expected to hit on the day of the launch. An AI model with no data from weather services cannot take this into account and could fail miserably, whereas a human can easily fold the new variable into the prediction to improve the decision. Humans can take in data from a seemingly infinite number of sources and integrate new, unstructured information into their predictions with ease. AI systems, lacking an understanding of many real-world phenomena, cannot project the impact of their decisions outside the scope of that understanding, and it is not currently feasible to feed a machine learning model all the information, related or unrelated, that bears on a particular problem. Judgment, common sense, and understanding remain, as far as we know, irreplaceable human traits.
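The suntan-lotion scenario can be sketched in a few lines. The sales figures and the 60% storm discount below are invented for illustration: a model fitted only on historical sales predicts from that history alone, while a human can apply a judgment call about a variable the model has never seen.

```python
# Hypothetical numbers: a demand model trained only on past launch-day
# sales cannot react to an incoming storm it has no data about, while
# a human can fold that knowledge in with a judgment call.

# Historical units sold on comparable launch days (toy data).
history = [120, 135, 128, 140, 132]

def model_forecast(history):
    # The model knows nothing beyond past sales, so it predicts the mean.
    return sum(history) / len(history)

def human_adjusted_forecast(base, storm_expected):
    # A human integrates the out-of-model signal: here, an assumed
    # 60% drop in foot traffic if a storm hits on launch day.
    return base * 0.4 if storm_expected else base

base = model_forecast(history)
print(f"model forecast: {base:.0f} units")
print(f"human-adjusted forecast (storm): {human_adjusted_forecast(base, True):.0f} units")
```

The point is not the specific adjustment factor but who supplies it: the storm never appears in the model’s inputs, so only the human can bring it into the forecast.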


In conclusion, AI cannot automate jobs that involve:

  • Conversational skills
  • Making decisions without thousands of data points as a reference
  • Making decisions while acknowledging their effect on the world
  • Making decisions that require a general understanding of the world

Most jobs involve a combination of these four skills, and as such, we feel humans still have a place. At EruditeAI, we always think about the potential weaknesses of AI models, and we are excited to be developing new AI solutions that support a hybrid approach: humans and AI working together. Our next step is to remodel current workflows to take advantage of Human-Centered Machine Learning so that we can vastly increase productivity in the workplace. That way, we can empower humans to reap the benefits of AI tools for the betterment of humankind.