Beware of Artificial Stupidity

The Risk of Misapplying Today’s AI

Thomas Euler
Digital Hills
12 min read · Jul 20, 2016


Photo Credit: Margot Wood on Flickr

The current public discourse on artificial intelligence is mostly divided between those who fear AI might extinguish humanity and those who hope it will save the world. Taking a more nuanced look at the technology’s current capabilities, I argue that there is a set of real issues right now. I summarize them as Artificial Stupidity.

These days you can read a lot about artificial intelligence. While opinions on what a world with truly intelligent machines will eventually look like differ, a fair share of smart people and experts think it is at least possible that AI carries severe, if not existential, risks. As Elon Musk recently put it at Code Conference: we might end up as mere pets to super-intelligent AIs.

While none of those claims seem implausible to me, it’s important to note that they would only materialize on a long-term timescale, if at all. That is because all of those scenarios depend on what is called artificial general intelligence, or AGI for short. The term describes an AI that has reached human-level intelligence (though definitions of what that actually means vary). Once we achieve AGI, that intelligence is likely to quickly surpass our minds’ cognitive abilities, because current AI development is largely based on self-learning and self-improving algorithms. Popular terms like machine learning, deep learning or neural networks all refer to such approaches. Yet we are most likely still decades or even centuries away from reaching AGI. Predictions vary greatly, ranging from less than ten to over a hundred years. Even if the most optimistic ones materialized, those dangers still wouldn’t be ‘right around the corner’.

That’s not to invalidate all the concerned voices: I think — and have stated so on several occasions — that it’s critically important to think about such scenarios and do everything we can to avoid those bad-to-worst-case outcomes. However, there are other, less discussed concerns regarding our increasing reliance on artificial intelligence today. I summarize them with the term Artificial Stupidity. It’s a set of issues that I believe deserves more attention.

The Artificial Stupidity Problem

My concern in brief: We increasingly cede control over more and more affairs and hand it to machines and algorithms that are marketed and sold to us as artificial intelligence. Given the current state of the technology, however, those ‘intelligent’ algorithms, assistants and tools are still very binary. They are one-dimensional in their ‘thinking’ and largely directed by the underlying assumptions implemented by their creators. Hence, if those assumptions turn out to be wrong, we might accumulate a lot of risk by basing our decisions on flawed, artificial ‘reasoning’ or, in short, stupidity.

Rain and Randomness

Let me illustrate this with an example. Imagine you’re planning a brief trip to Madrid. You ask your intelligent assistant (of the Amazon Alexa or Google Assistant variety) something along the lines of “what clothing should I pack for my trip?”. It answers that the weather in Madrid is going to be pretty good and thus advises you to pack only shorts and t-shirts.

From a technology perspective, that’s not a minor achievement. For it to work, the AI has to understand that ‘my trip’ refers to your Madrid visit — which it might, for instance, know because it scans your inbox, just as Google Now already does. It then needs to link your request for clothing advice to the weather forecast. Finally, it must connect the forecast to knowledge about the clothing that fits that kind of weather. All of this is, in fact, so involved that it might be beyond the skill of at least the upcoming generation of assistants. Regardless, it’s fair to assume that we are close to assistants capable of performing such tasks.
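To make that chain of steps concrete, here is a minimal, hypothetical sketch in Python. All data sources are stubbed out with hard-coded values; a real assistant would pull them from your inbox, a calendar and a weather API, and none of the function names refer to any actual product.

```python
# A hypothetical sketch of the assistant's reasoning chain. The helpers are
# stand-ins with hard-coded return values; a real assistant would get this data
# from inbox scanning, a calendar and a weather service.

def find_upcoming_trip():
    # Stub: pretend we parsed a booking confirmation from the user's inbox.
    return {"city": "Madrid", "dates": ("2016-07-22", "2016-07-24")}

def get_forecast(city, dates):
    # Stub: pretend we queried a weather service for the destination only.
    return {"avg_temp_c": 32, "rain_probability": 0.05}

def packing_advice():
    trip = find_upcoming_trip()
    forecast = get_forecast(trip["city"], trip["dates"])
    # The 'intelligence' boils down to a hand-written rule: warm and dry -> light clothes.
    if forecast["avg_temp_c"] > 25 and forecast["rain_probability"] < 0.2:
        return "The weather in " + trip["city"] + " looks great. Pack shorts and t-shirts."
    return "Pack a jacket and an umbrella."

print(packing_advice())
```

Note that even this toy version only ever looks at the destination. An unplanned layover never enters the calculation, which is exactly where the trouble below begins.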

Still, this case has significant potential for Artificial Stupidity (even if we leave the reliance on our known-to-be-shaky weather forecasts aside for a second!). Assume you go on said trip following your assistant’s advice regarding your wardrobe. Yet your flight is delayed, so you don’t manage to change planes in London. Since there are no further flights, you are forced to spend the evening in the English capital. Of course, it’s a cold, rainy night. Not only do you have to find a hotel last-minute (with the help of your assistant, to be sure), you also arrive there rather soggy.

What happened? If we generalize from our example, you could call it bad luck. I prefer randomness. It’s the kind of randomness that happens every day, in many circumstances and varieties. It is hard and often impossible to predict with any precision. Thus, the intelligent agents we’ll increasingly use won’t account for it in many instances, simply because they can’t with any reliability. (For an in-depth analysis of the problems randomness creates in our prediction-based systems, I recommend the works of Nassim Taleb.)

Artificial Stupidity Lesson I: We shouldn’t rely on AI in domains where randomness plays a major role (particularly when the potential cost is higher than a soaked shirt).

Decisions and Complexity

Let’s move on to another example. One of the areas of major interest to AI companies is decision-making in business. For obvious reasons: we have a long history of bad decisions and, in consequence, tons of wasted money and failed companies. Doesn’t it sound great to replace error-prone humans with intelligent algorithms that make good decisions all the time? It certainly does to many top executives around the globe. AI in business is, hence, a domain where money is to be earned. Alas, it might not only sound but actually be too good to be true. At least for the foreseeable future.

If you follow the current debate around AI and the automation of work, you will have read statements along the lines of: the first kinds of tasks to be fully automated are repetitive and highly standardizable, often involving the handling of (large amounts of) data. The underlying assumptions are obvious and, at least partially, valid: computers are more efficient at processing big chunks of data and less prone to the kind of errors we humans make (being careless etc.).

One industry that lends itself perfectly to such scenarios is insurance. McKinsey recently claimed that the workforce of Western European insurers could drop by 25% on average. So imagine we use artificial intelligence in fraud detection. Deploying a combination of machine learning, predictive analytics and pattern recognition — ideally in real time — to identify fraudulent claims might sound like a great scenario to many in the industry. Case in point: Shift Technology, a start-up from Paris active in that domain, just closed a $10 million Series A funding round.

But of course, detecting fraud is a highly complex domain. Many variables are involved (humans probably account for some of them unconsciously, by intuition). Fraudsters change their behavior regularly. What happens if an insurer’s AI doesn’t account for a specific variable its developers overlooked? The company might miss a fair amount of fraudulent claims. Or worse: what if the AI over-weights one, framing innocent customers as fraudsters? That might cost the company customers and its reputation while hurting innocent people in the process. (For now, the industry appears to be aware, but the AI hype is gaining momentum.)
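Here is a toy illustration of such a blind spot, on synthetic data and with scikit-learn (my own sketch, not how any insurer’s system actually works): fraud in the fabricated data depends on how quickly a claim is filed, but the model is only given the claim amount.

```python
# Synthetic illustration of a blind spot: the variable that carries the fraud
# signal (days until the claim is filed) is left out of the model's inputs.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
claim_amount = rng.normal(1_000, 300, n)      # feature the developers included
days_to_claim = rng.integers(1, 365, n)       # feature the developers overlooked
# Fabricated ground truth: fraud shows up as high-value claims filed very fast.
fraud = (days_to_claim < 10) & (claim_amount > 900)

model = LogisticRegression().fit(claim_amount.reshape(-1, 1), fraud)
flagged = model.predict(claim_amount.reshape(-1, 1))
print("actual fraud cases:", int(fraud.sum()), "| flagged by model:", int(flagged.sum()))
# Without the overlooked variable the model cannot separate fraud from honest
# claims and typically flags next to nothing.
```

In a real portfolio the missing variable would of course not be this obvious, which is precisely the problem.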

This scenario illustrates the second issue: currently, artificial intelligence is — at least partially — still based on algorithms that frame the problem at hand in a certain way from the outset. If the problem-framing isn’t spot-on and, ideally, universally applicable, there will likely be blind spots. Those carry risks. The more complex a problem is, the likelier it is that we didn’t frame it accurately.

Artificial Stupidity Lesson II: We shouldn’t rely on AI in complex domains as long as the algorithms are based on an incomplete understanding of the problem.

(It’s for this reason that I’m always wary when AI progress in games — impressive as it may be, e.g. Google’s AlphaGo — is used to argue that we will soon base our real-world decisions on AI. Games have a known set of rules that clearly define their boundaries. Reality does not!)

Cats From The Past

The increasingly popular approach of machine learning (and related techniques) tries to address this. The discipline’s basic approach is to create self-learning algorithms. Developers no longer tell the machine how to solve a problem. Instead, the algorithm is designed so that it learns to approximate the best solution by iteratively testing against data and thereby detecting patterns. For it to do so, the machine needs to be fed a lot of data.

A screenshot from Google’s Tensorflow Playground that illustrates the basic approach of machine learning algorithms.
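For readers who want to see the idea in code, here is a minimal sketch in the spirit of the Playground, using scikit-learn on a toy two-dimensional data set (an illustration, not production practice): nobody specifies what the pattern looks like; the small network approximates the boundary by iteratively adjusting its weights.

```python
# Minimal sketch of the self-learning approach: a small neural network learns
# to separate two intertwined classes purely from examples, no hand-coded rules.
from sklearn.datasets import make_circles
from sklearn.neural_network import MLPClassifier

X, y = make_circles(n_samples=1_000, noise=0.1, factor=0.4, random_state=0)

net = MLPClassifier(hidden_layer_sizes=(8, 8), max_iter=2_000, random_state=0)
net.fit(X, y)                        # iterative weight updates against the data
print("training accuracy:", round(net.score(X, y), 3))
```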

A typical use case for machine learning is object recognition in pictures or videos. In order to detect a cat, you have to feed the algorithm thousands upon thousands of pictures (usually labeled data, though Google already has algorithms that don’t rely on labels). After a huge number of iterations, the machine learns which visual features define a cat.

The advantage of this method: instead of us trying to describe, in definitive terms, all the relevant features in every possible situation — the Lesson II problem — the algorithm learns them by itself. While this is great when it comes to recognizing cats, the approach comes with its own set of problems.
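As a stand-in for the cat example, here is a compact sketch using scikit-learn’s built-in handwritten-digit images; the mechanics are the same: the classifier is handed labeled examples and infers the distinguishing visual features itself.

```python
# Supervised image classification on labeled examples (scikit-learn's digits
# data set standing in for the labeled cat photos in the text).
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

digits = load_digits()                              # 1,797 labeled 8x8 pixel images
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0)

clf = SVC(gamma=0.001).fit(X_train, y_train)        # learns the features from labeled examples
print("test accuracy:", round(clf.score(X_test, y_test), 3))
```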

Kate Crawford recently highlighted some of them in her excellent article Artificial Intelligence’s White Guy Problem. She describes how the data we feed to learning machines determines what they conclude, and how this can lead to biases and errors. For instance, Google’s photo app classified images of black people as gorillas. She writes:

“This is fundamentally a data problem. Algorithms learn by being fed certain images, often chosen by engineers, and the system builds a model of the world based on those images. If a system is trained on photos of people who are overwhelmingly white, it will have a harder time recognizing nonwhite faces.”

While the Google incident is certainly problematic, there are other applications of machine learning and predictive analytics where such effects would be even worse. Crawford writes about predictive policing and risk assessments in criminal sentencing, areas I also find highly problematic.

In all cases the same underlying problem applies. It’s (somewhat) well known to statisticians: using data from the past can be problematic when trying to predict the future, particularly in complex environments where outlier events can have a big impact. It is also problematic when our AIs derived their ‘knowledge’ from past data but the conclusions drawn from it suddenly no longer hold. Getting back to our fraud detection example from above: imagine organized fraudsters come up with a new technique that was not reflected in the data the AI was trained on. The AI likely won’t detect it. Keep in mind: as of today we need big data sets. Thus, a machine-learning-based AI would need a large number of such cases to properly detect the new technique.
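To make this tangible, here is one more synthetic sketch (again scikit-learn on fabricated claims data, not a real system): a detector trained on yesterday’s fraud pattern is confronted with a new technique that never appeared in its training data.

```python
# Illustrative sketch (synthetic data, scikit-learn): a detector trained on
# yesterday's fraud pattern misses a new technique absent from its training data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)

def claims(n, is_fraud):
    amount = rng.normal(1_000, 300, n)           # claim amount
    days = rng.integers(1, 365, n)               # days between policy start and claim
    return np.column_stack([amount, days]), is_fraud(amount, days)

def old_pattern(a, d):
    return (d < 10) & (a > 1_200)                # historical fraud: fast, high-value claims

def new_pattern(a, d):
    return a < 400                               # hypothetical new technique: many small claims

X_train, y_train = claims(10_000, old_pattern)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

X_old, y_old = claims(10_000, old_pattern)       # fresh claims, old fraud technique
X_new, y_new = claims(10_000, new_pattern)       # fresh claims, new fraud technique
print("share of old-style fraud caught:", model.predict(X_old)[y_old].mean())
print("share of new-style fraud caught:", model.predict(X_new)[y_new].mean())
# The model still catches most of the pattern it was trained on, but close to
# none of the new one - until a large number of labeled examples accumulates.
```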

Artificial Stupidity Lesson III: We shouldn’t rely on AI in domains where the data we trained it on can quickly be invalidated.

Artificial Stupidity is a Marketing and Application Problem

There are even more nuances to the problem; I aimed to illustrate at least the most critical ones. Having established the issue of Artificial Stupidity, let me explain the backdrop against which it is likely to occur.

First, it’s important to understand that Artificial Stupidity is not primarily a technology problem. The algorithms do what they do well, and they keep getting better at it. Instead, it’s a marketing and application problem.

Artificial intelligence is currently a hot topic. It’s being covered a lot — not only in tech media but also in business and general-interest media. The big tech companies are hiring a lot of talent and paying top dollar. Even undergraduate students are sought after. A lot of money is going into AI. Conversely, there are expectations that money can be earned soon. Big players and start-ups alike are betting big on AI as a marketable product in the not-so-distant future. And there is reason to believe they are right.

While the media does look critically at AI, the criticism is usually not geared towards the technology’s actual capabilities but towards the impact it might or might not have one day. Some of those concerns are likely valid, for instance the impact of automation on the job market. Yet the ones I mention here are largely overlooked. If you’re not well-versed in technology and data, it’s easy to conclude that AI is ready for the real world.

Given this overall situation, it’s fair to expect increased investments in AI technology across many industries. Because AI is a highly specialized field and talent is rare, most companies will likely rely on AI as a service. Which makes sense¹ if the AI is very good at what its label claims. I fear, however, this might often only be the case at first glance, at least for the next few years. In addition, some of the problems I highlighted will only be solvable once AI relies less on huge data sets from the past and develops a more holistic, human-like approach to thinking².

I’m convinced that AI will continue to improve over the coming decades. However, there will be a huge interest in monetizing it before then. Thus, expect many companies to hop on the AI bandwagon soon. The adoption of AI technology by people and non-tech companies alike is only going to increase further. We already see the trend starting today.

Which leads to the application side of the problem. If you condense my argument, it reads: Artificial Stupidity is the result of the misapplication of current AI technology — which is extremely good at dealing with complicated problems — to complex domains where it is technically ill-suited.

Alas, we have an incredibly bad track record at differentiating between the two. We often mistake one for the other.

This is aggravated by the fact that it’s often very hard to understand what our AIs are actually doing. In part, that’s by design: machine learning algorithms come up with their own solutions to problems (by pattern recognition etc.) and don’t necessarily explain to us how they arrive at them. It’s also due to the business model of AI companies: the algorithms are their product, so by definition they won’t be fully transparent about them. And finally, convenience and a lack of knowledge are to blame: when we buy and use software that we’re told does X, we tend to believe it actually does X. We hardly investigate how it achieves X. And even if we did, AI is way beyond the average person’s techno-literacy (™ Kevin Kelly).

A final issue: the problems I hint at are not necessarily visible straight away; they often only manifest over time. An AI that works based on flawed assumptions might do a decent job for a while, and the accumulated risks become evident only later. Initial results, therefore, shouldn’t lead us to conclusions too quickly.

The Final Verdict

Without a doubt, we are making incredible progress in AI as well as in related fields like natural language understanding and computer vision. I’m positive that we will see increasingly impressive use cases and skills mastered by AI. Yet we should be very careful and deliberate when it comes to choosing where we apply it. It’s mandatory to have a very good understanding of the technology and its actual capabilities. Otherwise, we might end up ceding control in areas where our artificially intelligent tools do more harm than good.

If you enjoyed this piece, I’d appreciate you clicking that little heart below :)

¹ At least for the short term; in the long term, the resulting dependency on third-party software for making critical business decisions might create its own set of problems.

² There are indeed companies working on this. Listen, for instance, to this podcast with the founders of Numenta. The London-based Improbable is working on it, too. Over here, Marc Andreessen & Ben Horowitz discuss the company as well as AIs that only need small data sets. Take their statements with a grain of salt, though: after all, they are invested in Improbable. Also, as the company’s background is in gaming, my argument regarding games might apply.

About me

I work, think, write and speak about digital business, technology and decentralized systems. If you’d like to connect, follow me here on Medium, or check out my website to find out more. I’m always glad to talk & interested in inspiring discussions. My analog residence is Munich, Germany.
