AI and driving: looking at the rear-view mirror

Mario Alemi
Aug 2

Rare events are unpredictable

In 2008, during the credit crunch, we heard people in finance say, “According to our model, something like that could happen only once in a billion years.” Clearly, the models had some problems.

Finance mainly used so-called “value-at-risk” models to estimate the probability that an asset’s value could drop below a certain threshold. But, as Benoit Mandelbrot had already pointed out in 1963 (The Variation of Certain Speculative Prices), asset prices are not easy to predict.

Prices are like salaries: most of them cluster around a certain value, but some are shamefully high. You walk into a meeting normally assuming that no one there earns 1,000 times as much as you do.

Still, even if you are a millionaire, you know that the probability of meeting a billionaire is low, but not zero. The exact probability would be hard to compute, but you know that betting your life on it would be risky… the fact that you have never met a billionaire in a meeting does not make it impossible.

Prices are the same. In the past five years you never saw the S&P index lose 30% of its value. But you know it can happen.

Many mathematical models, on the contrary, look only at the past few months or years, and because they have never seen something happen, they conclude that it cannot happen.
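
A toy calculation shows how much that matters. The sketch below is a minimal illustration in Python, with made-up numbers rather than market data: it assumes a “calm” market with daily returns of about 1% standard deviation, and compares how likely a 10% one-day drop looks under a thin-tailed Gaussian model versus a fat-tailed Student-t model with the same volatility.

```python
# Toy comparison: probability of an extreme one-day drop under a
# thin-tailed (Gaussian) vs a fat-tailed (Student-t) return model.
# The numbers are illustrative assumptions, not market data.
from scipy.stats import norm, t

daily_vol = 0.01   # assumed 1% daily standard deviation (a "calm" market)
crash = -0.10      # a 10% one-day drop

# Gaussian model calibrated to the calm window
p_gauss = norm.cdf(crash, loc=0, scale=daily_vol)

# Student-t with 3 degrees of freedom, rescaled to the same 1% volatility
# (the variance of a standard t with nu degrees of freedom is nu / (nu - 2))
nu = 3
scale = daily_vol / (nu / (nu - 2)) ** 0.5
p_fat = t.cdf(crash / scale, df=nu)

print(f"P(10% drop), Gaussian : {p_gauss:.1e}")  # ~1e-23: "once in a billion years" territory
print(f"P(10% drop), Student-t: {p_fat:.1e}")    # ~1e-4: rare, but plausible within decades of trading
```

Same volatility, radically different tails: one model calls the crash practically impossible, the other calls it merely rare.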

Artificial Intelligence

Mind that those value-at-risk models are rational agents acting, given observations and intrinsic uncertainties, to achieve the best expected outcomes. In other words, they are Artificial Intelligence.

Although wrong (of course, no model is perfect), value-at-risk models were useful. They allowed financial institutions all over the world to analyse huge amounts of data and to produce an estimate of what could happen.

Maybe the real problem was that, because of the (relative) mathematical complexity and the staggering amount of data analysed, (almost) everyone in finance relied blindly on value-at-risk. In 2008, the Basel Committee on Banking Supervision still accepted it for modelling prices.

All that led banks to underestimate the probability of default for all the assets in their portfolios.

Deep Learning

Enter Deep Learning, a branch of Artificial Intelligence. Instead of asset prices, today’s artificial neural networks analyse written language, videos or audio, most of it produced during the past 20 years.

(Note that language shares many mathematical properties with prices and wages. Indeed, the same function, a power law, was rediscovered several times to model wages (Pareto), word frequencies (Zipf) and prices (Mandelbrot).)
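
As a quick illustration of the Zipf part of that remark, here is a minimal sketch in Python (the corpus file name is a placeholder; any reasonably long plain text will do): count word frequencies, sort them, and the rank-frequency relation comes out close to a power law, the same family of curves Pareto fitted to wages and Mandelbrot to prices.

```python
# Minimal Zipf check: word frequencies should roughly follow
# frequency ~ 1 / rank**alpha, i.e. a power law.
# "corpus.txt" is a placeholder: any long plain-text file will do.
import re
from collections import Counter
import numpy as np

text = open("corpus.txt", encoding="utf-8").read()

words = re.findall(r"[a-z']+", text.lower())
freqs = np.array(sorted(Counter(words).values(), reverse=True), dtype=float)
ranks = np.arange(1, len(freqs) + 1)

# Slope of log(frequency) vs log(rank); Zipf's law predicts roughly -1
slope, intercept = np.polyfit(np.log(ranks), np.log(freqs), 1)
print(f"log-log slope: {slope:.2f} (Zipf's law predicts about -1)")
```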

Neural networks, like value-at-risk models, cannot imagine something new, something outside the current paradigm. It never happened? Then it cannot happen. They merely look in the rear-view mirror and, with a (relatively) complex mathematical model, predict the future.
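
A small experiment makes the point concrete. The sketch below is a toy set-up with scikit-learn and synthetic 2-D data (nothing to do with any real driving or trading system): a small network is trained on two well-separated clusters and is then asked about a point far outside anything it has seen. It does not answer “I have no idea”; it typically answers with near-total confidence.

```python
# Toy out-of-distribution demo: a classifier trained on two clusters
# confidently labels a point unlike anything in its training data.
# Synthetic data, purely illustrative.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# Training data: two Gaussian blobs, around (0, 0) and (5, 5)
X = np.vstack([rng.normal(0, 1, size=(200, 2)),
               rng.normal(5, 1, size=(200, 2))])
y = np.array([0] * 200 + [1] * 200)

clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
clf.fit(X, y)

# A point far from both clusters: the "airplane on the highway"
weird_point = np.array([[60.0, -40.0]])
print(clf.predict_proba(weird_point))  # typically close to [1, 0] or [0, 1]: confident, not cautious
```

The model has no notion of “this is unlike anything I was trained on”; it simply extends the rear-view mirror forward.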

But while humans know that the ways of the Lord are infinite, software, which learns only from the statistics of the data it has crunched, does not. For it, the ways of the Lord are limited to the ones it has already travelled.

If an airplane lands on the highway, humans understand what’s happening even if it’s the first time they see something similar. A neural network doesn’t, unless it has been fed with that example. And the same goes for a cow on the road, or a bridge collapsing. Would you trust a self-driving car to deal with the road in front of you collapsing?

Neural networks can ape human knowledge, but they are unable to understand, much less imagine, rare events: the black swans.

Human beings, on the contrary, particularly the smart ones, are able to learn from unexpected events: to embed them in their ontological model, even if that means changing the model itself.

Alexander Fleming, the discoverer of penicillin, came back from his holidays and found a bacterial culture plate with an open lid. He observed that this plate had fewer bacteria than the others. He decided to investigate why the bacteria did not reproduce there, understood the cause, fought (as usual) with the establishment, and many years later humanity had a powerful antibiotic.

The history of science is full of such anecdotes.

The history of Artificial Intelligence is not, and never will be. Imagine a neural network analysing Alexander Fleming’s samples:

Found plate with open lid.

Trash plate

Goodbye penicillin.
