# Art and AI

We have great expectations of Artificial Intelligence (AI). Those expectations are based on the ways we have used the scientific method to make sense of the world. One of the basic ideas of science is that the laws of the (material) universe can only be meaningfully understood through quantified measurement. Numerical terms are needed, not just words and pictures. The belief was that instead of ordinary sentences we must use mathematical equations and algorithms, as in: if a = b and b = c, then a = c. That makes sense. But if we replace the letters with semantic concepts, as in: if cats = pets and pets = dogs, then cats = dogs, the whole thing changes. The inference no longer makes any sense, because “cats are pets” and “dogs are pets” assert membership in a category, not equality. Not all events or relations between observations can be reduced to a mathematical description.

Ideas can have very different meanings; the context matters.

The values of a system’s variables at a given starting time are called the initial conditions for that system. The Newtonian, deterministic claim is that for any given algorithm, the same initial conditions will always produce an identical outcome. On this view, life is like a film that can be run forwards or backwards in time.
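The deterministic claim can be sketched in a few lines of code. The update rule below is a hypothetical toy “law”, chosen purely for illustration, not drawn from the text: running it twice from the same starting value produces identical trajectories, every time.

```python
# A deterministic update rule: the same initial conditions
# always yield the same trajectory, run after run.
def step(x):
    return 0.5 * x + 1.0  # a hypothetical toy "law of nature"

def trajectory(x0, steps):
    """Iterate the rule from initial condition x0."""
    xs = [x0]
    for _ in range(steps):
        xs.append(step(xs[-1]))
    return xs

run_a = trajectory(8.0, 10)
run_b = trajectory(8.0, 10)
print(run_a == run_b)  # True: identical start, identical outcome
```

The same property holds for any deterministic program: the "film" can be replayed exactly, which is precisely the assumption the rest of the essay puts under pressure.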

A computer system is, and for the foreseeable future will remain, a “discrete system”. In other words, it is a “finite state” machine: it has a finite number of states it can be in, and it works with a finite number of digits. Because computer algorithms are often used to model not only other digital, discrete systems but also analog, continuously variable systems, methods have been developed to represent these real-world, non-discrete systems as discrete ones. One such method is sampling a continuous signal at discrete time intervals. We reduce the world to fit the models we have made and the tools we have created. The method is called reductionism.
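Sampling can be sketched in a few lines. This is a minimal illustration; the sine-wave signal and the rate of eight samples per second are arbitrary choices, not anything specified in the text:

```python
import math

def sample(signal, duration, rate):
    """Reduce a continuous-time signal to values taken
    at discrete intervals of 1/rate seconds."""
    dt = 1.0 / rate
    n = int(duration * rate)
    return [signal(i * dt) for i in range(n)]

# A continuous 1 Hz sine wave, reduced to 8 discrete samples:
samples = sample(lambda t: math.sin(2 * math.pi * t), duration=1.0, rate=8)
print(len(samples))  # 8 numbers now stand in for the whole continuous wave
```

Everything between the sampling instants is discarded: the discrete list is the reduction the paragraph describes.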

This is where the problems begin.

We have learned that no real measurement is infinitely precise. All measurements necessarily include a degree of uncertainty. That ever-present uncertainty arises from the fact that measuring devices can record results only with finite precision, using a finite number of digits. Something is always left out. To reach infinite precision, an instrument would have to display outputs with an infinite number of digits.
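The same finite-digit limit is built into the computer itself. Even a number as simple as decimal 0.1 has no exact binary representation, so a tiny error is present before any calculation begins. A quick illustration in Python:

```python
# Finite precision in practice: the machine stores numbers
# with a fixed number of binary digits, so decimal 0.1 is
# already an approximation before any arithmetic happens.
print(0.1 + 0.2)           # 0.30000000000000004, not 0.3
print(0.1 + 0.2 == 0.3)    # False
print(f"{0.1:.20f}")       # what the machine actually stores for 0.1
```

The discrepancy is tiny, roughly one part in ten quadrillion, which is exactly the kind of "inconsequential" difference the following paragraphs examine.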

By using very accurate devices, the level of physical uncertainty can often be made acceptable even for demanding practical purposes, such as space travel, but it can never be eliminated. It is important to note that the uncertainty in the outcome does not arise from randomness in the algorithms but from the lack of (infinite) accuracy in the numbers that the equation starts with, the initial conditions.

It used to be assumed that nearly perfect predictions were theoretically possible given more precise information. Better instruments and better algorithms would shrink the uncertainty in the initial conditions, and with it the imprecision of results and predictions. The lack of infinite precision is still thought to be a minor problem today. AI research, whether modeling the world or reverse engineering the human brain, is based on approximation: the belief that very small uncertainties don’t matter.

But what if they do?

Possibly the first clear explanation of a very different kind of understanding was given in the late nineteenth century by the French mathematician Henri Poincaré, a founder of dynamical systems theory, one of the early disciplines leading to what are now called the sciences of complexity. The new claim was that some systems follow very different laws. Approximation would produce false results, because the tiniest imprecision in the initial conditions could escalate over time. Two nearly indistinguishable sets of initial conditions for the same system would then result in two developments that differed massively from one another.

Poincaré was way ahead of his time. His thoughts received support much later, in 1961, when Edward Lorenz found, by accident, that computer models of the weather patterns he studied were subject to the same kind of very sensitive dependence on initial conditions.

James Gleick tells the story: one day in the winter of 1961, Lorenz wanted to examine one of his computer simulations more closely by running it again. To give the computer its starting information, the initial conditions, he typed in the numbers from an earlier printout. As he followed the results unfolding from the new run, Lorenz saw weather patterns diverging so rapidly from those of the earlier run that very soon all resemblance had disappeared. Yet the new run should have duplicated the old one exactly. Lorenz had copied the numbers into the machine himself. There were no mistakes. The program had not changed. The problem, he later realized, was in the numbers he had typed. Lorenz had entered rounded-off numbers, assuming that the difference, one part in a thousand, was inconsequential.

The approximation, the small numerical difference, seemed as inconsequential as a puff of wind in a whole weather system, like a butterfly flapping its wings in a huge forest.

But the tiniest puff of wind can make all the difference.
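Lorenz’s accident can be re-enacted in miniature. The sketch below is an assumption for illustration: it uses the logistic map, a standard one-line chaotic system, rather than Lorenz’s actual twelve-equation weather model, but it feeds in the very starting values from Gleick’s account, 0.506127 at full precision and 0.506 rounded off:

```python
# Two runs of a chaotic map, identical except for rounding
# in the initial condition.
def step(x):
    return 4.0 * x * (1.0 - x)  # the logistic map in its chaotic regime

x_full, x_rounded = 0.506127, 0.506   # the numbers from Gleick's account
gap = []
for _ in range(60):
    x_full = step(x_full)
    x_rounded = step(x_rounded)
    gap.append(abs(x_full - x_rounded))

print(f"initial difference: {0.506127 - 0.506:.6f}")
print(f"largest gap over 60 steps: {max(gap):.3f}")
```

Within a few dozen iterations the two trajectories no longer resemble each other at all, even though they began differing only in the fourth decimal place.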

This was the birth of the concept now called the “Butterfly Effect”. Poincaré may have been ahead of his time within the scientific community, but his timing was perfect for the international community of artists residing in Paris in 1907. His ideas had a profound impact on the conversations in the cafés of his hometown, and the interest in his concepts was a major influence on many painters and poets. Among them was Pablo Picasso, with his cubist paintings.

The British scientist and novelist C. P. Snow wrote in 1959 that the intellectual life of the whole of western society was split into two camps — science and arts — and that this divide forms a major hindrance in solving the world’s problems. We need informed imagination. We need algorithms and art!

Narratives and pictures may matter more than we think. Life is a non-discrete system. Evolution is not an algorithmic process. The human brain works through associations, not algorithms. The processes of life are contextual, continuously varying, unpredictable and complex. The philosopher E. F. Schumacher put it poetically: “The power of the Eye of the Heart, which produces insights, is vastly superior to the power of thought, which produces opinions.”

We have associated intelligence with reasoning. Perhaps it should be equally associated with creativity.