Delayed future

Cogito, ergo sum: “I think, therefore I am.” In his 1637 work “Discourse on the Method,” the French philosopher and mathematician René Descartes simply and elegantly established the relationship between the ability to “think” and belief in an absolute, true, and unquestionable “existence.” The act of thinking sets humans apart from every other living creature on the planet: we are able to abstract, deduce, reason, extrapolate, doubt, question, plan, and project.

Since Ancient Greece, thinking machines, humanoid or not, have appeared in myths and legends. Homer’s “Iliad” speaks of automatons made by the Greek god of craftsmen, Hephaestus (Vulcan in Roman mythology), while Chinese legends of the same period mention machines endowed with intelligence. Throughout its history, humanity has pondered the possibility of transferring to inanimate creations the ability to think (and thus to “exist”). One of the most famous examples is the 1818 novel by English writer Mary Shelley about a scientist, Victor Frankenstein, who gives life to (and then rejects) a monstrous creature assembled from inert parts.

It was during the last century’s tumultuous forties that Alan Turing, a central figure in the history of computing, mathematically established the fundamental concepts underpinning modern computers and the field of Artificial Intelligence. Turing showed that a binary system, consisting of just two symbols such as “0” and “1,” could be manipulated by a machine to perform deductions previously restricted to the human brain.

About sixty years after the field was founded at a 1956 conference at Dartmouth College, New Hampshire, the world witnessed an explosion of machine learning techniques in products and services such as image recognition, voice recognition, translation, financial analysis, information search, and electronic fraud prevention. Early in the field’s development, however, the pioneers themselves underestimated the complexity of the problems to be solved, generating expectations that reality could not meet in the short term.

During the 1970s, in a world deeply shaken by serious economic problems, and again in the late 1980s, when general-purpose hardware replaced machines designed specifically for AI-related tasks, very few people paid attention to what was happening in the field. These were the so-called “AI winters,” periods in which an up-and-coming discipline was slow to deliver on its promises, causing investors (including governments) to lose patience and withdraw the funds needed for research and development.

Since the last decade of the 20th century and the beginning of the new millennium, the situation has gradually changed, and with it the perception of the true potential of intelligent computational techniques. Artificial Intelligence and its subfields are among the research areas that have benefited most from advances in processing speed, storage capacity, and the analysis of large data sets. Computers began to win chess, “Go,” and “Jeopardy!” matches against the games’ human champions. It became clear to the general public that the field was at last ready to fulfill the expectations that had surrounded it since its inception: creating business opportunities and meaningful changes in the world. Next week we’ll talk about these changes, led by the field of machine learning. See you then.