Roberto Reale
Aug 3

RERUM COGNOSCERE CAUSAS, knowing the causes of things, has been the motto of the London School of Economics and Political Science since 1922. Here, following the best tradition of Anglo-Saxon rationalism, great care is taken to distinguish between causal links and mere statistical correlations. In other words, if during summer both the GDP and the energy consumption for cooling happen to increase, we cannot simply conclude that it is air conditioners which boost the economy!
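The point can be made concrete with a minimal sketch (all numbers here are invented for illustration): two series, "GDP" and "cooling energy", are each driven by a common seasonal cause, temperature, yet neither causes the other. A correlation coefficient will nonetheless come out strongly positive.

```python
import numpy as np

# Hypothetical data: a shared seasonal driver (temperature) plus
# independent noise produces two series that are strongly correlated
# without any causal link between them.
rng = np.random.default_rng(0)
months = np.arange(120)                       # ten years of monthly data
temperature = 10 * np.sin(2 * np.pi * months / 12)

# Both quantities respond to temperature, not to each other.
cooling_energy = 2.0 * temperature + rng.normal(0, 1, months.size)
gdp = 0.5 * temperature + rng.normal(0, 1, months.size)

# Pearson correlation between the two derived series.
r = np.corrcoef(cooling_energy, gdp)[0, 1]
print(round(r, 2))
```

The correlation is high because of the common cause, which is exactly why it licenses no conclusion about air conditioners boosting the economy.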

The precise status of causal relations, however, is not free from ambiguity, especially in the social sciences. Whilst it is comparatively easy to say that a body accelerates because a force is applied to it, it is much more difficult to establish that employment falls when wages rise, as the neoclassical theories in labour economics predict. Indeed, the scientific literature offers plenty of evidence both supporting and contradicting this thesis.

And even when only the exact sciences are taken into account, a theory remains the object of an endless quest for perfection, apart from being always in danger of being proved false by the facts. Nicholas of Cusa speculated about an interminable approximation to the truth; George Box is credited with the witty aphorism that all models are wrong, though some are useful.

Chris Anderson, in an article in Wired, then asks whether the advent of algorithms capable of extracting hidden regularities from data does not make causal explanations superfluous after all. In other words, whether the general availability of ever greater amounts of data and of increasingly sophisticated algorithmic tools is going to push us towards a new cognitive paradigm.

The only objection that comes to my mind, in this scenario, is that data by itself is of little or no value without a preliminary (human) preparation for automatic processing and without the appropriate (again, human) choice of algorithmic tools. And those are actions that already imply a hypothesis, a theory to be validated or falsified by the data itself. Yet are we really justified in denying the birth of a data-driven reasoning, guided by pure regularity in raw data?

In his recently published book Novacene, James Lovelock attacks rationalism in favour of intuition. Is it just an attempt to rob Harari of his role as maître à penser of the global tech elite? Maybe it is. Or maybe all this new focus on complexity that self-organises into knowledge will disclose a deeper understanding of how our own intuition works. And will teach us how to build intuition into machines. Artificial intuition.

Eventual Consistency

A blog by Roberto Reale

