Caïn (by Henri Vidal, Jardin des Tuileries, Paris)

The inherent problem with technology…

Enrique Dans
Nov 20, 2016 · 5 min read


Technology defines contexts. The context we live in is, from all points of view, affected and conditioned in a fundamental way by the technology that surrounds us, and has been from the beginning: the development of the technology that allowed us to control fire modified the context in which people lived at that time and powerfully affected civilization.

Being able to control something that until then had been considered simply a natural phenomenon, which arose spontaneously and without any control — supposedly by some kind of divine intervention used to explain the inexplicable — made an infinite number of important and beneficial uses possible, among them cooking food, which radically changed living conditions.

At the same time, there were those who saw the possibility of using technology as an instrument for evil, to take advantage of it, to gain personal benefits or to impose themselves on others. Throughout history, both uses have coexisted, although some were progressively subjected to control: in modern societies, the use of fire is regulated and the law punishes those who misuse it.

It has taken a long time to reach this social consensus: years in which society internalized the use of this technology and tried to understand its possibilities, during which it was applied to new uses and new ways of making money, while also, of course, bringing an end to some existing technologies. From being restricted to the tribe’s shaman, fire was simplified to the point where it can now be obtained simply by flicking a lighter. Along the way, society imposed greater controls on its use based on the idea of the greatest good.

In one way or another, all technologies undergo a similar process, sometimes more slowly, sometimes more quickly, depending on the technology’s importance, its impact, and the consensus generated around the rules that govern it.

A technology invented to allow university students to hook up, hang out, or make new friends on campus has morphed into vast platforms that allow billions of people around the world to communicate with each other and inform themselves, and all this in just a few short years: until very recently, the social networks were considered the preserve of teenagers, and their use was banned in many professional environments.

Technology has advanced to incredible extremes, but the internalization of its use and its possibilities at the level of social consensus is still far from maturity. We live in a world in which some people have no presence on the social networks and know nothing about them, others consider them completely dispensable, still others accept their influence over everything they do, and a broad continuum in between sees one extreme as cave dwellers and the other as irrevocably alienated.

There is much about the social networks that is positive. Their ability to democratize publishing tools has changed our world, in some cases facilitating the overthrow of dictatorial regimes that controlled the media — what happened subsequently in those countries is another matter. At the same time, the global media map has been dramatically redefined.

But as such uses have developed, others have emerged. As the power of the traditional media has waned, some have taken advantage of the growth of the social networks to spread false news for economic or political purposes.

If I were to interrupt this piece to mourn the premature death, at the age of thirty-two, of Facebook creator Mark Zuckerberg due to cardiovascular complications, I’m sure most readers would know what I mean. Devouring kilos and kilos of live dolphin meat has its consequences.

The fact that the entry in which Zuckerberg announced measures to combat disinformation and the circulation of false news on Facebook appeared, for many users, alongside a fake piece of news announcing the death of Hugh Hefner to try to sell products for erectile dysfunction, shows just how serious the problem now is.

The BuzzFeed study showing that there was far more fake news on Facebook about Hillary Clinton during the recent US presidential campaign than there was genuine news is not surprising: we have seen similar things in elections in other countries, and unless or until measures are taken to prevent it, we will see more and more of them.

The steady erosion of reliable news sources — partly due to the failure of conventional media to adapt and partly due to its own fall from grace for having sold out to interests of all kinds — has been accompanied by the emergence of a new medium, the social networks, which are supposedly neutral, with an ambiguous, amorphous and vaguely defined system of values, offering no guarantees whatsoever and, in the absence of outside controls, telling users what they want to hear, however absurd.

Fighting the spread of false news (as opposed to satire, humor of whatever color, or freedom of expression) is necessary. As long as a significant percentage of the population sees Facebook as a platform where anything can be said and where removing anything is synonymous with censorship, it will be very difficult to do anything that has widespread support. What is required is to combine peer review, user reporting, social metrics and the like with technology such as machine learning to recognize patterns of viral diffusion, checking against databases, and natural language processing, together with, at least for the moment, supervision by humans.
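The hybrid approach described above, combining human signals with automated checks and routing doubtful cases to human supervisors, could be sketched as a simple scoring pipeline. Everything below is a hypothetical illustration: the field names, weights, and threshold are invented for the example and do not describe any platform's actual system.

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    user_reports: int         # peer reporting: users who flagged the post as false
    shares_per_hour: float    # social metric: diffusion velocity
    source_blacklisted: bool  # hit against a (hypothetical) database of known fake-news sources

def suspicion_score(post: Post) -> float:
    """Combine human and automated signals into a 0..1 suspicion score.

    The weights are illustrative, not tuned on real data; a production
    system would learn them from labeled examples.
    """
    score = 0.0
    if post.source_blacklisted:
        score += 0.5                                      # database check
    score += min(post.user_reports / 100, 1.0) * 0.3      # crowd reporting
    score += min(post.shares_per_hour / 1000, 1.0) * 0.2  # unusually viral diffusion
    return score

def needs_human_review(post: Post, threshold: float = 0.6) -> bool:
    """Posts above the threshold are routed to human supervisors
    rather than being removed automatically."""
    return suspicion_score(post) >= threshold
```

The design choice worth noting is that no single signal decides anything on its own: a blacklisted source alone, or virality alone, only raises the score, and the final call above the threshold remains with a human, which matches the article's "at least for the moment" caveat.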

This will not be easy, and we know perfectly well that those who exploit technology’s many weaknesses have enough incentives to keep ahead of the game.

But we must do something, because we are faced with the misuse of technology to undermine society, to corrupt democracy and to manipulate us for reasons that are rarely lawful.

If social networks are to become the way we learn about what is going on in the world, they will need to be equipped with control mechanisms similar to, or better than, the old ones. That said, the press was never able to prevent the creation and dissemination of false news either, but it was usually confined to the tabloids.

The solution is to label news with clear indications of its credibility, punishing publications with low ratings and preventing their mass dissemination, or creating karma systems that make clear the nature of what is being posted on our walls.
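The labelling and karma ideas could work something like the toy model below. The label names, penalty values, and reach numbers are all invented for illustration; only the mechanism (labels from a rating, karma that falls faster than it rises, reach throttled by karma) reflects the proposal in the text.

```python
def credibility_label(rating: float) -> str:
    """Map a publisher's 0..1 credibility rating to a reader-facing label.
    Thresholds are arbitrary example values."""
    if rating >= 0.8:
        return "verified source"
    if rating >= 0.5:
        return "unverified"
    return "low credibility"

def update_karma(karma: float, post_was_false: bool) -> float:
    """Reward accurate posts slightly and punish false ones heavily,
    so that repeat offenders quickly lose standing."""
    if post_was_false:
        return max(0.0, karma - 0.2)
    return min(1.0, karma + 0.02)

def dissemination_cap(karma: float, base_reach: int = 10_000) -> int:
    """Throttle mass dissemination in proportion to karma: a low-rated
    publisher's posts simply reach fewer feeds."""
    return int(base_reach * karma)
```

The asymmetry in `update_karma` (a false post costs ten accurate ones) is the point of a karma system as described here: false news stops being rewarded with virality, because reach is tied to a reputation that is slow to build and quick to lose.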

As things stand, false news not only goes unpunished, but is rewarded, attracting more followers, more likes and more virality. Changing that depends not only on the social networks and their managers: it depends on everyone.

In the final analysis, the social networks will have to continue feeding trash to those who want to consume it, but at least they will have some obligation to flag it up as trash.

The white supremacist ad that appeared on Twitter or the false news on Facebook are simply symptoms of a wider problem: the appearance of people willing to take advantage of the weaknesses of communication channels still in their infancy and whose defense systems are still not fully developed. The time has come to begin developing them.

(In Spanish, here)


Enrique Dans

Professor of Innovation at IE Business School and blogger (in English here and in Spanish at enriquedans.com)