The Algorithmic Power that shapes our lives

How aware are we of being unaware of how our life unfolds on a daily basis?

“A squirrel dying in your front yard may be more relevant to your interests right now than people dying in Africa.” Mark Zuckerberg

Let’s consider for instance subliminal stimuli. Wikipedia tells us that:

they are any sensory stimuli below an individual’s threshold for conscious perception

and

that they activate specific regions of the brain despite participants being unaware.
Visual stimuli may be quickly flashed before an individual can process them, or flashed and then masked, thereby interrupting the processing. Audio stimuli may be played below audible volumes or masked by other stimuli.

Here are some corporate examples of subliminal stimuli.

This creates a misrepresentation of external reality and therefore alters our decision-making process, and hence our behaviours.

We now need to place subliminal stimuli in the context of the hectic, hyper-connected society we currently live in.

Mobile is taking over and we live surrounded by screens. Just ten years ago it was hard to imagine that seeking, comparing and eventually booking a holiday package from a mobile device would ever be possible.

Planned obsolescence is the non-mantra that pushes us to buy a new 600-quid smart-thingy every 6–8 months, because the case is now lighter than it used to be, the memory has been expanded, or the screen performs 25% better.

To say something is “old” means you bought it six months ago.

Technology has extended the sense of what is possible in a way, and at a pace, that our average brain cannot cope with.

We then rush to catch up with technology, and this brings stark imbalances into our lives. The result is a world designed for early adopters. And this affects everybody, at every level of our society. It is extremely dangerous, as it inevitably spills over into the political sphere.

Innovation is a political act. Change can be good and can be bad. Technological innovation, as a tool, can serve good and bad causes. It is up to the humans who leverage it to decide which needs it should satisfy.

Technology is bringing about a higher level of complexity. Let’s consider for a second Net Neutrality, which, to a certain extent, subliminally, is at the core of this post. What if we surveyed passers-by on the high street, asking what Net Neutrality is about? Would you expect a high rate of correct answers? I am not so sure about the UK, the country I live in, but when it comes to Italy, where I was born, I can tell you that quite a substantial proportion of MPs know nothing about Net Neutrality. Without grasping its essence, it is quite difficult to form a political view on such a delicate and multifaceted matter, one which entails rights and freedoms.

Provided with a bit of context, the main question is who decides what to show and how and when.

Sometimes I wonder whether we are properly aware that, in a hyper-connected society such as this one, our behaviours are shaped by algorithms.

So when we talk about an algorithm, we refer to:

a formula for solving a problem.

Those who refine the highest-performing formulas of the most popular online platforms, and who define what goals a well-performing algorithm should pursue, acquire a political role. In this hyper-connected society, those who manage the few most populated platforms can decide, through their honed algorithms, what is relevant and what is not. And with so little time available, as we rush from one “like” to another, our daily life is dramatically shaped by the results these algorithms show us.

I decided to write this post after reading this brilliant piece by Zeynep Tufekci and also a couple of tweets by Salvatore Iaconesi, especially this one.

Zeynep notes that she follows largely the same people on Twitter and on Facebook. On the night of 13 August, after being overwhelmed by tweets about what was going on in Ferguson, she checked her Facebook feed, populated by those same contacts, and realised there was no trace of the live drama unfolding in the Missouri town.

The following morning: (emphasis mine)

This morning, though, my Facebook feed is also very heavily dominated by discussion of Ferguson. Many of those posts seem to have been written last night, but I didn’t see them then. Overnight, “edgerank” –or whatever Facebook’s filtering algorithm is called now — seems to have bubbled them up, probably as people engaged them more.
But I wonder: what if Ferguson had started to bubble, but there was no Twitter to catch on nationally? Would it ever make it through the algorithmic filtering on Facebook? Maybe, but with no transparency to the decisions, I cannot be sure.
Would Ferguson be buried in algorithmic censorship?

But what does it mean to bubble up in this context?

Eli Pariser, in his famous 2011 TED talk, later followed up by a book, argues that these personalised algorithms, designed to provide us with the most relevant information we seek, tend to produce a negative outcome: we end up surrounded by content and people aligned with our vision of the world. In this way we (un)consciously build up safe bubbles which lack conflict and diversity, narrowing our outlook on the world.

And this moves us very quickly toward a world in which the Internet is showing us what it thinks we want to see, but not necessarily what we need to see. (2011)
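To make the mechanism concrete, here is a deliberately naive sketch in Python. The weights, field names and numbers are invented purely for illustration; Facebook’s real ranking formula is secret and far more complex. The sketch simply shows how a feed that multiplies engagement by personal affinity can both let heavily engaged posts “bubble up” and quietly push unfamiliar topics out of sight.

```python
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    topic: str
    likes: int         # engagement the post has already collected
    hours_old: float   # how long ago it was published

def score(post: Post, my_affinity: dict) -> float:
    """Toy relevance score: engagement pushes a post up,
    affinity with its topic pushes it further up,
    and age slowly pulls it back down."""
    engagement = post.likes
    affinity = my_affinity.get(post.topic, 0.1)   # near zero for topics I never engage with
    decay = 1.0 / (1.0 + post.hours_old)
    return engagement * affinity * decay

# My past behaviour, reduced to a crude topic-affinity profile.
my_affinity = {"tech": 1.0, "rugby": 0.8, "ferguson": 0.05}

feed = [
    Post("friend_a", "ferguson", likes=40, hours_old=2),
    Post("friend_b", "tech",     likes=15, hours_old=1),
    Post("friend_c", "rugby",    likes=10, hours_old=3),
]

# The feed I actually see: highest score first.
for post in sorted(feed, key=lambda p: score(p, my_affinity), reverse=True):
    print(post.topic, round(score(post, my_affinity), 2))
```

In this toy model the Ferguson post sinks to the bottom despite having the most likes, because my profile shows little past affinity for the topic. Only if its engagement keeps growing much faster than everything else does it eventually overwhelm that low affinity and “bubble up”, which is roughly the pattern Zeynep describes seeing the following morning.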

I personally feel the need to stay close to conflict, to be surrounded not only by like-minded people, and to be challenged, because the more I doubt, the more I can keep my Siddharthian quest for Sense going. Over the last decade I have become a huge rugby fan, and I draw many leadership lessons from a game that is extremely physical and yet a display of integrity. One of them is precisely about conflict. Most of the time, a team manages to score points out of relentless conflict, out of fair physical friction. Translated into a healthy public and private debate, this means the ability to listen to opinions that differ from ours and to build on them. There is no single truth, but there are many perspectives on the same matter. Encouraging conflict also means empathy and respectfully argued dissent. Friction. There is no progress without it.

Furthermore, this scenario also affects the concept of modern, open leadership, at least the version I have been working on for a while (I will write about it soon). This continuous spiral of consensus and conflict-avoidance leads to a life surrounded by yes-men and yes-women. Within such a context there is no room for improvement, nor for the vital challenges that widen our horizons.

The ability to actively listen to dissent and to surround ourselves with smart associates needs to be cultivated.

Having delved into filter bubbles, we can now answer Zeynep’s pending question: what happened to her News Feed was so-called algorithmic filtering. In such a context, preserving net neutrality while feeding a proper debate seems absolutely vital.

I had been thinking about writing this piece for the last six days. I actually wrote half of it after waking up one night at 1 a.m., as some thoughts were buzzing in my head and I could not get back to sleep. During my research, the Twitter algorithm gently brought to my attention this thought-provoking piece by Peter Olsthoorn (which, by the way, I have already mentioned a few lines above). Digging further, I found out that he has actually written a book on this very topic: you can download the free PDF edition here or buy the conveniently cheap one.

Peter argues that the filter bubbles as described by Pariser kill serendipity,

as it deprives us more and more of encounters with unexpected sites, opinions and people. Online life becomes predictable indeed.

And what a boring thing a predictable life is!

However, Peter challenges Pariser:

We have been filtering for a long time already. I see this narrowing of information not as a problem, but as a logical reaction. With the advent of the internet, the number of available sources has increased enormously, as well as the speed of news.

What really strikes me about Peter’s article is its focus on how algorithms are shaped today by marketing-profiling needs. The more accurate and personalised the profile, the more the platform giants can charge their advertising clients. Our behaviours, to a certain extent, are therefore shaped by marketing-profiling needs. This sounds odd, but living in hectic urban spaces we are forced to rush from a train to an office, from one “like” to one retweet.

This experiment confirms my observation in a quite brutal way:

By liking everything, I turned Facebook into a place where there was nothing I liked. To be honest, I really didn’t like it. I didn’t like what I had done.

Since for many of us, because of work, fun or loneliness, social media platforms mirror and sometimes replace our offline human interactions, we should stop for a second and reflect on why we do what we do, and with whom.

Peter’s book focuses on Facebook’s algorithm, which up to 2012 was called EdgeRank. As he explained back in 2012:

(emphasis mine)

This machine behind the screen constantly evaluates your expressions and behavior en masse using numerous psychological, sociological, anthropological, and who knows what other variables from different sciences.
There is only one company that owns all this information about your preferences in relation to your friends combined with your real names and pictures; furthermore, this information is held alongside the equivalent information and preferences of another billion people. Just think, one billion people with a Facebook passport.
Only China and India have more identities registered in one governed space. Here arises the first virtual empire, unless Facebook keeps on making costly mistakes….

It is definitely a big deal. And this is one of the points that pushed me to write this piece in the first place: we are granting a huge amount of sensitive data to the owners and gatekeepers of a handful of densely populated online platforms. Are we aware of the consequences of all this?

but also:

Are we OK with Google, Facebook, Twitter, Apple and the like knowing exactly what our next “like” will be, what type of shoes we are going to buy, which political view we support? Are we fully aware of all the trade-offs this choice entails?

As public events with sensitive political implications erupt, we follow whatever the gatekeepers decide to show us, without even realising that we no longer get to decide which content we see.

In a dystopian way, I would say that our society is becoming algorithm-centred.

But I am a fan of dissent, which is why on Twitter I follow people I tend to disagree with, contrarians like Evgeny Morozov. I do not always agree with his provocative statements, but I surely cannot afford not to listen to his contrarian views and not to think them through.

The problem with these algorithms is their secrecy. In a society in which, to name a couple, Tesla and a certain taxi startup (which, by the way, I do not personally fancy, as it rides a “revolutionary” wave while acting as a Google trojan horse) are opening up their platforms, the former EdgeRank, Google’s PageRank and others remain secret. That is fine insofar as these are private businesses, but considering that we are talking about more than one billion people, there are social implications that somehow, someday, need to be addressed.

If we look at the following list, we realise how much data we provide in order to gain access to the platform:

The basics, from profile information: age, country, region, city; family background, family names; education; religion; occupation; hobbies; books, papers, magazines read; music preferences; TV program preferences; political opinions; online spending; brand and product interests; languages you understand; sexual preferences; travel and holidays.
Aggregated social data: your status; knowledge and skills; home and car class; estimated income and wealth; social position and that of friends; living and family situation; use of PC, gadgets, software etc.; when you are at home; places where you are out; what times you are online and with what intentions; good and bad days; when you’re horny.
Also to consider: pace of life; diseases and disorders; concentration cycle throughout the day; whether you are persistent or give up quickly; work behavior and effort; private browsing during work hours; preferences between text, photo and video; proceedings of contact; intensity of contact; attention to different relationships and friends; frequency and likelihood of new relationships; approach to individuals and businesses; choice of words and attitude; secret desires and fantasies; creativity; logical thinking; non-conscious and irrational behaviors; rational choices and the weight given to these; emotions in experiences and exposures; behavior in different emotions; degree of happiness.

As my North American friends might say, that is a hell of a lot of data!

All of these big corporations today possess a gigantic body of collective knowledge.

As Facebook has already disclosed, it runs behavioural-science experiments with or without our consent. And its data are more reliable and accurate than those held by universities or research centres.

Facebook could build all sorts of indicators for collective behavior to make economic and therefore also political predictions.

As you can see, the more we dig, the more complicated the situation turns out to be.

In another experiment, Luke O’Neill tested the Facebook algorithm by hiding, or essentially un-liking, everything that came into his feed.

Soon I was left with a news feed comprised only of things I had posted myself.

This experiment shows how the algorithm acts as a kind of parental guardian: it tells us what it thinks we would like to know and removes whatever, according to its parameters (our likes, comments and content), would not make us happy; hence sad news, like the reports coming from Ferguson, disappears. But what if we want to know?

(emphasis mine)

The reason for the split (Facebook/Twitter), as Digiday and others have pointed out, may be that Facebook’s algorithm is specifically designed to show you feel-good stories — ones that you’re more likely to share. Violence and strife are bad for virality. When the Washington Post’s Tim Herrera cataloged every post served to him in his Facebook feed, he discovered that he was shown only 29 percent of the total posts made by people in his network. Facebook hid plenty of stuff Herrera wasn’t interested in, but also plenty of stuff he thought he cared about — posts from two of his hometown newspapers as well as one of his favorite blogs (or one that “I thought was among my favorite blogs,” Facebook’s algorithm having placed a seed of doubt in Herrera’s mind).
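Purely as an illustration (the real News Feed logic is not public, and every function, threshold and number below is made up), the “parental guidance” behaviour described above can be pictured as a hard filter applied before any ranking: whatever is predicted to fall below a feel-good threshold never reaches the feed at all, no matter how newsworthy it is.

```python
def predicted_enjoyment(post: dict, my_likes: set) -> float:
    """Invented stand-in for whatever signal the platform uses:
    how likely I am, judging from my past likes, to respond positively."""
    sentiment = 0.2 if post["topic"] in {"ferguson", "war"} else 0.9  # grim topics score low
    familiarity = 1.0 if post["topic"] in my_likes else 0.3
    return sentiment * familiarity

def visible_feed(posts: list, my_likes: set, threshold: float = 0.25) -> list:
    """Posts below the threshold are not merely down-ranked: they are removed outright."""
    return [p for p in posts if predicted_enjoyment(p, my_likes) >= threshold]

my_likes = {"tech", "rugby"}
posts = [{"topic": "rugby"}, {"topic": "ferguson"}, {"topic": "tech"}]

print([p["topic"] for p in visible_feed(posts, my_likes)])  # ['rugby', 'tech']
```

The point of the threshold in this sketch is that filtering is not ranking: once a story falls below it, no amount of scrolling will surface it, which is exactly what makes the “but what if we want to know?” question so uncomfortable.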

Luke O’Neill then confirms my perception that these formulas act on our subconscious. That would be fine if we were aware of it and could freely decide whether to feed it or to ignore it. But there is currently no such debate.

The ways that we tailor our Facebook feeds are more subtle and subconscious than the ways we curate our Twitter streams — they’re based on what we like, and what the people that we like themselves like, and what Facebook thinks we’ll like before we know ourselves. It helps make Facebook a frustratingly slow source for breaking news. Twitter becomes the friend who lays it all out on the line, and Facebook is the one who holds back so as not to hurt your feelings.

The final outcome of the experiment is quite sad, considering that this logic rules billions of daily interactions among more than one billion people worldwide.

By telling Facebook that I didn’t want to see anything it was showing me, I thought it might try showing me something different, to understand me better. Instead it kept telling me what it thinks everybody wants to hear: good news. I’m not seeing much about Ferguson in my feed today. I’ve also noticed that a few of my friends are celebrating birthdays today. I think I’m going to go wish them a happy one. It’s what Facebook would want me to do.

I personally became annoyed when I realised that my Android smartphone tracks my movements, advising me not to run late for a meeting and showing the address of my flat, labelled HOME, as the first result when I look for directions on Google Maps. For this very reason, this article published on The Atlantic, which I read yesterday, resonated with my feeling of being trapped in a world of Google devices.

Google Now is the name of the application that tracks us “in order to provide us with the right information at the right time.” The article is quite insightful when it analyses self-deception as “a positive belief about the self that persists despite specific evidence to the contrary.”

A world of Google devices seems like a nightmare to an Italian who lived most of his life in a world of Berlusconi’s assets. Berlusconi owned, and still owns, all sorts of businesses: a football team, TV channels, insurance companies, banks, newspapers and lottery companies. It was a nightmare. It still is, though in a much mitigated way, as his political appeal, while still alive, is declining (fingers crossed, he is almost 80 years old!). I am starting to experience the same feeling with Google, which reminds me of the legendary cyclist Eddy Merckx, nicknamed “The Cannibal.” He wanted to win every race and, unlike another great such as Indurain, left almost nothing to his competitors. Google has so far spent at least $23 billion buying 145 companies. If we also include the diverse set of other projects it currently runs, its business ranges from thermostats to a mobile operating system, augmented-reality glasses, driverless cars and much more.

We love and enjoy the services of our favourite brands. Apple’s greatness has been the design of a consistently delightful user experience around an ecosystem of devices.
Twitter provides us with news before it actually becomes news.
Google helps answer all of our questions almost immediately.
Most of them have been able to build cosy environments, ecosystems that make us feel safe as well as emotionally attached…
But there is a dark side to all of this. It lies in the power derived from the amount of information, the so-called user-generated content, that we grant them. We feed them a wealth of personal data that enables them to sell us things we have shown appreciation for. It is a circle; you decide whether or not it is vicious.

These very same brands nowadays hold outrageous amounts of cash and actually think that banks are no longer necessary.

https://twitter.com/ValaAfshar/status/498541730011295744

Let me get back to my favourite one, Twitter, which is currently undergoing a process I call “facebookisation”, and which I find very annoying. And I am not the only one.

Basically, through various experiments it is trying to look like Facebook, and I believe this could harm it in the long run.

When an event becomes global, with clear political implications, all of these powerful platforms acquire a strategic power to orient the public debate. Nowadays news happens first on Twitter, then in real life, then on the websites of traditional media outlets.

By deciding what can and cannot be tweeted, the owner of the platform takes on political implications and risks being accused of censorship.

I am talking about the decision Twitter’s CEO took to suspend the accounts publishing the video of the brutal beheading of James Foley and, at the same time, to remove all the related tweets. It is a very delicate matter, and there are most likely more questions than answers. But this is the right moment to question ourselves and our institutions, and to try to put the human experience at the core of the quest for Sense.

The interests of the biggest Silicon Valley corporations, aided by technology, must not be allowed to shape our lives. We need to aim for a neutral environment that preserves our ability to make decisions, especially strategic ones, without bias. Marketing profiling through technology cannot be the Sense we are looking for. As this is the Openness Era, technology should serve to bridge social inequalities, empowering each individual, no matter her age, the continent she comes from, her gender and so on.

We need algorithms, but human (driven) ones.

In Zeynep’s own words:

(emphasis mine)

It’s a clear example of why “saving the Internet”, as it is often phrased, is not an abstract issue of concern only to nerds, Silicon Valley bosses, and a few NGOs. It’s why “algorithmic filtering” is not a vague concern.
It’s a clear example why net neutrality is a human rights issue; a free speech issue; and an issue of the voiceless being heard, on their own terms.