The Great Social Probe

nshcore · Published in catalystNetOrg · Nov 26, 2019

In our previous article, “The Long Hard Road To Web 3.0”, we left off by saying that Facebook has become the biggest social-experiment lab in the history of our species.

Let’s dive deeper into some of the shocking experiments Facebook has conducted on its users, the proverbial guinea pigs that we are.

1) Social Influence and Political Mobilisation

In 2010, just before the US midterm elections, Facebook researchers planted an “I voted” button at the top of users’ news feeds, along with information about their polling place. You could also see the names of friends who had clicked the button.

The researchers then checked public voting records to see which of the subjects voted. This seems benign enough but could Facebook in the future decide which friends to share your “I voted” status with to manipulate the outcome of democratic elections?

2) The Role of Social Networks in Information Diffusion

To find out how information spreads on Facebook, researchers randomly assigned 75 million URLs a “share” or “no-share” status. The links included anything from news articles to job offers.

Links with the “no-share” status wouldn’t appear in your friends’ news feeds at all. Facebook wanted to know whether posts it had censored without permission could still find their way to the rest of your social network.

3) Social Influence in Social Advertising

In this study, Facebook wanted to find out whether ads work better on you when your friends endorse them. It showed users two different types of ads, one carrying a fake endorsement from a friend and one without, and then measured how many clicks each got.

4) The Spread of Emotion via Facebook

Facebook was trying to find out if your emotional state affects your friends. They looked at one million users’ status updates, both positive and negative, and then looked at the positivity or negativity of the posts of those users’ 150 million friends.

The researchers found that the friends of the users with positive updates were suppressing their negative posts and vice versa. If you post something positive on Facebook, one out of every 100 friends (who wouldn’t have otherwise, according to the study) will do the same within 3 days.

5) Emotion Contagion

That earlier non-consensual experiment led to Facebook manipulating the news feeds of almost 690,000 users, showing some of them more positive updates and others more negative ones, all to see how it affected their moods.

If there was a week in January 2012 when you were only seeing dead kittens or cute babies in your feed, you might have been part of the study.

Alternatively, they were trying to get you pregnant and sell your data to Target. That’s quite gross actually. I wonder how many babies were conceived because Zuckerberg made your partner all broody during his online manipulations.

The result was that Facebook could manipulate our news feeds to alter our emotions while we used the social network.

6) Self-Censorship

Facebook tracked every entry of more than five characters that didn’t get posted within 10 minutes. It found that 71 per cent of users “self-censored,” drafting comments they never posted.

Many others edited their posts before sending them out to the social network. And this is all from content you haven’t even explicitly sent to the social network yet.

7) Selection Effects in Online Sharing

The primary purpose of this study was to find out whether broadcasting your intention to buy something could affect your friends’ buying interests.

Facebook offered exclusive deals, like free items, to specific users. If you accepted an offer, it would either be auto-shared so all your friends could see it or you’d be given a choice in the matter.

The second group got a button they could click to choose whether their offers were broadcast. Only 23% of the users in the study were given the free will to opt out of sharing with the rest of their network; Facebook mostly shared shopping habits with your network without consent.

Have you ever wondered what kind of person listens to The Police’s “Every step you take / I’ll be watching you” and thinks, “Ahhhh, I can cash in on that”? Zuckerberg did, the fck*n weirdo.

Seriously, this is scary!

On their own, some of these experiments look quite benign. But when you think of how Facebook could combine these kinds of manipulations, and for the wrong reasons, it becomes terrifying, and it can only do this because you choose to give up your data to use its “free platform.”

So, from Target snitching to your parents about underage sex to Zuckerberg virtually anal probing hundreds of millions of users around the world, surely it can’t get worse for Web 2.0 platforms?

In 2013, former CIA employee and NSA contractor Edward Snowden leaked highly classified documents about global surveillance programs. What was released was, in my opinion, one of the most damaging leaks of a generation.

The leak detailed one of the most comprehensive domestic and international espionage programs ever known, something “conspiracy nuts” had been describing for years.

The leaks described nationwide telephone tapping in co-operation with telephone companies and the tapping of internet infrastructure.

The documents showed the NSA promoting the use of weakened encryption in widely deployed technologies, including paying RSA $10 million to backdoor its encryption products.

The agency also demanded backdoors into some of the largest tech companies and Web 2.0 platforms, such as Apple, YouTube, Yahoo and Facebook, to name a few.

Companies that didn’t outright install direct access to their servers got their data-centre traffic tapped instead. Tools like XKeyscore allowed intelligence analysts to search all intercepted internet traffic by keyword.

In the case of Yahoo!, company executives installed direct access to all its mail servers for the spy program without even informing their Chief Security Officer. This could have led to serious consequences for Yahoo! users, if any still existed.

The spy program even extended to intercepting hardware deliveries, such as laptops and servers, to install malware on them before they continued their journey to customers.

An NYT report even revealed that the NSA and British authorities teamed up to find out how many pigs you killed in Angry Birds. Sounds like the kind of project you give an intern, right?

So, the current Web that we have today is broken and unfit for purpose.

Why?

Thirty years into mass adoption of the Web, our data architectures are still based on the same paradigm they started with: stand-alone computers, where data is centrally stored and managed on the servers of trusted institutions and sent or retrieved by a client.

The current web, with its client-server data infrastructure and centralised data management, has led to the creation of data monarchies with many single points of failure, as we can see from the recurring data breaches of online service providers.

This centralised web could also be described as being built on stateless protocols; they don’t have a native mechanism to authoritatively say who owns what, and who has permission to do what.

Stateless protocols like those of the web can only transfer information; the sender and receiver of that information are unaware of each other’s state.

This lack of state comes from the protocols that underpin the web, such as TCP/IP for data transmission and the protocols built on top of it, such as HTTP for transmitting hypertext. These protocols govern how data is transmitted, not how it is stored or who owns it.
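To make the statelessness concrete, here is a minimal sketch (a hypothetical demo, not from any real Facebook or NSA system): a tiny local HTTP server that, like any stateless web server, knows nothing about a client beyond what each individual request carries. Two identical requests get identical answers; the only “state” that exists is whatever the client itself chooses to send along, such as a cookie.

```python
# Hypothetical demo of HTTP statelessness: the server keeps no memory of
# previous requests; each request is handled entirely in isolation.
import http.client
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

class EchoHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # The only context available is what this single request carries.
        body = f"path={self.path} cookie={self.headers.get('Cookie', 'none')}".encode()
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # silence per-request logging

server = HTTPServer(("127.0.0.1", 0), EchoHandler)  # 0 = pick a free port
threading.Thread(target=server.serve_forever, daemon=True).start()
port = server.server_address[1]

def get(path, cookie=None):
    conn = http.client.HTTPConnection("127.0.0.1", port)
    headers = {"Cookie": cookie} if cookie else {}
    conn.request("GET", path, headers=headers)
    reply = conn.getresponse().read().decode()
    conn.close()
    return reply

first = get("/page")                       # the server knows nothing about us
second = get("/page")                      # ...and still doesn't on the next request
with_state = get("/page", cookie="id=42")  # state exists only if the client sends it
server.shutdown()

print(first == second)  # True: identical replies, no memory between requests
print(with_state)       # the cookie is the client's state, echoed back
```

This is exactly why platforms like Facebook layer their own identity and session machinery (cookies, login tokens, centralised databases) on top of the stateless web: the protocol itself has no notion of who you are or what you own.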

Enough with the rant. The big question is: can we fix this broken web?

Let’s leave it here for now. We’ll answer this question in our last part of this series.

I’m just getting started writing on Medium, so I’d appreciate your feedback.

If you found this an enjoyable read, please share it with your network, and remember to follow me @NshCore
