If you’re still considering whether or not to get off of Facebook, here’s one good reason you should
Hint: it’s not because of fake news or Zuckerberg’s inhuman tendencies
I’m writing this for two reasons: popular opinion is colored by unfounded claims and easy targets, leading to a widespread misunderstanding of what is actually wrong with the current social media ecosystem; and it’s easier to write the explanation once than to repeat it every time someone asks why I’m no longer using Facebook (the long story loses its force in short form).
While there are several problems with Facebook, here I will go into only the one I believe is most important, as it is deep-rooted in the “DNA” of the platform and hints at its potential to gain even more influence over our lives than it already has. That DNA (not to be confused with the leadership/personality-based DNA described in this well-written piece) is the ad-based business model, without which Facebook would be nowhere near its current size and dominance.
The recent controversies are only a preview of the power Facebook could exercise if it chose to. In an ideal (non-profit-driven) world, it wouldn’t. But if manipulating users’ opinions and emotions means more revenue, I don’t think any tech company has the empathy to back down. Such companies are profit-driven machines, sustained only by their ability to serve relevant content to their users and so win ever more engagement. In Facebook’s case, each like or reaction we make is a data point that Facebook uses to understand us, allowing it to serve even more relevant content in a continuous cycle of engagement and targeted advertising. This in itself is not bad. But the potential of such data-driven content serving reveals the true negative implications of Facebook’s business model.
Here’s the cycle broken down:
- Users react to and like posts
- Facebook aggregates this activity data to pinpoint each user’s interests
- Based on those interests, Facebook feeds the user content more likely to garner a reaction
- Facebook gains a broad understanding of the user’s characteristics (interests, hobbies, political affiliation, intelligence level, sexual orientation, career field, etc.)
- Facebook uses this knowledge of its users to serve ads that blend in with the content they are used to seeing, so the ads too have a very high likelihood of being clicked
A high likelihood of being clicked means the advertiser has a strong chance of making a return on the ad. From this it’s easy to see why Facebook has become the world’s largest marketing company: because it can target ads to relevant users so accurately, companies with a product to sell have no good reason to advertise anywhere else.
As the stockpiles of data keep growing, Facebook’s knowledge of its users improves, ad click-rates rise, and Facebook makes more money from advertisers.
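The feedback loop described above can be sketched as a toy simulation. To be clear, the topic names and click-through rates below are invented for illustration and say nothing about Facebook’s actual systems; the point is only to show how serving whatever gets reactions makes the inferred profile reinforce itself:

```python
# Toy, deterministic sketch of the engagement feedback loop: reactions
# become data points, data points steer what gets served next.
# Topics and click-through rates are hypothetical, not Facebook's.

TOPICS = ["politics", "sports", "tech", "music"]

def simulate_feedback_loop(true_interest, rounds=100):
    """Accumulate expected engagement per topic: the more a topic is
    served, the more reaction data it collects, so it keeps being
    served (real systems also mix in exploration; this sketch omits it)."""
    profile = {t: 0.0 for t in TOPICS}  # the platform's belief about the user
    for _ in range(rounds):
        # Serve whichever topic currently looks most engaging
        # (ties broken by list order).
        shown = max(TOPICS, key=lambda t: profile[t])
        # Expected reaction rate: high when the content matches
        # the user's true interest, low otherwise.
        ctr = 0.8 if shown == true_interest else 0.1
        profile[shown] += ctr  # each reaction is another data point
    return profile

profile = simulate_feedback_loop("politics")
top = max(TOPICS, key=lambda t: profile[t])
print(top, round(profile[top], 1))  # → politics 80.0
```

Once a topic takes the lead, it keeps getting served and keeps accumulating data, which is exactly the cycle that makes ads targeted through the same profile so clickable.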
But surely, Facebook is using our data responsibly, right?
This is the scariest part. For the most part, Facebook assumes an innocent countenance when it comes to the question of data usage. Controversies like that of Cambridge Analytica are passed off as exploitation by a third party, without the knowledge or intentional involvement of Facebook itself. But Facebook’s management of user data is far from innocent.
Given the amount of time the average person spends on the platform each day, there is no doubt the content we see influences the way we think. So what happens when Facebook chooses to use our data for the wrong purposes, showing us content meant to make us feel a certain way or believe a certain narrative or ideology? Facebook can, theoretically, manipulate people into falling in line with whatever agenda it finds most beneficial, which, most likely, is one in which Facebook itself is not seen as harmful to society. How do we know it isn’t doing this already? Its algorithm is completely opaque. Nobody but the engineers who created it knows what data determines what shows up in your News Feed.
It’s happened before. In June 2014, an article published by Forbes described a “massive psychological experiment” Facebook performed on its users to test how they react to emotional content in their feeds. The results showed that users exposed to more emotional content were more likely to create posts reflecting those same emotions (i.e., emotional contagion). Whether or not Facebook has done it since or plans to in the future, it has the power to manipulate the way its users think and feel, and this, I contend, is the most verifiable reason that Facebook is a malicious actor in today’s society and that everyone should reconsider using it.
Nick Sukiennik is a master’s student in computer science and founder of the Inflo Project, an initiative aimed at improving the transparency and reliability of social media platforms.