The fatal flaws of Facebook: dissected

How to fix the features that broke the web

Corduroy Bologna
The Startup
6 min read · Apr 28, 2019


Caspar David Friedrich. Two Men Contemplating the Moon (Zwei Männer betrachten den Mond), ca 1830

It all started with a feeling: Why am I scrolling this endless feed? What is my purpose here? In a world dominated by social media, these questions are the 21st-century equivalent of the existential crises one might read about in a work of Sartre.

As I delved deeper, this initial unease at Facebook’s core feature (which, as I would learn, is also the most valuable marketing tool in history) evolved into something of an obsession. The more time I spent reading about it and analyzing it, the more I discovered the keys to its effectiveness, and the seeds of the scrutiny and loss of trust that plague the company to this day.

I’ve dissected those features and come to an understanding of both what made them so effective and what has made them susceptible to exploitation, by the company itself as well as by malicious third-party actors. With that understanding of the inner workings of the world’s dominant social medium and content-sharing platform, I have also set out against the status quo by proposing an alternative to each of those fatal flaws (or shining lights?), creating a framework for a hypothetical anti-Facebook: an ethically driven social media platform immune to the kinds of exploitation Facebook allows.

Here are those findings.

Facebook’s Fatal Flaw #1: The recommendation algorithm.

The first step to understanding Facebook is understanding machine learning (a form of artificial intelligence that has become all the rage in the tech world over the past few years). We know how the newsfeed works from a user’s perspective, but how does Facebook do it? How does it keep serving content that keeps us coming back for more?

How does it work?

Nobody outside the company really knows how Facebook’s newsfeed algorithm works, because it is entirely opaque. But we can make an educated guess by comparing it with systems that produce similar results, the published research on recommendation systems, and Facebook’s own patent filings.

In layperson’s terms, Facebook compiles data on what you “like” and, based on what that content is associated with (topics, themes, companies, products, prominent figures), applies a set of generalized tags to your profile. These tags can then be used to present similar content to people with similar tags. The more data, the more detailed the tags become, going so far as to predict your age, political affiliation, religion, sexual orientation, social standing (income bracket), and even intelligence level (IQ). It’s no wonder advertisers like Facebook so much: its advertising tool makes it easy to select the tags of your desired customers, so the people who see your ads have the highest chance of buying your product. So far, so good (thanks in part to Facebook’s COO, Sheryl Sandberg, who was part of the team that applied the same system to Google’s ad offering several years earlier and was hired to duplicate its success at Facebook; who would have thought it would go so well?).
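The tag-based profiling described above can be illustrated in a few lines. This is a minimal sketch, not Facebook’s actual system; the content items, tags, and scoring scheme are all invented for the example.

```python
from collections import Counter

# Hypothetical content catalog: each item carries topic tags.
CONTENT_TAGS = {
    "post_1": ["fitness", "nutrition"],
    "post_2": ["fitness", "running"],
    "post_3": ["politics", "economy"],
}

def build_profile(liked_posts):
    """A user profile is just the aggregate of tags on everything they liked."""
    profile = Counter()
    for post in liked_posts:
        profile.update(CONTENT_TAGS.get(post, []))
    return profile

def match_ad(profile, ad_target_tags):
    """Score an ad by how strongly its target tags overlap the profile."""
    return sum(profile[tag] for tag in ad_target_tags)

user = build_profile(["post_1", "post_2"])
print(user.most_common(1))          # dominant interest: [('fitness', 2)]
print(match_ad(user, ["fitness"]))  # higher score = better-targeted ad: 2
```

The point of the sketch is that the user never declares any of this; the profile emerges entirely from accumulated likes, which is why more data yields ever finer-grained targeting.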

But where did it all go awry?

When people figured out that Facebook’s intricate system of user profiling could be used to sell not a product, but an agenda. That’s when. And we all know the outcome of that.

Fast forward to today, and national governments are shutting down the platform in sensitive moments for fear that they could provoke mass rebellion or uprising. And that’s just the start of it.

Facebook’s Fatal Flaw #2: The like-driven commenting system.

What is a social network without user interaction? Facebook’s commenting system offers a fractal-like thread of comments, comment-replies, reply-replies, and likes on each of them. The icing on the cake is the burst of adrenaline we get when we see a lump of notifications listing the people who have liked something we’ve written.

But just for a moment, let’s take ego out of the equation. What are people really expressing when they like your comment? If you could, would you ever reject someone’s like? What if you knew the like came from someone with an extreme view that your comment seemed to support, and that your comment would then be held up as representative of a belief used to justify extremist action, even though it was driven not by any extreme belief at all, but by a rational observation?

Okay, let’s put it more simply. People on social media do not shy away from taking sides. We see it on Reddit, Twitter, and of course Facebook, where a viral post expresses a one-sided opinion and the two most popular comments underneath it perfectly represent the extreme view of each side. The problem is that what makes those comments viral is not the information they contain, but the belief they represent. By clicking “like” on such a comment, a person is expressing agreement with the side that comment represents rather than rationally assessing the logical or factual validity of the comment itself.

It is in this way that Facebook’s commenting system drives an ecosystem of toxic virality, where the sensationalism of a piece of content matters more than its accuracy. If people can express their one-sided views through comments, or likes thereof, their social-media mission is complete, to the detriment of everyone else on the platform. Facebook has done nothing to moderate the type of content that can go viral (based on liking and commenting), and so people continue to act in the ways that are rewarded: posting sensationalized content and polarized comments, because those are what get the most likes. Since this is natural human behavior, it has gotten out of hand.
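The dynamic above follows directly from the ranking signal. A toy sketch, with invented comments and like counts, shows that a ranker which sees only engagement will always surface the polarizing takes over the measured one, no matter what any of them actually say.

```python
# Hypothetical comment thread: the ranker never sees accuracy, only likes.
comments = [
    {"text": "Measured, factual observation", "likes": 12},
    {"text": "Extreme take for side A", "likes": 940},
    {"text": "Extreme take for side B", "likes": 875},
]

def rank_by_likes(thread):
    """Like-driven ordering: engagement is the only input signal."""
    return sorted(thread, key=lambda c: c["likes"], reverse=True)

for c in rank_by_likes(comments):
    print(c["likes"], c["text"])
# The two extreme takes land on top; the factual comment lands last.
```

Nothing in the function is malicious; the polarization is an emergent property of optimizing for the one signal that sides are happy to supply.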

What’s the alternative?

To resolve the problems that Facebook’s feature dynamics have brought about at the large scale (e.g. mass paranoia, political manipulation, the undermining of democracies, and so on), we have to start at the small scale: feature adjustments that create a platform valuing insight and accuracy rather than sensationalism and polarization.

  1. A non-algorithmic newsfeed — What if you could simply view everything your friends post, in order of recency (which you can)? You might feel overwhelmed by the amount of content you need to scroll through to get fully up to date (especially on the juicy stuff the algorithm pushes to the top). Is there an in-between (not algorithmically controlled, not fully chronological)? Facebook would probably like you to think not, since that would obliterate its business model. But it is certainly possible, and arguably even simpler. It would entail labeling content based on topics and themes: rather than labeling you and feeding you content accordingly, label the content and let you curate your own intake based on your interests and mood at a given moment. I’ve even co-written a research paper about it.
  2. A non-like-driven commenting system — Rather than rewarding a user for popularity (which, as we have learned, is not synonymous with factual accuracy or logic, and is often counter to both), a platform can reward users for constructive contributions to the community. Instead of a liking system, a platform can allow comments to be “endorsed”, putting some of the onus for a comment not only on the person who wrote it, but on those who choose to associate their names with it. The platform can also assess the extent to which a comment is biased or impartial, and attach a neutrality score to the user who posted it. Finally, the informational content of the comment can be assessed by encouraging citations, which can then be cross-checked to verify accuracy.

Despite all that, I still won’t tell you to delete your account (although I would, if I were you). Instead, I encourage you to consider more deeply the consequences of your and others’ actions on the platform, and whose motives they serve. Wittingly or not, we are all part of the new digital information ecosystem, and it is broken. And it isn’t going to change through anything other than individual human behavior, because regulation certainly isn’t having much of an effect yet.
