Kicking Facebook, Part 2

It wasn’t always this way… how and why Facebook became a dystopian dopamine engine.

Scott Matter
7 min read · Jan 13, 2018

I joined Facebook nearly 11 years ago, in January 2007. The fact that people I knew were on Facebook was probably what initially drew me to the platform. This was the era of Friendster (where I had a profile) and MySpace (where I did not). Connecting and interacting online was getting easier and more common.

In those heady days of Web 2.0, it was totally natural to add a bunch of personal trivia to my profile — bands I liked, movies I’d watched, sports I loved, you name it. Why wouldn’t I do that? It was like having a personal website, a blog, a messaging platform, and a photo sharing service all wrapped into one simple package. It was free, and I didn’t need to write a line of code to do it. Facebook made it easy to build a digital identity for my friends to see, and I had no idea how the company might use that information beyond displaying it to the people I chose to connect with.

A portrait of the author as a young man, circa 2006.

This was the key to Facebook going viral — it was a service that made it absurdly easy to get online and see what other people were doing (online). In a way, we’ve all got a bit of voyeur-exhibitionist in us, and Facebook lowered the barriers to entry to almost nothing — all the way down to 1) access to an internet-connected device, 2) the age of 13, and 3) a valid email address.

Facebook also made it easy to find people I knew, by mining my email contact lists, and to meet new friends, by recommending friends of friends to connect with, whether I knew them offline or not. I remember some of the original profile questions being about my relationship status (“it’s complicated”, anyone?), and what kind of relationships I was looking for. Looking back now, FB was one pivot away from becoming an online dating service.

I was a bit naive about security back then — I’m probably still woefully underinformed — and my profile was fully public at first. My one privacy measure was to make my profile name “Scott M” so that it would be harder for my students to find and connect with me. (I said I was naive, didn’t I?) Probably pretty early on, I changed my privacy settings to “friends of friends” and — I think — then down to “friends-only” for practically everything. I still managed to accumulate 400+ friends, and some of them are actual friends and family members.

I still love seeing people’s photos, especially now that old friends are reproducing and creating adorable little versions of themselves. But over time, my FB timeline changed to be more about news and politics. That suits me; I’m, let’s say, politically involved.

Most of the news and politics in my Facebook feed are the digital equivalent of preaching to the choir. Of course they are — we’re more likely to spend time engaging with things we agree with, except when we spend time arguing ourselves blue in the face (or the fingers) with people we disagree with.

The fact that most of what I see on Facebook comes from inside my own filter bubble probably says a lot about me — it might tell me that I spend the most time interacting with things I agree with. Ironically, in my case, the things I agree with are sometimes news articles railing against all the things I intensely disagree with in the world.

Facebook doesn’t care if you love or hate something, as long as you spend time engaging with it. Whether you’re rage-ranting with some [insert politically-opposite position here] idiot or loving it up with your besties, if you’re spending time, you’ll be given more things to spend time on. That’s by design.

(In light of an announcement today that Facebook is changing its newsfeed to prioritise things like “meaningful interactions between people,” I think this argument still holds. Facebook is essentially content agnostic — it does not really care what you’re interacting with others about, as long as you’re interacting on Facebook, and they will drive those behaviours with design and technology as much as possible. This change is similar to tobacco companies adding filters or selling “light” or menthol cigarettes. It’s nice marketing, and seems to be for the good of the users, but just as tobacco companies still bank on nicotine addiction, Facebook will continue to use variable reward and other techniques to exploit human psychology for its own benefit.)

How did we get here? — a speculative history of the moral corruption of Facebook

When I first joined Facebook 11 years ago, it was fun. It provided a whole bunch of useful services for free, and it was ridiculously easy to use. In terms of technology, that’s kind of the Holy Grail for the people who use it.

But free for users does not mean free for Facebook to provide — building software, storing data, and hosting services all cost money. Sooner or later, Facebook as a company had to make choices about how to make money. The choices the company made thoroughly corrupted it and turned the platform into something quite different from what it was.

Early on, FB attracted plenty of venture capital to keep the lights on and the perks flowing. Of course, investors don’t put their money in for shits and giggles, or out of the goodness of their hearts (even if they’re sometimes called “angels”). Eventually, investors want a return, either as a share of profit or by selling their stake (or a portion of it) for real money, preferably (a lot) more than they put in.

That sort of arrangement drives a company to seek growth, so that its shares will be more valuable in the future. If it doesn’t grow, there’s a good chance it’ll be sued by investors, and the company’s founders or executives black-balled and banished from startup-land.

(It apparently took Facebook about 5 years to become profitable, and about 8 years to make its IPO and become a publicly-traded company.)

For a company like Facebook, which provides a platform for people to interact with each other online, there are a couple of options to make the business profitable.

One option is to make (some or all) users pay for the service. The obvious downside here is that fewer people will use a paid service than a free service, either because they don’t have the money to pay for it or because there are free alternatives available. That limits growth pretty quickly, unless the company can find a way to become impossible to live without.

The other option is to find another way to generate revenue, so that users don’t have to pay directly with their own money. The obvious choice here is digital advertising.

Digital advertising has infested the internet from pretty early on, and a lot of companies — especially media publishers like the news company I work for — have chosen to use advertising revenue to support their operations. Those early decisions to tap into advertising money have had a huge impact on the internet and on a lot of the businesses that use it.

It’s not a huge stretch to say that reliance on digital advertising has been one of the main causes of distress in the news industry over the past few years, and it’s a problem a lot of media companies are desperately trying to fix. Print media, for example, continues to struggle with how (and why) to measure their audiences as a result of the advertising industry’s conventions and demands… But I digress.

Facebook’s decision to choose advertiser money over user money has probably been the single most corrupting factor in its history. It has had huge implications for how the company has tried (and succeeded, massively) to grow, and it is likely the source of a lot of the unethical, exploitative crap that the company seems to do.

It’s no secret that even though there are nearly 2 billion active users on Facebook, the real customers are companies and advertising agencies who want to get their ads in front of us. The service Facebook provides is a platform for connecting individuals with one another, and connecting advertisers to those billions of individuals.

The product Facebook sells is human attention. The main way for Facebook to grow as a business is to capture more human attention, and to make that attention worth more so it can be sold for higher prices.

Capturing more human attention on a social media platform requires at least one of two things: getting more people to join and use the platform, and/or getting users to spend more and more of their time on the platform.

With absolutely zero insider information, I can confidently assume that every decision Facebook makes about what new features to include — from push notifications and email alerts, to the replacement of user walls with a scrolling news feed, to Facebook memories, to games, to groups and buy/sell marketplaces, to check-ins and recommended things to do and see — is measured in terms of how much additional attention it captures for the company to sell on to paying customers.

Whatever benefits we get from Facebook as users — social interaction with people we know, like, and love (or hate), pictures of far away family and friends, news and information — the real purpose of Facebook is not to provide those benefits to us, but to use those benefits to steal more of our attention and sell it on to the highest bidder. Every bit of Facebook that any of us use is tainted by the company’s hunger for more attention.

This is part 2 of a 4-part series.

Read Part 3: How Facebook drives engagement, captures attention, and collects data, and what they do with it all.

Read the whole shebang, in one, nearly 5,000 word essay: Kicking Facebook


Scott Matter

Anthropologist (PhD, McGill 2011), strategic + service designer, small axe. Fascinated by complexity, collaboration, and change.