Pax Facebook

Welcome to 2030. After decades of turmoil, politics are sane again. We no longer dislike the bankers — the financial industry does what it needs to do for a reasonable price. Anxiety levels are down, happiness levels are up. The anger that was so common in the late teens and early twenties has gone. Pax Facebook is working. How did we get here? A quick recap.

After the Democrats nominated Elizabeth Warren to take on Trump in the 2020 election, many people, alarmed by her left-wing populist stance, hoped that Mark Zuckerberg would run as an independent candidate. Mark didn’t. Warren went on to win that election on a platform of soaking the rich, lowering the retirement age to 60, universal healthcare and the nationalization of critical industries, declaring that the era of market capitalism was over.
 
While the spread of fake news and conspiracy theories on Facebook arguably tipped the 2016 election, in 2020 ad targeting played a pivotal role. Facebook’s advanced artificial intelligence now made it possible to tweak any political message to maximum effect. Dubbed by some Political Lying as a Service, the system’s algorithms adjusted the tone and the content of a message to suit not just the personal profile of the viewer, but also their current emotional state.
 
It is not just that Facebook has detailed personality profiles on each of their 3 billion users. They have the history of those profiles too. Their AIs have been trained on those histories and by now can reliably predict how you’ll react to a piece of news or a funny cat video. Facebook knows what sort of day you’ll have before you do.
 
Zuckerberg decided he could no longer allow political operators access to such a powerful tool. In fact, he decided that nobody but Facebook should have access. In a short press conference he told the astonished tech press that Facebook would be shutting down their ad system effective immediately. They were working on another stream of income.
 
Wall Street decried the move as an act of financial suicide. But there was little to be done. Facebook’s founder controlled the majority of the voting rights and Facebook had enough cash on hand to survive for years even without any new income streams. The stock price took a beating, but soon stabilized when it became clear that insiders hung onto their shares.
 
If you reread Zuckerberg’s essay from 2017 titled “Building Global Community”, it seems like what happened next had been planned for years: “To change Facebook from a social media company to a platform providing the social infrastructure for a world community.”
 
The first thing Facebook absorbed into its new platform was the financial industry. Combine the knowledge of what everybody thinks, wants and plans with the world’s best machine learning researchers and you have the makings of an unbeatable hedge fund. That’s not quite what Facebook did though.
 
The Facebook Fund was part mutual fund, part hedge fund and part peer-to-peer funding market. Open to anybody, its goal was to provide funding to whatever company needed it most, whether publicly or privately held, while providing investment opportunities to whoever had some money to spare, matched to their exact appetite for risk.
 
In short, do what the capital markets are supposed to do, but now powered by Facebook’s AI and their knowledge of the world. As Zuckerberg put it, the key was not to try to beat the market, the key was to make the market better for all involved. Taking a reasonable slice of these improvements turned out to be enough to keep Facebook going.
 
Shutting down access to the ad system was only the first step towards fixing society. Just like the social media crises before it — the fake news in 2016, the AI accounts in 2018 — the ads weren’t the real problem in 2020. The economy was doing OK, unemployment was low, but people were angry. Angry people look for simple solutions and someone to blame. Populists provide both.
 
Why were people so angry? For many reasons, but in Facebook’s internal analysis of these causes, one thing stood out: Facebook.
 
Facebook shows only a fraction of what your friends post. What you see and what you don’t is determined by a piece of software called the news feed algorithm. This complex program takes in any number of variables for any given post to predict whether you will interact with it. Posts that are predicted to attract more interaction show up more often.
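
As a purely illustrative sketch (none of these feature names or weights come from Facebook), the idea amounts to scoring each post by its predicted probability of interaction and sorting the feed by that score:

```python
# Hypothetical sketch of engagement-driven ranking; features and weights are invented.
from dataclasses import dataclass

@dataclass
class Post:
    author_affinity: float   # how often you interact with this author, 0..1
    recency_hours: float     # age of the post in hours
    media_richness: float    # text < photo < video, 0..1

def predicted_interaction(post: Post) -> float:
    """Toy stand-in for a learned model: probability that the viewer
    will like, comment on or share the post."""
    freshness = max(0.0, 1.0 - post.recency_hours / 48.0)
    return min(1.0, 0.5 * post.author_affinity
                    + 0.3 * freshness
                    + 0.2 * post.media_richness)

def rank_feed(posts: list[Post]) -> list[Post]:
    # Posts predicted to attract more interaction show up first, and more often.
    return sorted(posts, key=predicted_interaction, reverse=True)
```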
 
This had been working well for Facebook. The more posts you see that you want to interact with, the more you interact with Facebook and the more you want to come back. Not only that, it trains users to post things that invite interaction. Some would even argue it hooks into the same part of the brain that slot machines do. You post something and watch the likes and reposts stream in. Or not, so you throw in another coin.
 
The news feed algorithm had made Facebook great, but as it turned out, it hadn’t brought happiness to Facebook’s users. Unhappy users meant unhappy citizens, which meant an unhappy society, which in turn was bad for the company. Something had to change: what to optimize for.
 
Nobody outside the company knows, of course, exactly how the news feed algorithm works. But from leaks and official statements we know that it optimizes for three things. It still takes into account how likely you are to interact with a post, or as the company likes to call it, the relevance. But it now also models the psychological impact a post will have and optimizes for user happiness. Finally, it considers the impact on society at large.
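
Taken at face value, that description amounts to a weighted, multi-objective score. A speculative sketch, with invented weights and signal names, might look like this:

```python
# Speculative sketch of the three-objective score described above.
# The weights and signals are invented; nothing here reflects actual Facebook code.

def feed_score(relevance: float,         # predicted interaction probability, 0..1
               happiness_delta: float,   # modelled change in the viewer's mood, -1..1
               societal_impact: float,   # modelled effect on public debate, -1..1
               w_relevance: float = 0.5,
               w_happiness: float = 0.3,
               w_society: float = 0.2) -> float:
    """Combine relevance, user happiness and societal impact into one ranking score."""
    return (w_relevance * relevance
            + w_happiness * happiness_delta
            + w_society * societal_impact)
```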
 
The rough mechanics of these aren’t hard to understand. Show a picture of a funny cat next to a story about a politician and the politician becomes a bit more liked — Facebook had been running mood manipulation experiments since at least 2012. The advanced AI and the personality modelling developed in the years since have made happiness regulation as straightforward as controlling the temperature in a room.
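
The thermostat comparison can be made concrete with a toy feedback loop: measure the user's mood, compare it to a set point, and nudge the share of mood-lifting posts accordingly. Everything below (the signals, the gain, the very idea of an "uplift share") is invented for illustration.

```python
# The "thermostat" analogy as a toy proportional controller: nudge the share of
# mood-lifting posts in the feed toward a happiness set point. Purely illustrative.

def adjust_uplift_share(measured_happiness: float,  # estimated mood, 0..1
                        target_happiness: float,    # desired set point, 0..1
                        uplift_share: float,        # current fraction of feel-good posts
                        gain: float = 0.5) -> float:
    """One control step: raise the share of uplifting content when mood is
    below target, lower it when mood is above target."""
    error = target_happiness - measured_happiness
    return min(1.0, max(0.0, uplift_share + gain * error))
```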
 
It has helped a lot. The political rifts of the late teens and early twenties in democratic societies have mostly healed. Faith in experts and institutions has returned. A sense of optimism has taken hold. Support for populism has also dropped, but that hasn’t led to a return to party politics as usual. The role of politics and government has changed, too.

We no longer expect politicians to come up with visions for the future and give our votes to the ones that promise the most; we now look for capable managers who can best run the country on a day-to-day level. The big ideas come from think tanks, academia and online discussions. Or we copy them from other countries.
 
If these ideas about how to improve society are contradictory, as they often are, we expect the leadership to conduct a carefully designed experiment in the real world to find out which one works best, present the results in clear terms, pick a winner and move on. This works in science and business, so why not when running a country?
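
In code terms, the "run the experiment, pick a winner" routine is nothing more exotic than an A/B comparison. A minimal sketch, with a made-up decision rule and threshold:

```python
# Minimal sketch of "try both policies, pick a winner"; the decision rule is invented.
from statistics import mean

def pick_winner(outcomes_a: list[float],
                outcomes_b: list[float],
                min_lift: float = 0.01) -> str:
    """Declare a winner only if the mean outcome differs by at least min_lift;
    otherwise call it a tie and keep the status quo."""
    lift = mean(outcomes_b) - mean(outcomes_a)
    if lift > min_lift:
        return "policy B"
    if lift < -min_lift:
        return "policy A"
    return "tie"
```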
 
Some accuse the company of abusing the system. Facebook is manipulating our emotions for financial gain and political control, they say. Tech companies should focus on providing evenhanded technology based on neutral algorithms. They should stay away from making moral calls about what constitutes a just society, let alone manipulating society into being a better host for what is essentially a rent-seeking monopoly.
 
The standard rebuttal to this concern is that there is no such thing as a neutral algorithm. When Google — now the search division of Facebook — changes the way it ranks websites, this has moral side effects. The goal might be to bring up more relevant and correct answers to factual questions, but not all questions are factual and the change impacts all queries.
 
You can ask your car “when is it acceptable to steal?” and the Google assistant will answer by referring to some famous philosopher. “According to John Stuart Mill…” That seems to avoid taking a moral position on the issue. But it doesn’t really. It just changes what type of moral judgement is made. The algorithm still has to pick a philosopher.
 
Google could return the answer that most people hold to be true. Or the answer that will most likely seem true to you. Or the answer that has the most support among experts. But none of these approaches is neutral. Going with what most people believe reinforces existing prejudices. Giving everybody their own truth leads to radicalisation. Relying on experts isn’t really an answer either: the experts aren’t always right and, besides, you still have to pick the experts.
 
Similarly, any algorithm that determines what shows up in your news feed and in which order has a political bias built in. The algorithm will optimize for something, whether it is the number of interactions you have, your happiness or civilized debate in the larger society. All of these have political consequences; there is no neutral.
 
Paraphrasing the Whole Earth Catalog, Mark is rumored to have said: “We are already playing God, we might as well try to be good at it.” And so we live in a Facebook world, a world that works. The bull and bear markets of Wall Street are now a thing of the past. We invest our money in the Facebook ETFs that return a predictable yearly result. Political debate takes place on Facebook and is thoughtful and considerate. But most of all, we’re happier than ever. And no longer so angry.