The human solution to Facebook’s machine-produced problems also won’t work
Facebook is doomed.
True, all companies are mortal. (Geoffrey West has been telling us how and why for years.) But Facebook is actually designed to fail in a world that stops tolerating the way Facebook works, and can’t quit working.
Alas, what will make it fail is what makes it work.
I first wrote about this way back in 2012, in After Facebook Fails. I had the timing wrong (expecting a crash in 2013), but the reasons right:
- That Facebook is based on what @EliPariser in The Filter Bubble calls “a bad theory of you”—even though the company clearly knows how to herd billions of people into its pen.
- That Facebook was building a massively complex machine for attracting, manipulating and advertising at people that was both uninterested in understanding itself and impossible for its operators to completely understand. In other words, it wasn’t designed or built to fully account for everything it does.
- Things would go badly because of #s 1 and 2.
- The right response by the market was to let giant flawed systems such as Facebook’s fail, and finally to start putting social and business engagement tools in the hands of human beings, where they should have been in the first place.
What I didn’t factor in were —
- How fully business would buy into the promise of Big Data, sold by McKinsey, IBM, Salesforce, Microsoft Dynamics, Oracle and other arms merchants to big business, utterly mindless toward inevitable collateral effects, including damage to market trust, accumulation of toxic assets, and inevitable regulatory restrictions, such as the GDPR. (I unpack this in After Peak Marketing.)
- How in the course of doing that, business would (again encouraged by arms merchants) hand way too much budget and power to CMOs, who should have been called DTOs, since they were really Digital Targeting Officers, operating with nearly zero respect for the marketing lessons taught by Peter Drucker, Theodore Levitt, Philip Kotler, Jack Trout, Al Ries and other lighthouses on the rocky shoals of negative market sentiment, against which companies led by bad DTO navigation now feel their hulls grinding.
- How much bigger adtech and martech (and their biggest practitioners, Facebook and Google) would need to grow, fed by DTO-led spending, before their business methods — especially around data gathering — would begin to fully earn the collateral effects of #1 and #2.
- That the first large-scale responses to surveillance overreach would not come from the market (there was almost none of the investment I called for back in 2012), but from regulators, most notably with the EU’s General Data Protection Regulation (GDPR), which became enforceable in May.
- That all the internal policies and controls Facebook could put in place would not counteract the thing it does best for its advertising customers: microtargeting. When you know as much as Facebook does about everybody on it, and you add what advertisers can also know (as did Cambridge Analytica for the Trump Campaign), you can manipulate people personally, without either the person or the advertiser knowing it. (Or with enough plausible deniability to get away with it.)
In Facebook Can’t Be Fixed, John Battelle argued on 7 January 2018 that “Facebook’s fundamental problem is not foreign interference, spam bots, trolls, or fame mongers. It’s the company’s core business model, and abandoning it is not an option.” In an earlier piece titled Lost Context: How Did We End Up Here? John visits how Facebook got into that business, and went to hell after that. His subhead explains, “Facebook and Google’s advertising platforms are out of control. That used to be a good thing. Now…not so much.”
That’s an understatement. See, Facebook’s form of advertising amounts to voyeurism for hire. That was never a good thing. At a certain point having Goliath peek into personal windows creeps out too many people, including regulators.
How easy do you think it is for Facebook to change: to respond positively to market and regulatory pressures?
Consider this possibility: it can’t.
One reason is structural. Facebook comprises many data centers, each the size of a Walmart or few, scattered around the world and costing many $billions to build and maintain. Those data centers maintain a vast and closed habitat where more than two billion human beings share all kinds of revealing personal shit about themselves and each other, while providing countless ways for anybody on Earth, at any budget level, to micro-target ads at highly characterized human targets, using up to millions of different combinations of targeting characteristics (including ones provided by parties outside Facebook, such as Cambridge Analytica, which have deep psychological profiles of millions of Facebook members). Hey, what could go wrong?
In three words, the whole thing.
The other reason is operational. We can see that in how Facebook has handed fixing what’s wrong with it over to thousands of human beings, all hired to do what The Wall Street Journal calls “The Worst Job in Technology: Staring at Human Depravity to Keep It Off Facebook.” Note that this is not the job of robots, AI, ML or any of the other forms of computing magic you’d like to think Facebook would be good at. Alas, even Facebook is still a long way from teaching machines to know what’s unconscionable. And can’t in the long run, because machines don’t have a conscience, much less an able one.
You know Goethe’s (or hell, Disney’s) story of The Sorcerer’s Apprentice? Look it up. It’ll help. Because Mark Zuckerberg is both the sorcerer and the apprentice in the Facebook version of the story. Worse, Zuck doesn’t have the mastery level of either one.
Nobody, not even Zuck, has enough power to control the evil spirits released by giant machines designed to violate personal privacy, produce echo chambers beyond counting, amplify tribal prejudices (including genocidal ones) and produce many $billions in an advertising business that depends on all of that—while also trying to correct, even as they do what they were designed to do, the massively complex and settled infrastructural systems that make all of it work.
Switching metaphors one more time, Facebook is Humpty-Dumpty, and Humpty is already on the ground. None of King Mark’s horses (e.g. better algorithms) or men (and women, doing icky jobs) can put Humpty together again.
So we might look at what’s happening for Zuck in terms of grief stages: denial, anger, bargaining, depression and acceptance.
At first he denied that the problem was there — even as fraudulent and misleading ads ran right next to the post where he did the denying. I suppose he went through the anger stage in private. Now he’s at the bargaining stage, betting that humans with awful jobs can halt the rising tide of outrage and embarrassment.
He’s not alone.
In The Daily Beast, Jason Kint (who heads DCN, online publishing’s trade organization) unpacks Facebook and Google’s Dirty Secret: They’re Really Junk Mail Empires, explaining in his subhead that the two “are finally losing some of their power, but most people still don’t get the dubious business practices that make them most of their money.” (I’ve also been pasting the junk mail label on adtech, for many years. Good to see Jason helping with that. We need all the metaphors we can muster here.)
In How to Fix Facebook — Before It Fixes Us, Roger McNamee, an investor and old friend of Zuck’s, deeply examines What Went Wrong, and teams up with ethicist Tristan Harris to produce an eight-point prescription for making Facebook stop doing all the awful things that it does, mostly as collateral effects, and without meaning to.
And I do hope Facebook does what they suggest, because all eight of those things are good to do. But man, are they hard. Take the first suggestion, for example:
First, we must address the resistance to facts created by filter bubbles. Polls suggest that about a third of Americans believe that Russian interference is fake news, despite unanimous agreement to the contrary by the country’s intelligence agencies. Helping those people accept the truth is a priority. I recommend that Facebook, Google, Twitter, and others be required to contact each person touched by Russian content with a personal message that says, “You, and we, were manipulated by the Russians. This really happened, and here is the evidence.” The message would include every Russian message the user received.
Is Facebook even capable of saying which ad or fake news story was placed in front of any individual by any corporate advertising customer? Or knowing which ads or stories are fake or politically motivated? Or were placed directly or indirectly by Russian operatives?
And what about the 99.x% of fake shit placed by non-Russian bad actors? Take a look here:
That’s Mark Zuckerberg’s 23 November 2016 post saying (among other delusional things) “more than 99% of what people see is authentic,” as it appeared to me, in my browser. Ev Williams, in his browser, had his own pair of icky ads:
In After Peak Marketing, I looked at those and wrote,
All four ads are flat-out frauds, in up to four ways apiece:
1. All are lies (Tiger isn’t gone from Golf, Trump isn’t disqualified, Kaepernick is still with the Niners, Tom Brady is still playing), violating Truth in Advertising law.
2. They were surely not placed by ESPN and CNN. This is fraud.
3. All four of them violate copyright or trademark laws by using another company’s name or logo. (One falsely uses another’s logo. Three falsely use another company’s Web address.)
4. All four stories are bait-and-switch scams, which are also illegal. (Both of mine were actually ads for diet supplements.)
Again, this is how Facebook was designed to work.
It should help Zuck to know that all companies fail—and that they just fail faster in Silicon Valley.
Google has the same problem, by the way, but is more aware of it, more diversified, founded on far better intentions (e.g. that nice stuff about gathering and sharing all the world’s knowledge) and therefore more likely to survive, at least for a while.
It also helps to remember that all companies have souls born of founding purposes. And there’s a helluva big difference between a search engine meant to find “all the world’s knowledge” and one meant to find hot girls on a college campus.
Now let’s go deeper.
Because what matters far more than Facebook and Google is that we live digital lives now, on a network that puts us all a functional distance apart of zero. (When we’re connected, that is. The distance apart when we’re not is infinite.)
This is new to human experience.
What we know about digital life so far is largely contained within what we’ve retrieved from the analog life that preceded it. To wit,
- Google, Facebook, Apple and Amazon might all deal in digital goods, but their structures and operating methods mostly improve on the ones modeled by Carnegie, Ford and J.P. Morgan.
- YouTube and Netflix are TV 3.x (where over-the-air is 1.x and Cable is 2.x).
- BuzzFeed, Verge and Vox are all print magazines in digital drag.
- Podcasts are shattered remnants of radio.
- The Web is networked Gutenberg.
- Search engines are library card catalogs.
- AI systems just automate decisions based on how shit gets remembered. In fact, one could make the case that the whole freaking Internet is about helping humanity know and remember shit.
Marshall McLuhan says all technologies are extensions of ourselves. Hammers, pens, binoculars, cars and computers all give us ways to do what we can’t do with our brains and bodies alone. The list of bullets above is just rudiments of bigger, better and other things surely to come.
It helps to recognize that we are still going through early stages in our new Digital Age. Everything we know about digital life, so far, is contained within prototypes such as Facebook’s and Google’s. And all of those prototypes are just projects. If you doubt that, look at your computer and your phone. Both are either new or to some degree already obsolete. Hell, even the new ones are old. Nothing will feel older a year from now than today’s latest Samsung and Apple mobile thingies.
It isn’t turtles all the way down, it’s scaffolding.
So let’s at least try to look below what big companies, Trump and other dancing figures in the digital world are doing. What is the floor they’re dancing on? And what is the ground under that floor?
At the very least, that ground is new and unlike anything that precedes it in human experience. Nothing matters more than at least trying to understand it.
And I’m not saying I do. Pointing at rocks doesn’t make one a geologist. But it’s at least clear to me that we need to understand what made Facebook possible. That will be a lot more helpful than blaming Zuck for the biggest broken egg ever laid by tech.
Original version published at doc.blog on January 5, 2018. Latest edit (for tightness): 5 March 2019.