Here’s How Mark Zuckerberg Can Make Facebook Good for the World
Will Max and August Zuckerberg feel like what their father built is “good for the world” when they grow up?
Many parents encounter a moment when they look at their legacy through their children’s eyes, rather than their own. Mark Zuckerberg recently described such a shift in his point of view:
It’s important to me that when Max and August grow up that they feel like what their father built was good for the world.
According to many, including investors, former executives, researchers, U.S. agencies and other nations, Zuckerberg has some serious fixing to do. Roger McNamee, a former Zuckerberg mentor and early Facebook investor, starkly captured a prevailing sentiment:
It reads like the plot of a sci-fi novel: a technology celebrated for bringing people together is being exploited by a hostile power to drive people apart, undermine democracy and create misery.
Mark Zuckerberg has not been oblivious to these criticisms. While some consider him guilty of an initial period of denial, he is now intensely focused on navigating the regulatory, competitive and business model implications of addressing Facebook’s current deficiencies. Wired recently chronicled his ongoing struggles as “two years of hell.”
While fixing Facebook’s current problems is a monumental task, will it be enough to qualify as “good for the world” by the time Max and August grow up? I queried several experts to explore that question.
Alan Kay, the computing pioneer, education innovator and philosopher, is pessimistic. He reminded me of Marshall McLuhan’s observation: “We shape our tools, then our tools reshape us.” To Kay, the “reshaping of us” by Facebook is going quite poorly.
Kay observed that every technology is an amplifier of human propensities and inborn drives. This is a double-edged sword. It can yield beneficial changes, such as how the printing press sparked the Enlightenment and how computing revolutionized science. But technology can also have the opposite effect.
To Kay’s astute eye, Facebook feeds human universals, such as susceptibility to stories, status and kinship, in a narcissistic way that, in turn, magnifies our parochialism and tribalism. These are the root causes of the problems and upheavals we are experiencing.
Kay noted that this is not unique to Facebook. “Most modern inventions for communication,” he wrote in an email, “have the possible side effect of allowing pre-literate modes of thought to push literacy aside to allow society to slide back into oral traditional society’s tribal thinking.”
What makes Facebook, Google, Twitter, Instagram and other massive technology platforms especially dangerous is how the power of their reach heightens their amplification power:
Providing Pleistocene brains and sensibilities with factors of a billion or more power moves from simple danger to disastrous and possible final danger.
Bran Ferren, co-chairman of Applied Minds and former president of Walt Disney Imagineering, believes Zuckerberg needs to accept responsibility for what happens on Facebook. Just as you can’t host a party of a few hundred people and absolve yourself of what happens during it, Zuckerberg can’t host two billion users on Facebook and disown responsibility for what is said and done. Accepting responsibility is necessary for Facebook to have any chance of reaching some high-order, good-for-the-world potential.
Mark Pilipczuk, a technology industry marketing veteran, reinforced this point. “Whether [Zuckerberg] likes it or not, Facebook is not just a platform. It is a publisher because that’s how the users use the product.”
Both Ferren and Pilipczuk called for Zuckerberg to adopt journalistic standards, as a start. “He has to accept,” Pilipczuk told me, “that what users see on Facebook is the news, as most users aren’t trained nor take the time to check sources. That’s why proper journalism has a code of ethics. Journalists take great pains to try to get the story right.”
Facebook has long resisted this responsibility, however. While it has guidelines and employs contractors to weed out obscenity and moderate content, it wants no part of becoming an arbiter of truth. Wired recently characterized the principle that “Facebook is an open, neutral platform” that must not create or edit content as “almost like a religious tenet inside the company.”
Ferren’s response is, “Get over it!” Not only have newspapers long accepted such responsibility, he noted, but many other large online businesses, like eBay and Amazon, also take responsibility for what appears on their sites.
In addition, Ferren believes that Facebook needs to take the lead in defining and enforcing civility upon its users. Echoing Alan Kay, he sees Facebook as a new medium where users don’t yet understand what the norms of social interaction should be. Right now, he dubbed it “a petri dish for festering negativity.” People can post almost anything without consequences. This wouldn’t work in the real world, so why would we expect it to work on Facebook? As a result, the platform empowers mean-spirited discourse.
A good standard to apply, Ferren suggested, is the same level of civility that we would want in a good, real-world restaurant. Restaurant goers can be fun, loud and boisterous, especially with friends at their own table, but a threshold level of decorum is expected. Restaurant owners and staff expect to, and are expected to, enforce appropriate civility to the benefit of all. Inciting arguments at another table or disrupting others’ experiences would not be allowed, for example.
Ferren is optimistic that this is achievable without being draconian. There are certain things we might say in jest or even anger that are okay between friends, but not appropriate between strangers, for example. Some behavior is acceptable in one culture but perhaps not in another. Facebook needs to learn how to be contextually and culturally sensitive, how to guide users, and how to call out those who are not civil. And, using AI, this could be done mostly algorithmically.
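To make Ferren’s point concrete, the core idea — that the same words can be acceptable or not depending on social context — can be sketched in a few lines. This is a deliberately toy illustration with hypothetical phrases and labels, not a description of any system Facebook actually runs; a real version would use learned models rather than a lookup table:

```python
# Toy sketch of context-sensitive moderation. The phrase list and
# relationship labels are entirely hypothetical, for illustration only.
BANTER = {"you're ridiculous", "shut up"}

def allow_post(text: str, relationship: str) -> bool:
    """Return True if the post is acceptable in this social context."""
    phrase = text.lower().strip()
    if phrase in BANTER:
        # Playful jabs are tolerated among friends, not strangers.
        return relationship == "friends"
    return True

print(allow_post("Shut up", "friends"))    # tolerated as banter
print(allow_post("Shut up", "strangers"))  # flagged between strangers
```

The design point is that moderation is a function of both content and context, which is why a single global blocklist is too blunt an instrument.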
Mark Pilipczuk believes this should be done but is less optimistic about automation. Relying on just algorithms to remove uncivil behavior, offensive content and outright lies might not be enough. Humans usually find ways to skirt algorithms. He believes Facebook will need to add quality human editors — not just rely on poorly-trained, low-paid contractors. It also will need to clean up advertising before it runs — not after the money is counted. He views that as a social good, too, as it will mean creating many good jobs (which Facebook can easily afford).
Katherine Milkman, a behavioral economist and associate professor at the Wharton School, believes there is even greater opportunity: Facebook can embrace the goal of improving its users’ lives — and prioritize it before its own interests.
Milkman noted that past research has shown that what people see in their newsfeed influences their emotions. Even Facebook’s own researchers acknowledge that how their users use social media can affect them positively or negatively. Other studies have shown that social-network-enabled experiences can increase patient engagement, help prevent chronic diseases and boost voting turnout.
Many researchers, including Milkman and her colleagues at the Center for Health Incentives and Behavioral Economics, are trying to understand how social networks can drive positive behavior change. None can match the access, tools and resources available to Facebook, however.
Milkman’s advice to Facebook is to use the best behavioral science research to develop hypotheses about how it can help users make better decisions and improve personal outcomes. It could then use rigorous scientific methods, like A/B testing, to evaluate what truly creates benefits for users and what doesn’t.
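The kind of A/B evaluation Milkman describes boils down to a randomized comparison between two feed variants on some well-being metric. A minimal sketch, using entirely made-up numbers and a standard two-proportion z-test (not any metric or result from Facebook), might look like this:

```python
import math

def two_proportion_ztest(success_a, n_a, success_b, n_b):
    """Two-sided z-test for a difference between two proportions."""
    p_a = success_a / n_a
    p_b = success_b / n_b
    # Pooled proportion under the null hypothesis of no difference.
    pooled = (success_a + success_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Normal-approximation two-sided p-value via the error function.
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical experiment: fraction of users reporting improved mood
# under the control feed (A) vs. a variant emphasizing close friends (B).
z, p = two_proportion_ztest(success_a=480, n_a=1000, success_b=540, n_b=1000)
print(f"z = {z:.2f}, p = {p:.4f}")
```

At Facebook’s scale even tiny effects reach statistical significance, which is why Milkman’s emphasis falls on whether the effect is genuinely beneficial, not merely detectable.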
What’s more, she offered, Facebook should be transparent about such efforts. It can help the whole world learn how social media change behavior and in what ways. Sharing this knowledge will make it easier for all to understand the impact Facebook is having. It will generate new scientific insights. And, it will allow others to follow any successful strategies Facebook finds for improving outcomes. While such transparency might not maximize profits, it would optimize for the greater good.
Shaping Facebook to take responsibility, to enforce civility and to put others before self might begin to lay the foundation for addressing the warnings that Alan Kay raised. Kay described the aspiration behind his own pioneering inventions this way:
Our argument for doing the inventions is that they would be the next big step after the printing press — and just like the problems and upheavals catalyzed by the press, there was a chance to minimize the damage and maximize the new kinds and ranges of reach. Our aim was to realize and put forth even stronger learning environments for the benefit of the human race.
Similarly, Mark Zuckerberg needs to own the problems that he is creating. He should embrace the responsibility and opportunity to find new, powerful ideas and “put forth even stronger learning environments” that “minimize the damage and maximize the new kinds and ranges of reach” of Facebook.
This might indeed cost Facebook in the short term (which Zuckerberg has said he is willing to bear), but it is the point of view that could create the greatest good.
“Point of view is worth 80 IQ points,” Alan Kay famously observed. But, as he recently reminded me, “the sign can be positive or negative.”
Chunka Mui writes, speaks and advises on the digital future. He is a best-selling author of four books on technology and innovation. This article is updated from one originally published at Forbes.