Reining In Social Media
Concerns grow about the platforms that dominate online speech
Social media has struck many nerves. Once an online extension of dating sites, friendly chats, database searches, and public relations campaigns, it has matured into a creature with far-reaching tentacles.
Our homes, our workplaces, and our society are now connected online in unprecedented, and some say dangerous, ways. MIT IDE Director Sinan Aral has written extensively about the huge impact of social media in his book, The Hype Machine. In a recent interview he noted that “we’re witnessing, in real time, society grappling with the emergence of social media as a very powerful force.”
As the past few years have shown, social media platforms are not simple tools of engagement; they are huge business empires influencing our ideas, our actions, and our politics. At the April 22 MIT Social Media Summit hosted by Aral and the MIT IDE, these trends will be dissected and explored by business, government, and academic experts. (See the full agenda here.)
MIT IDE content manager, Paula Klein, caught up with Yaël Eisenstat, one of many distinguished panelists speaking at the Summit, for a preview of her views. Yaël discussed how to align technology with democratic principles in an era of pervasive social media. She has had a long career at the intersection of tech, democracy, and policy and “strives to bridge the divide between government and tech, to help foster a healthier information ecosystem.” She is currently a Future of Democracy Fellow at the Berggruen Institute and a Researcher-in-Residence at Betalab. She has served as a CIA officer, a White House advisor, the Global Head of Elections Integrity Operations for political advertising at Facebook, a diplomat, and the head of a global risk firm.
Here is a brief summary of her conversation with MIT IDE.
Q: Let’s start with free speech since that is the panel you will be joining at the Summit. What is the state of free speech in 2021 America?
A: Let’s not forget the basics: The First Amendment applies to what the government can and cannot do; it doesn’t apply to what private companies, including social media companies, can do with speech. However, if you look at Facebook, the biggest player, it has essentially taken over a major part of the modern version of the public square, yet it is not required to be a good steward of that public space. Is that what we, as a democracy, want? Should our public debate be run by private interests, without any guardrails or accountability? I don’t think so.
While content moderation — what to leave up or take down — is extremely important, the bigger issue to me is the tools the platform companies are using to decide what to do with the speech.
Q: Do you mean recommendation engines? What are the dangers?
A: I am concerned about what companies like Facebook, Twitter, and YouTube are doing with the speech taking place on their sites, not just the fact that it’s flowing through their pipes. They are not neutral conduits. Their recommendation engines and algorithms are deciding what speech to amplify, and what to give less attention to. Their algorithms and recommendation engines steer us into groups and conversations — factual or not. They’re selling tools to advertisers — which can include political operatives — that target us with certain types of speech. And their leaders make intentional, political decisions on which policies to enforce when, and whose voices are important for their business needs.
Facebook and other platforms are highly profitable businesses that, under our current system, have a fiduciary responsibility to maximize profits, not to protect democracy.
If we want a healthier information ecosystem that lets democracy thrive, we, as a society, have to decide what it will look like. That doesn’t mean we all have to agree, or to stop debates and disagreement. We need public spaces where debates can take place about what we want for our society, but governments need to protect those spaces. Guardrails are necessary.
Q: What can the public do about this?
A: I believe at this point, especially following the insurrection at the U.S. Capitol, that much of the American public understands, even if at a basic level, that something is not right with how social media is operating and influencing their democracy. Many people in other countries understood this long ago.
As the public is demanding more and more that government step up and do something, there’s lots of intentional noise right now to make people think that if they’re not a coder or engineer they can’t take part in the conversation; that technologists know best. But as citizens, we expect government to step in to protect us from dangerous business practices and exploitation. The public does not need to have the technical solutions, but it does have to signal to Congress that this should be a priority.
Q: What do lawmakers need to do next?
A: There is no legislative silver bullet that will suddenly fix the issues we are speaking about. There are a number of areas that require serious attention and thoughtful legislation and regulation, including Section 230 [of the Communications Decency Act], anti-trust, and data privacy.
When it comes to Section 230, the issue as I see it is not merely about free speech, but about the tools that companies create and use to decide what to do with that speech.
In my recent Harvard Business Review article, How to Hold Social Media Accountable for Undermining Democracy, I wrote: “I want us to hold the companies accountable not for the fact that someone posts misinformation or extreme rhetoric, but for how their recommendation engines spread it, how their algorithms steer people toward it, and how their tools are used to target people with it.”
Section 230 needs to be updated to reflect today’s environment, not scrapped altogether. This includes really looking at who should benefit from the immunities granted by Section 230 and whom we consider to be neutral intermediaries. We also need to look at whether courts today are interpreting it too broadly, using it to throw out cases that are not at all about free speech.
One of the most important debates of our time is around who should govern the internet.
I have long advocated that governments must define responsibility for the harms caused by these business models, but should not be in the business of regulating actual speech.
The techno-utopian ideal that the internet is the great equalizing, democratic force has not turned out to be true, at least as it is currently run. The power over the majority of online speech is concentrated in a few hands, and that also has to change, which will likely be addressed through anti-monopoly or anti-trust actions.
It is the government’s job to protect citizens, especially the most vulnerable. There will undoubtedly be tradeoffs with any decisions on how to regulate some of these companies, and it needs to be handled thoughtfully. But we can’t continue with the status quo when so much is at stake.