Are Social Media Stocks Toxic Assets?
--
By Susan Ozawa Perez, PhD
As Twitter faces well-deserved scrutiny over its recent decision to continue to allow Liz Harrington to tweet the words of Donald Trump, even though he has been banned from the platform for repeated violations of the company’s content standards, socially responsible shareholders are petitioning not only these social media companies for proper adjudication but also the federal government to fill the void and ensure that disinformation and incitement to violence are not allowed to flourish in the absence of common-sense regulation.
From propagated disinformation on public health, to hateful, incendiary speech that incites violence, to unmitigated illicit activity and abuse of children, social media’s content moderation problem continues to pose egregious and existential risks to our collective health and safety, the rule of law, and democracy itself.
Many socially responsible firms and asset managers do not hold social media companies in their portfolios, including Facebook, Twitter, and Alphabet, the parent company of YouTube. Others hold them and use their rights as shareholders to advocate for corporate responsibility through engagement and by filing shareholder resolutions.
The problems associated with these companies begin in the regulatory vacuum in which they operate, with insufficient coverage under FCC, FEC, and antitrust law and enforcement, due in part to their status as internet platforms (rather than news organizations or media companies) and in part to their sheer power and massive litigation and lobbying capacities.
As shareholders, we do not advise companies on ordinary business matters; we can only raise public policy concerns with the companies we invest in, identify risks to those companies, and urge their boards’ greater attention. We can also raise the regulatory, litigation, and reputational risks these companies face as a result of their content moderation failures, which constitute an unmanageable and egregious public nuisance.
But as stakeholders, we can all push for common-sense regulation.
We believe the federal government must now set the terms for social media companies. We agree with many in law enforcement that compliance with existing law is primary, and that adherence to the letter and spirit of emerging international law in Europe is the only acceptable course of action to avoid continued loss of life due to disinformation about the efficacy and safety of COVID-19 vaccines, and continued loss of faith in our democratic structures through pernicious promotion of the lie that the presidential election involved widespread fraud and was stolen.
After the insurrection on January 6th, a group of shareholders wrote to these social media companies demanding that their boards revisit their fiduciary duty and do everything in their power to remove incendiary speech and disinformation related to the election and to freeze their promotional algorithms. Last year, coalitions of Alphabet, Twitter, and Facebook shareholders also asked these companies to consider implementing a suite of solutions to these problems.
The first upstream recommendation was creating a robust, trackable authentication system to register and identify users. Not only is identification a common practice everywhere from public libraries and swimming pools to grocery store rewards programs, it is also essential for cross-referencing user lists with registered sex offender lists, for example, and for prosecuting crimes committed in broad daylight on these platforms. This standard practice can also address both foreign and domestic disinformation campaigns. One revelation regarding the anti-vax material circulated online, stymieing re-openings and our collective health and economic prospects, is the finding by the nonprofit Center for Countering Digital Hate (CCDH) that 65% of vaccine disinformation comes from just 12 social media accounts.
Another suite of shareholder recommendations involves greater transparency measures. Investors called for independent audits of content moderation policies and practices, including access to data, greater transparency of the moderation process, and an appeals framework adjudicated by a panel of human and civil rights experts. We have also pushed for bringing monitoring and enforcement of community standards in-house while exponentially increasing the budgets of those divisions.
A number of recommendations involve common-sense, proactive ways to protect minors and vulnerable populations, such as identifying pernicious actors by creating typologies of traffickers and spreaders of misinformation and other toxic content, and developing independently audited protocols for monitoring users with similar data usage patterns.
Finally, recognizing that these companies’ business models are misaligned with the public interest, shareholders have called not only for harm reduction in the use of algorithms that have promoted groups and content trafficking in illegal, violent, hateful, inflammatory, or otherwise pernicious material and activities, but also for transitioning revenue generation to a user-pays model or other models in preparation for global harmonization of data standards along the lines of the General Data Protection Regulation framework.
Shareholders asked these companies to partner with nonprofits, human and civil rights and privacy experts, law enforcement, and legislators, including UNICEF, Change the Terms, Color of Change, the Alliance to Counter Crime Online (ACCO), the WeProtect Global Alliance, and ECPAT. We are now urging our federal legislators to take leadership in these partnerships and quickly craft comprehensive legislation regulating social media companies.
As I said at Facebook’s shareholder meeting in 2020:
“[P]owerful political forces have used the platform to consolidate power with hateful and exclusionary rhetoric, damaging the social fabric of society, provoking violence and unrest and up-ending close elections.”
Image: Toxic Social Waste, by the author, digital, 2020.
Lax content moderation “gives politicians free rein to libel and slander others, undermine our elections, push conspiracy theories and promote xenophobic propaganda while the company profits.”
And while titans of these social media platforms like Jack Dorsey and Mark Zuckerberg are now charged with deciding the fate of entire ethnic groups, the results of elections, and whether we will ever properly emerge from this pandemic, we believe the job lies with federal regulation: imposing a robust hate speech law, extending and strengthening the FCC rules that govern media companies to cover social media companies, and forcing social media companies to change their business models and prohibit the use of algorithms to promote content.
Once common-sense regulation of social media is law, socially responsible investors can finally invest in social media companies without having to engage them on these issues year after year. We have every reason to believe these companies can creatively remain profitable if they all compete on a level playing field set by federal regulators.