Starting Somewhere

Our society, the internet and Big Tech’s responsibilities

T.H. Linamen
The Startup
5 min read · Feb 20, 2021

Big Tech’s shortcomings are a cornerstone of today’s media coverage. Too often, however, the meta-conversation focuses on the character of individual companies instead of the problems the behemoths collectively embody. Their size and influence uncover issues that must be addressed systemically, not as the shortcomings of any one firm. Societal ills such as misinformation, hate speech and conspiracy theories are symptoms. The unprecedented concentration of power in the hands of tech firms is the illness.

I have neither the depth nor the expertise to opine on the privacy and competition dimensions of the current technology landscape. I do, however, work in Trust and Safety for a FAANG firm, and I can confidently state that these companies dominate their respective sectors and wield tremendous influence in society. That much is self-evident. I can also affirm that it is extraordinarily difficult to curb toxicity, extremism and online harm while attempting to reconcile the principles of liberty and safety.

The problems may be difficult, but there is work to be done inside tech’s ivory tower and beyond its walls. Two primary challenges stand out in this realm: algorithmic distribution and content moderation. The industry is responsible for controlling the effects created by its platforms, but there is burden-sharing to be done by government and society. Let’s briefly examine each topic in the broadest of terms.

Algorithmic Distribution

Algorithms are mystified in popular culture as uncontrollable artificial intelligence run amok. In reality, humans write (and understand) the equations that govern content delivery on these platforms. They are certainly complex. They are not, however, beyond the reasonable control of their parent organizations.

An algorithm is optimized for a given objective. Roughly speaking, today’s social media delivery mechanisms are finely tuned to yield the maximum return on investment for their inputs. That return may come in the form of engagement, monetization or lead generation, but whatever the measure, these are incredibly efficient profit-generating machines.
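
To make that concrete, here is a minimal sketch of a feed ranker that scores posts purely by predicted engagement. The weights and signal names are invented for illustration; no platform’s actual formula is being described.

```python
# Toy feed ranker: orders candidate posts purely by predicted
# engagement value. Weights and signal names are hypothetical.

def engagement_score(post):
    """Estimate the expected engagement value of showing this post."""
    return (
        0.5 * post["predicted_clicks"]
        + 0.3 * post["predicted_shares"]
        + 0.2 * post["predicted_watch_time"]
    )

def rank_feed(candidate_posts):
    """Return the feed sorted by expected engagement, highest first."""
    return sorted(candidate_posts, key=engagement_score, reverse=True)
```

Nothing in that objective asks whether the engagement is healthy; a post that provokes outrage scores exactly as well as one that informs.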

Herein lies the problem. Algorithmic distribution of content drives product usage and the accompanying ad spend, but it has difficulty differentiating between positive and negative engagement. Profit will be inseparable from amplifying toxicity until models learn to distinguish between these two flavors. To be fair, social media has made strides in this domain in recent years, but it hasn’t been enough. One look at the information environment around the 2020 US election speaks volumes.

We cannot simply wait for the technology to evolve; there is too much at stake. Instead, tech giants must admit that, in the broadest of terms, they are choosing between profit and responsibility. Fortunately for the Facebooks and Twitters of the industry, they are enormously profitable. They must decide that social responsibility matters more to the long-term success of their firms than next quarter’s earnings target. This is a tall ask, and I’m not optimistic.

If they do make the right decision, there are two broad levers they can pull to improve the information environment. First, firms must be more aggressive in down-ranking content that has been identified as harmful. This applies to both ads and organic posts: much more can be done to limit the spread of bad content once it has been adjudicated. Much harmful content cannot be classified before the damage is done, but content that has been adjudicated must be aggressively disincentivized in the reward calculations of distribution algorithms. Second, tech firms must go even further in their transparency efforts. They must communicate the tradeoffs they are bearing in order to clean up their platforms. Any lever built to limit negative externalities carries an accompanying cost, and companies should roll those costs up into consistent, macro-level stories about their efforts.
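
As a sketch of the first lever, an adjudicated-harm label could be folded into the toy ranker above as just another term in the reward calculation. The labels and penalty values here are hypothetical.

```python
# Hypothetical demotion step, building on engagement_score above.
# Once a post has been adjudicated, its score is multiplied by a
# steep penalty so the algorithm stops rewarding its spread.

DEMOTION_FACTORS = {
    "none": 1.0,        # no finding against the post
    "borderline": 0.3,  # adjudicated as borderline or sensitive
    "harmful": 0.01,    # adjudicated as harmful: near-total demotion
}

def adjusted_score(post):
    """Engagement score discounted by any adjudicated-harm finding."""
    label = post.get("harm_label", "none")
    return engagement_score(post) * DEMOTION_FACTORS.get(label, 1.0)
```

The design point is that demotion competes directly with the profit objective, which is exactly the tradeoff firms should be transparent about.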

Tech must do all of this while continuing to invest heavily in machine learning that can better classify positive and negative engagement. Simultaneously, firms have to make difficult product decisions that have real revenue impacts. Fact-checking and reactive enforcement are simply band-aids; the difficult decisions must be made systemically in order to limit damage on platforms operating at such scale.
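
To show what “classifying engagement” might mean in the most stripped-down terms, here is a toy classifier over a few invented behavioral signals. A production system would train on vastly richer features; this is a sketch of the shape of the problem, not of anyone’s model.

```python
# Toy classifier separating "positive" from "negative" engagement.
# Features and labels are invented for illustration.
from sklearn.linear_model import LogisticRegression

# Each row: [dwell_seconds, reply_ratio, report_rate, angry_react_rate]
X = [
    [45.0, 0.10, 0.00, 0.02],  # long, quiet read  -> positive
    [ 3.0, 0.60, 0.08, 0.40],  # drive-by pile-on  -> negative
    [30.0, 0.05, 0.01, 0.05],  # positive
    [ 2.0, 0.70, 0.12, 0.55],  # negative
]
y = [1, 0, 1, 0]  # 1 = positive engagement, 0 = negative

model = LogisticRegression().fit(X, y)
print(model.predict([[4.0, 0.65, 0.10, 0.50]]))  # -> likely [0]
```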

Content Moderation

The second challenge is arguably more difficult to solve. Moderating online content means balancing safety and freedom of expression in the midst of intense technological challenges. Optimizing one side of this equation often comes at the expense of the other. Drawing the line between community protection and free speech is difficult but necessary. Luckily for Big Tech, it needn’t bear this cross alone: government and civil society should play a part in the conversation.

Free speech is not a universal concept. What is permissible in Canada is not tolerated in Thailand, and it would be naive to espouse a universal set of values for a global community. Certainly, there are clear areas of agreement, but the challenge is always in the middle. Unlike algorithmic distribution (which is a product choice in the pursuit of profit), content moderation makes individual decisions on whether pieces of user-generated material are allowed on the platform at all. Removing terrorist praise and support is an easy decision; adjudicating hate speech is far more nuanced and context-dependent.
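
One way to picture the jurisdictional problem is as a policy lookup: the same piece of content can face different rules depending on where it is viewed. The regions and categories below are purely illustrative; real policy taxonomies are vastly larger.

```python
# Illustrative per-jurisdiction moderation table. Real taxonomies are
# maintained by large legal and policy teams; this only shows the shape.

POLICY = {
    "global": {"terrorist_praise": "remove"},
    "DE": {"holocaust_denial": "remove"},  # illegal in Germany
    "TH": {"lese_majeste": "remove"},      # illegal in Thailand
}

def action_for(category, viewer_region):
    """Moderation action for a content category in a given region."""
    regional = POLICY.get(viewer_region, {})
    return regional.get(category) or POLICY["global"].get(category, "allow")

print(action_for("lese_majeste", "TH"))  # -> remove
print(action_for("lese_majeste", "CA"))  # -> allow
```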

Partnerships are required in the pursuit of safe communities online. Because the internet has no borders, a single set of global rules will inevitably run afoul of traditionally governmental domains like speech protection. As tech firms grow increasingly powerful, the decisions taken on content have broader ramifications. Deciding how to treat a world leader’s content carries enormous weight when that leader draws much of their power from social media.

The complexity in this space will undoubtedly require nuance and collaboration. However, firms making these decisions of their own accord becomes less tenable as their influence grows. Smart regulation is required to provide guardrails for private companies in domains that governments typically regulate. Tech companies cannot decide, with impunity, what is worthy of hosting on their sites when the ramifications are so profound. I won’t pretend to understand the universe of potential solutions in the regulatory realm, but facilitating this process cannot remain the sole purview of private corporations.

Starting Somewhere

Algorithmic distribution and content moderation are just two of the many complex domains tech giants navigate, but they underpin many of the societal challenges embodied by the industry. They are not impossible to solve, merely exceedingly difficult. Solving them will require real sacrifice, made in the faith that long-term stability is worth forgoing a degree of short-term profit. It will also require serious partnerships between industry, government and civil society to write the rules for online speech in the 21st century.

We have passed the era in which Section 230 sufficed as governance for the internet. Continued reliance on that statute will only lead to further concentration of power in tech’s hands. These issues are sure to unfold over the next several years in the aftermath of Trump’s presidency. If anything, it is clear that self-governance and good intentions on the part of Big Tech have run their course.
