Chapter 7: Antitrust and Consumer Protection in the Age of Artificial Intelligence

Michael Fischer
Stanford Law: Regulating AI
Oct 28, 2020

By Mikey Fischer and Shreyas Parab

Artificial Intelligence as the Modern Day Railroad

Silicon Valley was built on the shoulders of many giants. From young rebels building products in garages to the innovators at Fairchild building the chips that make up the fundamentals of computing, Silicon Valley credits its success to many of these builders. Tracing the origins of Silicon Valley to its earliest innovator, however, begins somewhere with the story of Leland Stanford. Although some know Leland Stanford only for the university that bears the family name, Stanford was one of the progenitors of the transcontinental railroad as president of the Central Pacific Railroad. It is important to note the complex and controversial history of Stanford's business practices and the essential contributions of the Asian laborers who built the railroads. Although much emphasis is put on Leland Stanford himself, in this chapter we will consider the more important part of the story: the railroads. The railroad that connected the East and the West was transformative, linking ideas, people, and commerce across the country in an unprecedented way. But the railroad was also a manifestation of power in every sense of the word. The railroads dictated which local markets saw a flourishing of tourism and trade, how public funding subsidized transportation, what level of connectivity a place had with the rest of the country, and more. Railroads became the subject of heavy scrutiny once the public realized just how powerful a role this infrastructure would play in the development of the industrializing country. The products of Carnegie and Rockefeller depended on railroad infrastructure, and the railroads went from being viewed as an innovation to being viewed as essential infrastructure that the country needed to survive.

When the government started regulating railroads, people were up in arms, afraid that government involvement in these private companies would render them powerless and disincentivized to innovate. They believed that by placing restrictions on the railroad companies, the government was tipping the market forces at play to favor some and crush others. When the railways were federalized during World War I, many worried about the drastic action of seizing private property, and when the railways were restored to their owners, it raised questions about how necessary railways were for a functioning and strong America. Today, although most railways in America are private, many continue to receive heavy government subsidies, which draws much controversy. Just as the railways continue to play a role in America a hundred years later, the longevity of these decisions should not be understated. Decisions to regulate, or not to, set precedents that last far beyond a single person's lifetime.

In a similar way, the internet is to the 21st century what railroads were to the 20th. Just like the railroads, the internet is a critical, powerful infrastructure that connects the world's ideas and commerce. Unlike the railroads, however, the internet comes with an immense power of decentralization that makes it hard to concentrate control. The internet was built to be a tool for everyone, with no single body ruling over it. Although powerful private companies exert a disproportionate amount of control over the internet, it still relies on free market principles. Artificial intelligence, on the other hand, is centralized and monopolistic in nature. Building AI tools and systems requires data, and whoever has the best dataset has a disproportionate advantage over any new player. The whole point of machine learning is that the model improves rapidly as it accumulates data, and a small data advantage or parameter adjustment can increase its efficiency and effectiveness by orders of magnitude. Regulating artificial intelligence to ensure a fair, level playing field therefore becomes increasingly important: small advantages in AI compound into an unbeatable force that is incredibly hard to overthrow. Just as the government once regulated railroads to protect consumers and keep markets fair and free of monopolistic or trust-building activities, in the modern day it will have to tackle the incredibly hard task of regulating artificial intelligence to create a fair marketplace.

The Basics of Antitrust Law in History and Today

Although the Sherman Antitrust Act of 1890 is considered the first federal antitrust legislation in American history, a shark without teeth is no different than a goldfish. Sherman's bill, motivated by a myriad of reasons, was neither specific enough nor strong enough to delineate when companies were in violation of it and when their conduct was a natural part of business, giving courts a high degree of latitude to interpret the act and decide how to enforce it.

The Sherman Antitrust Act provides that:

“An Act to protect trade and commerce against unlawful restraints and monopolies… Be it enacted, etc., that every contract, combination in the form of trust or otherwise, or conspiracy in restraint of trade or commerce among the several states or with foreign nations is hereby declared to be illegal.”

The act passed 51–1 in the Senate and unanimously in the House, and it vested the power to dissolve trusts in the federal government. Unsurprisingly, given its lack of clarity, the government's early attempts to enforce the Act did not have targeted results. Ironically enough, the Act was initially used against labor unions, which were deemed illegal monopolies on labor.

In fact, the Sherman Act was invoked in cases as simple as a contract between two interstate companies. Courts realized quite quickly that nearly any contract technically "restrains trade." Therefore, to test whether a specific trade restraint would be considered a trust or monopoly under federal antitrust law, a plaintiff would have to prove one of two things: either the restraint breaks a "per se rule" or it violates the "rule of reason." These two frameworks for applying antitrust principles dominated the early 20th century and continue to be the basis for much of antitrust law today.

"Per se rules" cover conduct that is so inherently anticompetitive and damaging to the market that mere proof the conduct took place is enough to establish a violation of the Sherman Act. The plaintiff's only burden is to prove that conduct such as a horizontal agreement to fix prices, bid rigging among competitors, or a market-allocation arrangement actually occurred. The "per se rules" represent the strongest cases, offering a clear-cut view of monopolistic behavior that the courts have a much easier time dealing with.

The "rule of reason," however, serves as a guide for a more ambiguous region, where a business's conduct does not fall directly under the "per se rules" but its effect on the market is the same as a monopoly's: restraining trade or limiting competition. The "rule of reason" test requires the plaintiff to present a deep analysis of the relevant product and the scope of the market, the degree to which the defendant controls that market, and the existence of anticompetitive effects, weighing harms and benefits to the consumer. The problem with this interpretation of the Sherman Antitrust Act was that it offered the courts more latitude than most laws. Instead of interpreting only what was in the law, Chief Justice White, writing in Standard Oil Co. v. United States (1911), also took into consideration the legislative intent behind it. At no point does the Sherman Act actually differentiate between reasonable and unreasonable restraints, nor does it set standards for what constitutes a monopoly. In comparable legislation, Congress has set specific, quantitative boundaries by which an act should be understood, but here it did not, so the courts created their own. In his majority opinion, White wrote that the best indicator of whether the Sherman Act should apply is the effect on competition.

Explicitly put, the "rule of reason" would classify: "all contracts or acts which were unreasonably restrictive of competitive conditions, either from the nature or character of the contract or act, or where the surrounding circumstances were such as to justify the conclusion that they had not been entered into or performed with the legitimate purpose of reasonably forwarding personal interest and developing trade, but, on the contrary were of such a character as to give rise to the inference . . . that they had been entered into or done with the intent to do wrong to the general public and to limit the right of individuals, thus restraining the free flow of commerce and tending to bring about the evils, such as enhancement of prices, which were considered to be against public policy."

The "rule of reason" was extremely controversial at the time across the political spectrum. The decision prompted 78-year-old Justice John Marshall Harlan to write perhaps one of the most strongly worded opinions in the Court's history. He accused White of misrepresenting previous Supreme Court cases to fit the rule of reason and of using his power to infringe upon the legislative powers of Congress, essentially engaging in "judicial legislation": creating law instead of interpreting it.

It was not just Justice Harlan who was outraged at the decision to adopt the "rule of reason." The business community worried that if the "rule of reason" became the de facto approach to enforcing the Sherman Antitrust Act, any evidence of an adverse effect on competition could invite in the trust busters. The Progressives were upset because they saw the "rule of reason" as easily manipulated by the political and economic leanings of the court, which they believed could lead to severe under-enforcement of the antitrust laws. Regardless of political affiliation, the "rule of reason" caused controversy because of its complexity and the deep expertise in business and market dynamics it demanded of the Justices. Critics of the way antitrust cases were being handled (including several judges in lower courts) rallied Congress to pass more comprehensive legislation to supplement the Sherman Antitrust Act.

Indeed, after years of additional cases following the decision in Standard Oil, Congress passed two more hallmark pieces of antitrust legislation: the Federal Trade Commission Act and the Clayton Act. The Federal Trade Commission Act acts essentially as a parent class of legislation: any violation of the Sherman Act also violates the FTC Act. It created an agency, the Federal Trade Commission, which is the only body that can bring cases under the FTC Act, standardizing the process so that antitrust charges go through the agency first rather than overloading the courts. It also reaches conduct that may harm competition but does not neatly fit the categories formally prohibited by the Sherman Antitrust Act. The Clayton Act addresses mergers and acquisitions that might restrict trade, lessen competition, or create a monopoly. Even today companies are continually investigated for anticompetitive practices, and because of the actions of the Supreme Court and Congress, the guidelines and frameworks to address those questions exist.

The largest challenge in antitrust law is its predominant focus on only a few metrics of consumer welfare. For the most part, price is the driving measure of how protected the consumer is. Moreover, it is an incredibly expensive and time-consuming process to prove market coordination in an age where there are millions of ways to communicate intention or common interests. When people accuse Amazon of being a monopoly, it becomes hard to pinpoint a specific market it dominates because of its incredibly diverse business interests and its ability to constantly drive down prices for customers. Customers love Amazon and constantly reap benefits from its business practices, from free two-day shipping to a never-ending selection of content.

At its core, antitrust is a crucial mechanism for controlling market dynamics, because monopolies are generally harmful in the long run: they raise consumer prices, cannibalize competitors, and accumulate unprecedented political power. With the modern-day trust, however, the price charged to the customer does not reflect many important factors. For example, environmentalists have long argued that current pricing schemes do not reflect the societal cost that products impose on the environment. In a world where individuals' privacy and data have become a product in their own right, many argue that novel business models like those of Facebook and Google are not measured against the appropriate metrics of consumer welfare, especially since most of their core services are completely free.

The Justice Department, the FTC, and the states are tasked with the behemoth job of monitoring monopolistic behavior in America and are constantly scrutinizing the practices of private enterprises. It is important to appreciate the scale of what the Antitrust Division of the Justice Department undertakes. It is not an easy job, considering the amount of purported trust activity that occurs on any given day. Investigations into these practices can take months upon months to complete and are not guaranteed to meet the vague "rule of reason" boundaries that often dictate these cases.

Case Study: United States v. Airline Tariff Publishing Co.

If you have ever researched prices for a flight, you may have encountered the dizzyingly complex and intricate pricing models of airline companies. Airlines have a complicated system for setting fares based on a variety of factors, including the number of booked passengers, the number of times you have searched for flights, and the popularity of the route during the specified window. The Airline Tariff Publishing Company (ATPCO) was formed as a spin-off from the Air Transport Association of America, which had been formed in 1945, to publish the cost of airline tickets across the country. In the 1980s, ATPCO started developing a database of fares across its partner organizations, which included dozens of airlines from American Airlines to Delta to United. ATPCO is partially owned by a combination of fifteen airlines, giving them a centralized, connected interface for airfares. ATPCO was, in essence, the centralized clearinghouse for fare information across the industry and dominated that market. Of course, this centralized, highly automated process of collecting standard rates across the industry made it possible for airlines to start colluding on prices, fixing them algorithmically without any explicit communication or collaboration between competing firms.

In 1992, the Antitrust Division of the United States Department of Justice charged that ATPCO had become a mechanism by which airlines could artificially raise prices and limit competition in the airline industry. The DoJ had gathered a large number of instances in which carriers, through public announcements of price changes, went back and forth until their fares not only matched but changed on the same day. They even had airline internal reports that explicitly cited competitors' pricing as the motivation to change or hold prices. The DoJ had built a strong case, but the airlines countered that none of these pricing decisions involved direct communication between the airlines. They argued that although their pricing rationale depended on competitors' behavior, this was still independent businesses making decisions based on changing market dynamics.

The key difference between this case and those before it was clear: it was through this mediated, highly automated software that airlines could communicate and fix prices without people explicitly coordinating. The airlines had begun abstracting price, and the instantaneous nature of ATPCO allowed firms to constantly adjust their pricing information while having direct access to every other firm's.
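The announce-and-match behavior the DoJ documented can be sketched as a toy signaling rule. The numbers and logic below are entirely hypothetical and greatly simplified from the actual ATPCO system: an airline pre-announces a fare increase and keeps it only if every rival matches before the effective date; otherwise it quietly withdraws the increase and is never undercut for long.

```python
# Toy sketch of fare signaling through a shared clearinghouse.
# Hypothetical and greatly simplified; not the actual ATPCO mechanism.

def resolve_announcement(announced, current, rival_fares):
    """Return the announcing airline's final fare.

    The increase sticks only if every rival matches it; otherwise
    the airline withdraws the announcement and keeps its old fare.
    """
    if all(r >= announced for r in rival_fares):
        return announced  # everyone matched: industry-wide increase
    return current        # a rival held out: withdraw, no one is undercut

# All rivals match the announced $250 fare, so it takes effect:
print(resolve_announcement(250, 200, [250, 255]))  # 250
# One rival declines, so the announcing airline stays at $200:
print(resolve_announcement(250, 200, [250, 199]))  # 200
```

Repeated rounds of announce-and-match like this can ratchet fares upward with no meeting, phone call, or memo ever passing between the firms.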

Unfortunately, the ATPCO case never made it to trial; the DoJ settled with the airlines, who agreed not to exchange information beyond basic fare data, not to link fares together, and not to pre-announce price increases. This made colluding through ATPCO harder, but still not impossible: the settlement carved out scenarios that did not violate its terms but could largely be seen as price collusion. The case was pivotal not because of the precedent it set but because of the questions it raised about what collusion and monopolistic behavior look like in an age when information is exchanged instantaneously and systems can react algorithmically to price changes in the market. It remains an important subject of study because it shaped the frameworks by which similar cases would be brought by the Department of Justice, not just in the airline industry but across many different markets.

Case Study: United States v. David Topkins

After four arduous years of prosecution, the US Department of Justice Antitrust Division successfully charged Daniel William Aston in connection with a collusive price-fixing scheme in the sale of posters on an online marketplace. The Department of Justice touted this win as the first time it had successfully prosecuted anticompetitive conduct in the online markets that have come to dominate American commerce. In the wake of any innovation or fundamental shift in commerce, regulation takes time to form, and the Department of Justice emphasized it would use this case as the first domino in preventing anticompetitive business practices on online marketplaces.

Aston and Topkins sold posters on Amazon Marketplace, a service that allows sellers to connect with consumers and transact directly on the platform. Amazon Marketplace gives sellers total control over price and shipping procedures. It ranks products based on relevance to the user and a number of undisclosed characteristics, and higher-ranked products are more likely to be bought by customers.

To boost sales, Aston and Topkins adopted a computer-based pricing algorithm that collected competitors' price data and set all of their prices just below it, according to fixed rules applied to all of their products. Although this benefited the end customer in some ways by lowering prices, it reduced competition in the market: the separate businesses that all used this service now experienced less pressure to undercut one another. Traditionally, to meet a competitor's lower price, a business has to offer the same or higher quality product at a lower cost. Since the colluders knew they would never have to fight against each other, they never had to decrease their prices unnecessarily.
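The mechanics can be sketched with a toy pricing rule. The names, margins, and prices below are hypothetical and not the actual algorithm from the case: every conspirator undercuts the lowest outside price by the same fixed margin, so all conspirators land on the same price and never undercut each other.

```python
# Toy sketch of a shared pricing rule among conspirators.
# Hypothetical; not the actual algorithm from United States v. Topkins.

UNDERCUT = 0.01  # fixed margin agreed on by every conspirator
FLOOR = 5.00     # agreed minimum price

def conspirator_price(outside_prices):
    """Price just below the cheapest NON-conspirator listing."""
    target = round(min(outside_prices) - UNDERCUT, 2)
    return max(target, FLOOR)

outside = [12.99, 11.50, 14.25]  # listings from sellers outside the ring
# Every firm running the same rule posts the identical price:
prices = [conspirator_price(outside) for _ in range(3)]
print(prices)  # [11.49, 11.49, 11.49]: no competition inside the ring
```

No one ever emails a price list; the agreement to run the same rule is itself the collusion.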

This clearly violated the Sherman Antitrust Act: although humans did not come together and decide on the prices themselves, they did act in coordination to systematically fix prices. Although the penalties for Aston and Topkins were quite small, a fine of $20,000 for Topkins and $50,000 for Aston, the case set a precedent for the DOJ to continue scrutinizing the effect of algorithmic pricing on competition in a digital world, specifically algorithmic pricing driven by artificial intelligence.

One important distinction: it was the explicit agreement among competing firms to use the same software that made this a violation of antitrust law. If each firm had individually and independently decided to use the same pricing algorithm and arrived at the same price, that alone would not imply collusion or price fixing.

This case set the tone for antitrust regulation in the virtual, global e-commerce market. Since then we have seen little action by the DOJ on this issue, though there have been recent inquiries and consumer lawsuits against companies like Amazon over antitrust concerns within their marketplaces. Many see algorithmic pricing as an angle by which a case might be brought against Amazon to break it up the way Standard Oil was broken up in the early 20th century.

What concerns arise when a product or service becomes concentrated in one party, both in a traditional business and in an AI business? Traditional monopolies developed by accumulating large swaths of money or infrastructure with which to exert control over others. What is the modern-day railroad being wielded by tech companies? Is it access to large amounts of data, either bought with large amounts of money or collected by a fleet of self-driving cars? Is it access to large server farms with advanced GPUs that only one company can afford, keeping competitors from doing similar kinds of computation?

Existing antitrust regulatory regimes were built in the days of regulating big industries such as oil and farming. That regulation focused largely on broad measures such as the size of the company. Today, though, there are concerns that have little to do with company size. Big data, for example, poses a new question about what it means to have a monopoly.

Collusion Concerns

One of the biggest concerns with AI with respect to antitrust is that it will enable collusion in pricing in a way the existing legal system is not set up to regulate. Collusion is a non-competitive, secret pricing agreement between rivals that tries to disrupt a market's pricing equilibrium. It involves companies that would typically compete with each other but instead conspire to gain a market advantage.

Big data allows companies to build large databases of information, but issues develop when artificial intelligence is designed so that collusion happens automatically: prices change in response to competitors changing theirs. What makes this difficult is that AI programs can communicate independently of human interaction. This is problematic because existing antitrust laws mostly take human intent and action into account. As humans become less involved in pricing, it becomes difficult to prove intent when prices move together, as we saw in US v. Airline Tariff Publishing Co. earlier in the chapter. Typically this looks like one algorithm becoming the influencer, a hub around which the other prices revolve. The other algorithms exist in parallel and always update their prices around the hub's, a pattern known as tacit collusion. Regulation becomes even harder as humans move into the background and technology steps into the foreground.
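This hub-and-follower dynamic can be sketched in a few lines. The prices and update rule below are illustrative inventions, far simpler than any real pricing system: a hub algorithm posts a price, and each follower algorithm closes half the gap between its own price and the hub's every round, so prices converge with no human communication at all.

```python
# Toy simulation of hub-and-spoke tacit coordination.
# Illustrative only: hypothetical prices and a made-up update rule.

HUB_PRICE = 100.0  # price posted by the "hub" algorithm

def follower_step(price, hub=HUB_PRICE):
    """Follower closes half the gap to the hub each round."""
    return price + 0.5 * (hub - price)

followers = [80.0, 95.0, 120.0]  # independent starting prices
for _ in range(10):
    followers = [follower_step(p) for p in followers]

# After 10 rounds the gap between the highest and lowest follower
# has shrunk from $40 to under $0.04: effectively a fixed price.
spread = max(followers) - min(followers)
print(round(spread, 4))
```

No firm ever agrees to anything; each merely reacts to observed prices, yet the market behaves as if a price had been fixed, and there are no emails or memos to subpoena.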

Making things even more difficult is proving that one bot is colluding with another: unlike humans, bots do not generate emails and other communications that can be used as evidence in court.

Merger Concerns

AI poses a number of concerns around mergers as well. When determining whether a company should be allowed to merge with or acquire another, regulators examine two questions. First, will the merger lead to horizontal consolidation, acquiring competitors so that the choice of products becomes limited? Or will it lead to vertical integration, a merger between buyer and seller that creates business synergies and cost savings but may, down the road, limit competition? For example, when Time Warner, a content producer, and Turner, a content distributor, merged, the concern was that Turner would broadcast only Time Warner content even when better options existed, limiting competition. Second, regulators ask whether, as a result of horizontal or vertical integration, the merger will enable collusion or exclusion in a market.

As this applies to AI: when two large companies merge and combine their data, they can gain an unfair advantage over competitors. For example, when Facebook bought Instagram, the combined databases let it build a model of user behavior for advertisers that competitors could not match, which also squeezed out smaller companies. With this increase in market power, it becomes harder for smaller companies to compete: the merged firm starts with a larger dataset, and its data grows faster.

Innovation Concerns

The goal of antitrust policy is to promote a vigorous and competitive marketplace. When rivals spur each other on and compete for consumers' money, new and innovative products emerge. The larger the reward for innovating, the greater the incentive for companies to produce more. Innovation is a matter of trial and error, and the more companies there are trying to solve a problem, the better the resulting solution. As companies grow old, they tend to be good at only one thing and bad at everything else. With these unique specializations, what is easy for one company can be very hard for another. With more skillsets and more companies, there is more innovation on pressing problems.

In the previous section we examined how the big data used in AI raises concerns for consumer protection and antitrust. There are other concerns as well. In the context of innovation, labor and leadership come into play. When companies merge, their people typically realign their viewpoints to match the parent company's. With fewer points of view, there is less competition in the market, and consumers suffer.

Responsiveness to Consumer Concerns

As consumer preferences change, it is ideal if the market changes with them. If only a few companies exist, new and innovative products will fail to emerge to fill consumer demand.

In the context of AI, one worry is that overly large models will be built. Typically, as an AI model becomes larger it loses some of its nuance in distinguishing between cases, especially edge cases. Take, for example, an insurance company using an AI model to predict insurance rates. The company might not notice that scooters are an important business segment deserving a unique product, and instead lump them in with motorcycles. If there were multiple companies, a smaller entrant might come in and offer such a service.

Given all these consumer protection concerns, how do we create a regulatory landscape that leads to a vigorous and competitive market while not imposing so much regulation that we stifle innovation? We have gone over examples of how regulation intersects with AI when it comes to big data, labor, and product point of view. Other potential problems exist as well.

While the big data problem is new and requires us to rethink what it means to be a monopoly, other problems are as old as the antitrust laws themselves. The physical means by which products are produced has always been a concern. AI requires significantly more computational power than any previous technology. Computational power is very expensive: it requires state-of-the-art hardware specially built for AI, infrastructure such as hard drives and networking equipment, physical data centers, and access to a lot of cheap electricity to run everything. Regulators have to make sure that large amounts of computing infrastructure do not become consolidated under one company. If that were to happen, it would be hard or even impossible for others to compete.

Price, Market Power, and Determining Who is Selling What

Antitrust law revolves around price and price fixing. Because of the economics of AI, where there is a high fixed cost to get a system up and running and a low marginal cost to serve each additional customer, many companies opt to give their product away for free and support it through advertising. Thus the people that use the product don't actually pay for it explicitly; instead they pay by viewing ads or by having the company sell the data they produce on the platform to other companies.

When a product is free for consumers, what does it mean for a monopoly to use its power to price that product? For a company such as Google, how do we measure consumer welfare by price when the product is free? Instead of treating the people who use the service as the customer, we need to flip it: the advertisers who purchase ads from Google are the customer, and the users' data is the product. Because there are only a limited number of search engines to advertise on, Google can prop up the cost per ad above what it would be if there were competition in the search engine market. To counter this argument, Google claims it is not in the search-engine advertising business, of which it owns a large part, but rather in the much larger "advertising" business, of which it is only a small percentage and thus not a monopoly.

From the consumer’s perspective, another possible argument would be to explore the idea that in a competitive market, people would expect to be paid for their data. Could there be such a thing as reverse pricing power? Even though consumers are not paid for their data, they SHOULD be paid for their data, and thus companies are exerting pricing power by keeping money from consumers.

Going forward, we need to think more broadly about how consumer welfare is affected not only by price but also by data. The current focus only on price seems to allow companies to skirt the intended consequence of the rules, which is to protect competition and not competitors.

Companies like Amazon use this to their advantage by lowering prices in the short term, but the question remains whether, in the long term, once competitors have gone out of business, Amazon will begin raising its prices. By that point, Amazon will have built out so much infrastructure that it will be impossible for other companies to compete.

In this chapter, we focused on the different mechanisms by which consumers can be protected through antitrust legislation and regulation.

Big Tech and Antitrust in the 21st Century

One important thing to understand about antitrust law is its limitations and focused scope. Antitrust law was not created as a one-size-fits-all cure for unbalanced reserves of power among stakeholders. One of the biggest misconceptions of antitrust is that it exists to protect the small fish from the big sharks in the market. It is quite the opposite: as Professor John Mayo and Professor Mark Whitener write in their Washington Post piece, antitrust law fosters competition. The Supreme Court has explicitly stated that its purpose is "not to protect businesses from the working of the market; it is to protect the public from the failure of the market." Markets are competitive by nature, and there will be those with a superior product or service. If, however, that superior product or service starts negatively impacting customers, and customers lack the ability to choose an alternative because of the mechanisms by which dominance was achieved, that raises strong antitrust concerns.

The flywheel nature of technology, meaning that technology improves itself, which in turn accelerates further improvement and creates exponential growth in quality over time, means that the time it takes to achieve dominance and market share has decreased significantly. For example, it took Walmart almost 70 years to become a global retail leader, whereas Amazon grew to almost 2.5x Walmart’s value in just 26 years. Antitrust law therefore has to be reactive enough to adjust to the rapid pace of innovation while not impeding the great leaps of innovation occurring. It is not an easy task, but regulators are actively working on the regulation of technology companies. Many presidential candidates across the political spectrum have advocated for the breakup of “big tech,” but very little action has been taken to restrict these companies.

It is important to understand, however, that big does not mean monopoly. Even small firms can engage in anticompetitive practices. For example, US v. Topkins involved commerce on the order of hundreds of thousands of dollars, whereas companies like Amazon operate on the order of almost a trillion dollars.

Time will tell how antitrust laws will be applied to the technology companies that currently dominate the markets. Although antitrust law may need slight adaptations in interpretation to account for nuances found only in modern technology contexts, it has largely survived many tests of time, just as the Constitution has.

Big Data And Big Platforms: How data privacy can be used to promote free and fair competition in the marketplace

Are there ways that antitrust law can be used to regulate the build-up of data, so that no single platform can hoard it? We’ve covered the traditional means of enforcement in the Sherman Antitrust Act and the Clayton Antitrust Act, but there are other ways to ensure that data is spread more evenly. Section 230 of the Communications Decency Act gives internet platforms such as Facebook and YouTube immunity from liability for publishing users’ speech and also for censoring users’ speech. Given these immunities, Section 230 has allowed technology companies to grow to massive sizes by enabling billions of people around the world to post their content. When the creation of data is distributed but the collection of data is centralized, platforms have the ability to grow quickly. Clearly there are huge benefits in enabling all these people to use and communicate on these platforms.

However, with so many people on one platform, strong network effects prevent other companies from entering the market. If people only post videos to YouTube, then people only look for videos on YouTube, and advertisers will then only buy ads on YouTube. With such a large network effect, it is hard for other companies to develop similar software and user bases to support a sustainable business. Companies can start to use data as a moat to prevent new businesses from forming. In an increasingly data-driven society, when companies merge we need to consider how consumer data and usage can be used to prevent competition.
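The steepness of this moat can be made concrete with a stylized model. Under Metcalfe’s law, a network’s value grows roughly with the number of possible connections between its users, i.e. with the square of the user count. The numbers below are illustrative, not real platform valuations:

```python
# Stylized network-effect model (Metcalfe's law): value is proportional
# to the number of possible user-to-user links. The user counts here
# are made up for illustration, not real platform figures.

def metcalfe_value(users: int) -> int:
    """Value proportional to the number of possible pairwise links."""
    return users * (users - 1) // 2

incumbent = metcalfe_value(1_000_000)  # an established video platform
entrant = metcalfe_value(100_000)      # a challenger with 10% of the users

# With 10x the users, the incumbent is ~100x more "valuable" in this model.
print(incumbent // entrant)  # → 100
```

Under this toy model, matching 10% of an incumbent’s user base only gets an entrant about 1% of its network value, which is one way to see why user bases (and the data they generate) behave like moats.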

We look at Section 230 as one possible way to make companies more responsible for the content they host, keeping them smaller and thus allowing more competition in the market. To understand Section 230, let’s take a look at how it came about.

In 1995, an anonymous person wrote on a Prodigy message board that the Wolf of Wall Street’s investment firm, Stratton Oakmont, was corrupt and engaged in fraudulent stock trading. Stratton Oakmont sued Prodigy for defamation (Stratton Oakmont, Inc. v. Prodigy Services). The lawsuit focused on an issue we deal with today every time YouTube, Instagram, or Twitter takes down content: should companies be held accountable for what their users post?

Prodigy and the Wolf of Wall Street each had a convincing argument. The Wolf of Wall Street argued that Prodigy was a publisher of the posts, no different from a book publisher, which can be held liable for defamatory content it publishes. At common law, a person who publishes a defamatory statement made by another bears the same liability for the statement as if they had created it themselves. The Wolf’s argument was clear: Prodigy was a publisher that published libelous content and should be held accountable.

Prodigy argued the opposite. A publisher, Prodigy said, is a service that exerts editorial control over content; Prodigy claimed it was merely a platform that transmits what people post, like a telephone company, and should not be held liable for what other people say. If someone says something defamatory over the telephone, you don’t sue the telephone company (Cubby, Inc. v. CompuServe Inc.).

The Wolf of Wall Street fired back: Prodigy’s message board was not a mere platform, because Prodigy did set rules about what users were allowed to post. Prodigy’s message board had posting guidelines for users and screening software that automatically removed posts with offensive language. These are examples of editorial control.

The court held that these guidelines constituted “editorial control” and opened Prodigy up to greater liability. Ultimately the Wolf of Wall Street won and the post was removed. The difference was that Prodigy engaged in content moderation by screening posts, and should thus be held liable for defamatory posts.

The Solution: Section 230 of the Communications Decency Act

Back then, the internet was still young, and everyone realized that for it to grow there needed to be more certainty around who was liable for content posted online. A lawsuit every time someone posted something was not sustainable. Regulators wanted to shield internet companies from such lawsuits and provide legal certainty to encourage investment and innovation.

Representatives Cox and Wyden introduced the bill that became Section 230 of the Communications Decency Act, which states that “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” This sentence is often referred to as the 26 words that created the internet. By immunizing online services from lawsuits over material that users upload, regulators hoped to encourage companies to feel free to adopt basic conduct codes and delete material they considered “obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected,” as long as they acted “in good faith.” One caveat is that federal criminal law has a special carveout and is not covered by the immunity (which is ultimately how Backpage.com was brought down, under federal sex-trafficking laws).

Companies now have a lot of leeway in how to interpret “otherwise objectionable” which allows them to take down or moderate all sorts of content. Examples include the Plandemic video on YouTube, recently Trump’s tweets on Twitter, and basically anything related to a nipple on Instagram.

While I don’t agree with all of the censorship these companies have engaged in (the Plandemic video should never have been taken down), I do agree that these companies have the right to host whichever content they deem fit.

Social media companies depend on users, which they need to balance with advertisers. If there is grotesque content on a platform, people will be driven away, and then advertisers are driven away. Ultimately, social media companies’ only responsibility is to maximize profits for shareholders, and they have autonomy in how they define that. It is like a bar: if someone goes around yelling at patrons, the owner will throw them out, not necessarily because the owner disagrees with what the person is saying, but because if it continues, everyone will leave.

PragerU, a conservative online “university,” sued YouTube (Prager University v. Google) when its videos “Why Isn’t Communism as Hated as Nazism?” and “Are 1 in 5 Women Raped at College?” were flagged (not too different from what Twitter did to @realDonaldTrump). PragerU argued that YouTube’s flagging of its videos was “illegal censorship” under the First Amendment. Unsurprisingly, PragerU lost: the First Amendment applies to government censorship of speech, not to private companies and their users.

What to do now? More regulation? Antitrust?

Section 230 did a good job for Web 1.0 and 2.0. As we enter Web 3.0 though we need to rethink what is the best way forward to prevent the build-up of data that prevents fair competition.

More regulation:

Create a governing body that controls what can and can’t be taken down. Who gets to be on this body is the hard part. Long term, this solution would not work because it would suffer from the same problems as redistricting and gerrymandering: each party would try to co-opt the board, and we’d be in an even worse place than we started.

Distributed Technologies:

All of the social networks today are run by a single company, which then decides the rules for everyone. Future social networks (Mastodon, BitChute, Gab, LBRY, Parler) could be fully distributed. The benefit is that, like Bitcoin, nobody is in control and nobody can take the network down. Additionally, these networks can be tied to a crypto token so that content creators are paid for their work. The software is still hard to use, but I’m pretty sure this is the direction we are headed.
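The core idea that makes “nobody can take it down” possible is content addressing, which underlies systems like IPFS and Bitcoin: a post is identified by the hash of its own content, so no central party can silently alter or replace it. Here is a minimal sketch of that idea; real protocols add signatures, replication, and peer discovery on top:

```python
# Minimal sketch of content addressing, the idea underlying many
# distributed networks: a post's ID is the hash of its content, so any
# modification produces a different ID and is immediately detectable.
# Illustrative only -- not any particular protocol's real format.

import hashlib
import json

def content_id(post: dict) -> str:
    """Derive a stable ID from the post's content itself."""
    canonical = json.dumps(post, sort_keys=True).encode()
    return hashlib.sha256(canonical).hexdigest()

post = {"author": "alice", "body": "My video essay", "ts": 1600000000}
cid = content_id(post)

# Anyone holding a copy can verify it matches the ID they requested;
# a tampered copy fails the check.
tampered = {**post, "body": "Edited without consent"}
assert content_id(post) == cid
assert content_id(tampered) != cid
```

Because every peer can re-derive the ID from the content, removal requires convincing every node holding a copy to delete it, which is why these designs resist unilateral takedowns.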

Antitrust:

We need to rethink antitrust laws. Instead of just looking at price when determining consumer welfare, we should also consider data. Current antitrust law revolves around price and price fixing, which does not really apply to social networks. Instead we need to start thinking about “information antitrust” or “speech antitrust,” where one network is able to restrain “speech” instead of “trade.” The Clayton Act dates to 1914; it is time we update antitrust law. This would break up companies such that if YouTube took down your video, you could still post it somewhere else. There could be a YouTube for Democrats, a YouTube for Republicans, and a YouTube for Libertarians. Each flavor of YouTube would still decide what is best for its users, and users would still have options to make their voices heard.

Interoperability:

Regulation might not be needed if systems are interoperable. In some cases it might be beneficial for companies to share data or sell data which would allow newer companies to get started more easily. If data were available from multiple companies, there would be a marketplace of data from which new companies could purchase it in the same way that they purchase other products when starting out. Ideally, there could be a data format that would allow for easy interoperability between systems.
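To make the interoperability idea concrete, here is a sketch of what a shared interchange format could look like. All the field names below are hypothetical (they are not any real platform’s export schema); the point is only that each platform maps its proprietary export into one portable record shape, so a new entrant can import users’ data from any of them:

```python
# Sketch of a common interchange format for social posts. Field names
# for "Platform A" and "Platform B" are invented for illustration.

from dataclasses import dataclass

@dataclass
class PortablePost:
    author: str
    text: str
    posted_at: int  # unix timestamp

def from_platform_a(row: dict) -> PortablePost:
    # Platform A's (hypothetical) export uses 'user' / 'content' / 'time'
    return PortablePost(row["user"], row["content"], row["time"])

def from_platform_b(row: dict) -> PortablePost:
    # Platform B's (hypothetical) export nests the author under a handle
    return PortablePost(row["author"]["handle"], row["body"], row["created"])

exports = [
    from_platform_a({"user": "alice", "content": "hello", "time": 1}),
    from_platform_b({"author": {"handle": "bob"}, "body": "hi", "created": 2}),
]
# A new service can now ingest both platforms' data uniformly.
assert all(isinstance(p, PortablePost) for p in exports)
```

Real-world precedents for this pattern exist, such as the ActivityPub protocol that federated networks like Mastodon use to exchange posts across independently run servers.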
