Followups: The future of free speech (Parts I and II)

Anthony Bardaro · Published in Annotote TLDR · 33 min read · May 23, 2019

--

The following highlights provided by Annotote: Don’t waste time and attention, get straight to the point. All signal. No noise.

Wither the Consumer: Content consumption is obsolete and the problem is a matter of supply and demand

by Annotote TLDR 2016.10.26

The Four Winds of Modern Media: Your new playbook for a new era

by Adventures in Consumer Technology 2017.09.04

The content police: Cyberabuse, bullying, fake news, and reductive solutions

by Annotote TLDR 2017.12.07

Social media’s problems are easy to identify, but the solutions are hard to implement.

Mastodon and the pursuit of a utopian online community: There is no panacea?

by Annotote TLDR 2018.04.20

See also: Mastodon Followups

The future of free speech (Part I): The special internet standard, independent arbitration, and the legal pillar of social media regulation

by Adventures in Consumer Technology 2019.04.15

The future of free speech (Part II): Trustless Verification, Information theory, and the social pillar of content moderation

by Adventures in Consumer Technology 2020.08.17

The Third Pillar: Big Tech Living Wills

by Anthony Bardaro (Stratechery Forum and Twitter) 2022.11.07

The argument for Trustless Verification specifically (and the Twin Pillars more broadly) is:

1. The introduction of extrinsic (dis)incentives

2. The shifting of content moderation decisions to democratic principles and due process that can cope with internet scale/velocity

To supplement this, I think there’s a way for the government to regulate the means of content moderation without regulating free speech. For example:

The US government should not regulate free speech, but it could still ensure that scaled user-generated content (UGC) platforms meet minimum standards, including:

1️⃣ democratic governance (i.e. voting rights and power distributed among shareholders/board/management); or

2️⃣ checks-and-balances (e.g. Facebook’s Independent Oversight Board, or IOB); or

3️⃣ documented moderation decisions (e.g. formal meeting minutes that qualify the factors and processes involved in deliberating content interventions)

Platforms don’t have to score 100% on all three criteria, but their scores across all three criteria need to sum to 100%. For example, if a founder/CEO is the board chair with supervoting shares, then his/her platform’s governance score would be very low (1️⃣≈0), and the platform would need some combination of content oversight outsourced to a highly independent moderation system empowered with veto authority (2️⃣ between 50–100) and highly detailed, authenticated documentation (3️⃣ between 50–100) in order to pass the regulatory test.
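To make that pass/fail arithmetic concrete, here is a minimal illustrative sketch in Python; the 0–100 scales and the 100-point passing bar come from the example above, while the function name and structure are placeholders of my own:

```python
def passes_regulatory_test(governance: float, oversight: float, documentation: float) -> bool:
    """Illustrative check: a scaled UGC platform passes if its scores across the
    three criteria (each graded 0-100 by a regulator) sum to at least 100."""
    for score in (governance, oversight, documentation):
        if not 0 <= score <= 100:
            raise ValueError("each criterion is scored on a 0-100 scale")
    return governance + oversight + documentation >= 100

# Example from the text: a founder/CEO with supervoting shares chairs the board,
# so governance ≈ 0; the platform compensates with an empowered independent
# oversight body (~60) and detailed, authenticated moderation minutes (~55).
print(passes_regulatory_test(governance=0, oversight=60, documentation=55))   # True
print(passes_regulatory_test(governance=0, oversight=40, documentation=30))   # False
```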

This all fits the paradigms/precedents of 1A and Section 230, and it passes that crucial robustness/antifragility test for ‘could your worst enemy weaponize this regulation to infringe upon your free speech?’ (no he/she could not!).

Finally, this is all inspired by Federal Reserve requirements for systemically important financial institutions (SIFIs) under the Dodd-Frank Act’s Reg QQ (Living Wills aka “Resolution Plans”), specifically “Governance Mechanisms” (section IV) for big banks.

That does not obviate the need for my Twin Pillars — the Independent Arbitration System and Trustless Verification — but all three of these institutions combined represent a pretty robust proposal for improving the digital commons.

Of course, some of the challenges therein would include:

1. what is an objective threshold for delineating between “scaled UGC platforms” that are subject to these standards vs those that are not?

2. what are the standards and definitions that regulators should anchor to when converting qualitative assessments into quantitative scores? (e.g. what is the category prototype for a 6/10 or 60% score on 1️⃣, 2️⃣, or 3️⃣?)

3. etc

Exporting the First Amendment: Judge rules that foreign users of US-based social media platforms have the right to remain anonymous

by Gizmodo 2019.05.23

[The defendant] is an active participant in a former Jehovah’s Witness subreddit as he “believes that it is the only place he has been able to discuss and debate matters related to the Jehovah’s Witnesses freely and openly” and that he chose Reddit because he could post anonymously as “keeping his name and identity private is necessary for him to feel comfortable participating in open discussions…”

Where it gets interesting is [this defendant] is not a U.S. resident — so [the plaintiff] argued free speech protections, and thereby [his] right to anonymity did not apply. However, the court ultimately disagreed with that logic, reasoning the subpoena was issued by a U.S. court, on behalf of a U.S. company (Watchtower), and delivered to another U.S. company (Reddit). The court also stated that the First Amendment “protects the audience as well as the speaker,” and since a good portion of Redditors reside in the U.S., then free speech protections do apply.

That’s huge. The internet is borderless and plenty of U.S.-based sites like Reddit, Yelp, Google, and Facebook have users scattered across the globe… That said, this case wasn’t a total victory for free speech and anonymity advocates. The court’s final decision still ordered Reddit to deliver [the defendant’s] identifying information to [the plaintiff’s] lawyers so they could try to shore up its copyright claim [although the] attorneys were forbidden from revealing his identity to their client[.]

Re: “The borderless, supranational nature of the web would also present a challenge. A principally geographic arbitration solution is hard to enact amidst issues of political jurisdiction and VPN loopholes. Let’s call this the ‘geopolitical problem’.”

7 Principles for Lawmakers Reforming Section 230’s Liabilities and Protections (UGC Safe Harbor)

by Truth on the Market 2019.07.11

This morning a diverse group of more than 75 academics, scholars, and civil society organizations… published a set of seven “Principles for Lawmakers” on liability for user-generated content online, aimed at guiding discussions around potential amendments to Section 230 of the Communications Decency Act of 1996 [CDA]...

Oberdorf v. Amazon appeals opinion offers a good approach to reining in the worst abuses of Section 230

by Truth on the Market 2019.07.17

[T]he Third Circuit Court of Appeals held in Oberdorf v. Amazon that, under Pennsylvania products liability law, Amazon could be found liable for a third party vendor’s sale of a defective product via Amazon Marketplace [within the] context of Section 230 of the Communications Decency Act, which is broadly understood as immunizing platforms against liability for harmful conduct posted to their platforms by third parties… This immunity has long been a bedrock principle of Internet law; it has also long been controversial; and those controversies are very much at the fore of discussion today.

The response to the opinion has been mixed [e.g.] “are we at the end of online marketplaces?” [but] the opinion does elucidate a particular and problematic feature of section 230: that it can be used as a liability shield for harmful conduct. The judges in Oberdorf seem ill-inclined to extend Section 230’s protections to a platform that can easily be used by bad actors as a liability shield… I argue below that Section 230 immunity be proportional to platforms’ ability to reasonably identify speakers using their platforms to engage in harmful speech or conduct…

[I]f there are readily available ways to establish some form of identity for users — for instance, by email addresses on widely-used platforms, social media accounts, logs of IP addresses — and there is reason to expect that users of the platform could be subject to suit — for instance, because they’re engaged in commercial activities or the purpose of the platform is to provide a forum for speech that is likely to be legally actionable — then the platform needs to be reasonably able to provide reasonable information about speakers subject to legal action in order to avail itself of any Section 230 defense…

The proposal offered here is not that platforms be able to identify their speaker — it’s better described as that they not deliberately act as a liability shield. [It also] make[s] sure that anonymous speech is used for its good purposes while working to prevent its use for its lesser purposes.

N.B. This was subsequently reinforced by a California appeals court (from Bolger v Amazon):

Amazon can be held liable for defective products sold on its Marketplace in California, an appeals court ruled... The California Fourth District Court of Appeals reversed a 2019 trial court ruling…

A lower court ruled in 2019 that Amazon was not covered under product liability laws. The trial court also ruled that the Communications Decency Act would not have shielded the company from Bolger’s claims under California state law. Bolger appealed that ruling, arguing that in California, strict liability doesn’t depend just on whether a sale was made…

“Whatever term we use to describe Amazon’s role, be it ‘retailer,’ ‘distributor,’ or merely ‘facilitator,’ it was pivotal in bringing the product here to the consumer,” the court wrote. Amazon should be liable if a product on its website is defective, the court added.

A technological approach to free speech: Build protocols, not platforms

by Mike Masnick (TechDirt via the Knight First Amendment Institute at Columbia University) 2019.08.21

[M]ost [populist] solutions are not just unworkable; many of them will make the initial problems worse or will have other effects that are equally pernicious.

This article proposes an entirely different approach — one that might seem counterintuitive but might actually provide for a workable plan that enables more free speech, while minimizing the impact of trolling, hateful speech, and large-scale disinformation efforts. As a bonus, it also might help the users of these platforms regain control of their privacy. And to top it all off, it could even provide an entirely new revenue stream for these platforms.

That approach: build protocols, not platforms…

Email used SMTP (Simple Mail Transfer Protocol). Chat was done over IRC (Internet Relay Chat). Usenet served as a distributed discussion system using NNTP (Network News Transfer Protocol). The World Wide Web itself was its own protocol: HyperText Transfer Protocol, or HTTP… the email space [has] open standards such as SMTP, POP3 and IMAP…

[W]hile there would be specific protocols for the various types of platforms we see today, there would then be many competing interface implementations of that protocol. The competition would come from those implementations…

[Currently,] both Type I (“false positive”) and Type II (“false negative”) errors are not only common; they are inevitable… A protocol-based system, however, moves much of the decision making away from the center and gives it to the ends of the network. Rather than relying on a single centralized platform, with all of the internal biases and incentives that that entails, anyone would be able to create their own set of rules — including which content they do not want to see and which content they would like to see promoted…

The marketplace of the many different filters and interfaces (and the ability to customize your own) would enable much greater granularity. Conspiracy theorists and trolls would have more trouble being found on the “mainstream” filters but would not be completely silenced from those who wish to hear them. Rather than today’s centralized system, where all voices are more or less equal (or completely banned), in a protocol-focused world the extremist views would simply be less likely to find mainstream appeal…

Rather than relying on a “marketplace of ideas” within an individual platform — which can be hijacked by those with malicious intent — protocols could lead to a marketplace of ideals, where competition occurs to provide better services that minimize the impact of those with malicious intent, without cutting off their ability to speak entirely.
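N.B. As a thought experiment (my own, not Masnick’s), here is a minimal sketch of what client-side, user-chosen moderation filters over a shared protocol feed could look like; the data shapes, labels, and filter names are all assumptions:

```python
from dataclasses import dataclass
from typing import Callable, Iterable, List, Set

@dataclass
class Post:
    author: str
    text: str
    labels: Set[str]  # e.g. labels attached by third-party labeling services

# A "filter" is just a predicate the *client* applies; the protocol carries
# everything, and each competing interface decides what to show its users.
Filter = Callable[[Post], bool]

def no_label(label: str) -> Filter:
    return lambda post: label not in post.labels

def blocklist(authors: Set[str]) -> Filter:
    return lambda post: post.author not in authors

def apply_filters(feed: Iterable[Post], filters: List[Filter]) -> List[Post]:
    return [p for p in feed if all(f(p) for f in filters)]

# Two users read the same underlying protocol feed: one subscribes to a strict
# "mainstream" filter set, the other to none.
feed = [
    Post("alice", "interesting article", {"news"}),
    Post("troll42", "spam spam spam", {"spam"}),
]
print(apply_filters(feed, [no_label("spam"), blocklist({"known_bad_actor"})]))
print(apply_filters(feed, []))  # unfiltered view of the same feed
```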

US House of Representatives overwhelmingly approves new copyright bill

by The Verge 2019.10.22

[T]he House of Representatives overwhelmingly voted to approve a measure that would shake up the Copyright Office if it were made into law, creating a small claims court where online content creators can go after their infringers.

The Copyright Alternative in Small-Claims Enforcement Act, or the CASE Act for short [has] the goal of giving graphic artists, photographers, and other content creators a more efficient pathway toward receiving damages if their works are infringed. Under current law, all copyright suits must go through the federal courts, a system that is often costly and time-consuming for creators who decide to litigate their cases.

With the CASE Act, Congress is hoping to streamline the process for both parties. If the measure were to become law, the Copyright Office would house a tribunal of “Copyright Claims Officers” who would work with both parties involved in a lawsuit to resolve infringement claims. As outlined in the bill, damages would be capped at $15,000 for each infringed work and top out at $30,000 total…

The internet has made it easy for potential infringers to copy and paste creative works from artists, especially those whose businesses exist primarily online. However, internet advocacy and civil rights groups like the Electronic Frontier Foundation (EFF) and the American Civil Liberties Union have warned that a system like the one proposed by the CASE Act could cost the average internet user thousands for simply sharing a meme or lead to encroachments on their First Amendment rights.

Masnick’s Impossibility Theorem

by Mike Masnick (Techdirt) 2020.11.20

Masnick’s Impossibility Theorem [is] a sort of play on Arrow’s Impossibility Theorem. Content moderation at scale is impossible to do well. More specifically, it will always end up frustrating very large segments of the population and will always fail to accurately represent the “proper” level of moderation of anyone…

First, the most obvious one: any moderation is likely to end up pissing off those who are moderated…

Second, moderation is, inherently, a subjective practice… many of the judgment calls will end up in gray areas where lots of people’s opinions may differ greatly… you need to set rules, but rules leave little to no room for understanding context and applying it appropriately…

Third, people truly underestimate the impact that “scale” has on this equation. [If] there are 1 million decisions made every day, even with 99.9% “accuracy”… you’re still going to “miss” 1,000 calls. But 1 million is nothing [compared to social media scale]… And, even if you could achieve such high “accuracy”… a journalist [can easily] find a bunch of those mistakes — and point them out [giving any 1 mistake disproportionate attention compared to the 999 unsung successes.]
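N.B. A quick back-of-the-envelope calculation (mine, extrapolating Masnick’s example to larger, assumed volumes) shows how the absolute number of mistakes scales even as “accuracy” stays near-perfect:

```python
# Even near-perfect accuracy leaves a large absolute number of wrong calls
# once moderation decisions number in the millions or billions per day.
for decisions_per_day in (1_000_000, 100_000_000, 1_000_000_000):
    for accuracy in (0.999, 0.9999):
        misses = decisions_per_day * (1 - accuracy)
        print(f"{decisions_per_day:>13,} decisions at {accuracy:.2%} accuracy "
              f"-> ~{misses:,.0f} wrong calls per day")
```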

Building a More Honest Internet: Lessons for today’s social media from yesterday’s public radio

by Ethan Zuckerman (Columbia Journalism Review/MIT Media Lab) 2019.12.02

What would social media look like if it served the public interest? […]

In the US [around 1912], radio began as a free-market free-for-all… Then came 1926, and the launch of the National Broadcasting Company [NBC] by the Radio Corporation of America [RCA], followed in 1927 by the Columbia Broadcasting System [CBS]. These entities, each of which comprised a network of interlinked stations playing local and national content supported by local and national advertising, became dominant players. Noncommercial broadcasters were effectively squeezed out.

In the Soviet Union, meanwhile, ideology prevented the development of commercial broadcasting, and state-controlled radio quickly became widespread. Leaders of [Russia’s] new socialist republics recognized the power of broadcasting as [propaganda]…

The United Kingdom went a different route, eschewing the extremes of unfettered commercialism and central government control. In the UK’s model, a single public entity, the British Broadcasting Company, was licensed to broadcast content for the nation [funded by royalty revenue paid by hardware manufacturers as part of every radio set sold to consumers. The BBC] invented public service media. In 1926, when a national strike shut down the UK’s newspapers, the BBC, anxious to be seen as independent, earned credibility by giving airtime to both government and opposition leaders…

A new movement toward public service digital media may be what we need to counter the excesses and failures of today’s internet…

As in radio, the current model of the internet is not the inevitable one. Globally, we’ve seen at least two other possibilities emerge. One is in China, where the unfettered capitalism of the US internet is blended with tight state oversight and control. The result is utterly unlike sterile Soviet radio — conversations on WeChat or Weibo are political, lively, and passionate — but those have state-backed censorship and surveillance baked in. (Russia’s internet is a state-controlled capitalist system as well; platforms like LiveJournal and VKontakte are now owned by Putin-aligned oligarchs.)

The second alternative model is public service media. Wikipedia [and] Wikimedia’s model is made possible by millions of hours of donated labor provided by contributors, editors, and administrators… Wikipedia’s method of debating its way to consensus, allowing those with different perspectives to add and delete each other’s text until a “neutral point of view” is achieved, has proved surprisingly durable… one of the best definitions we currently have of consensus reality…

A public service Web invites us to imagine services that don’t exist now, because they are not commercially viable, but perhaps should exist for our benefit, for the benefit of citizens in a democracy. We’ve seen a wave of innovation around tools that entertain us and capture our attention for resale to advertisers, but much less innovation around tools that educate us and challenge us to broaden our [views]. Digital public service media would fill a black hole of misinformation with educational material and legitimate news…

Can we imagine a social network designed [to optimize for something other than engagement]: to encourage the sharing of mutual understanding rather than misinformation? […]

What’s preventing us from building such networks? The obvious criticisms are, one, that these networks wouldn’t be commercially viable, and, two, that they won’t be widely used. The first is almost certainly true, but this is precisely why public service models exist: to counter market failures. The second is more complicated. The [biggest obstacle] to launching new social networks in 2019 [is] Facebook [for which a] mandate of interoperability could help… just as Web browsers allow us to interact with any website through the same architecture, interoperability would mean we could build social media browsers that put existing social networks, and new ones, in the same place.

The trouble with fake news laws and regulations

by Casey Newton (The Interface/The Verge) 2019.12.02

[S]ome countries are trying to make tech platforms legally beholden to police speech according to national laws. One of them is Singapore, where in October a new law went into effect with the stated purpose of fighting “fake news”…

[Singapore’s] government ministers wasted little time in enforcing that law, taking action twice in the past week. And if you had to guess, what type of social media post would spur them into action the fastest? Would it be a post that spread hate speech or promoted violence? Would it be a post that spread harmful misinformation, such as a false election date intended to mislead voters? Or would it be a post that criticized the government? If you guessed #3, then you’ve been paying attention to the arguments that every single critic of this law has made since it was first proposed…

In the United States, the First Amendment may offer some protections to average citizens who want to criticize their government online. Others won’t be as lucky. And as the FOSTA-SESTA debacle showed, even the United States is not immune to terrible consequences from noble-sounding speech regulation. As the debate over Section 230 rages on, that’s something we ought to keep in mind.

Facebook reveals more details about its Independent Oversight Board

by Casey Newton (The Interface/The Verge) 2019.12.18

[A] Supreme Court for content moderation. When it launches next year, the board will hear appeals from people whose posts might have been removed from Facebook in error, as well as making judgments on emerging policy disputes at Facebook’s request. And the big twist is that the board will be independent of Facebook — funded by it, but accountable only to itself…

Facebook set up an irrevocable trust to fund the board, and last week the company said it had agreed to fund it to the tune of $130 million…

The initial board will comprise about 20 members, and grow to 40 over time. Members will serve three-year terms…

Facebook [has developed a] “case management tool” — the hyper-niche software, to be used by perhaps 100 people at a time, that will route cases from Facebook to the board and its staff.

On the effectiveness of fact-checking and the right way to verify/fact check

by FiveThirtyEight 2020.06.03

Political scientists Ethan Porter and Thomas J. Wood conducted an exhaustive battery of surveys on fact-checking, across more than 10,000 participants and 13 studies that covered a range of political, economic and scientific topics. They found that 60 percent of respondents gave accurate answers when presented with a correction, while just 32 percent of respondents who were not given a correction expressed accurate beliefs. That’s pretty solid proof that fact-checking can work.

But Porter and Wood, alongside many other fact-checking researchers, have found that some methods of fact-checking are more effective than others. Broadly speaking, the most effective fact checks have this in common:

• They are from highly credible sources (with extra credit for those that are also surprising, like Republicans contradicting other Republicans or Democrats contradicting other Democrats).

• They offer a new frame for thinking about the issue (that is, they don’t simply dismiss a claim as “wrong” or “unsubstantiated”).

• They don’t directly challenge one’s worldview and identity.

• They happen early, before a false narrative gains traction.

So despite a few studies suggesting that fact checks may make misinformation more prevalent (most prominently a widely-cited paper from political scientists Brendan Nyhan and Jason Reifler in 2010, which popularized the concept of the “backfire effect”), the overwhelming majority of studies have found that fact checks do work — or at the very least, do no harm. Still, some pieces of misinformation are harder to fight than others. And this episode involving Trump has several qualities that may make Twitter’s “get the facts” approach not exactly effective…

[G]iven Trump’s notoriety, his misstatements may just be harder to combat. In one of Porter and Wood’s experiments, they took an op-ed by Trump and issued a correction on two versions of the piece: one (correctly) attributed to Trump and one attributed to Senate Majority Leader Mitch McConnell. The authors found that the fact-check of McConnell moved significantly more respondents toward the accurate position than did the fact-check of Trump.

A Trump-supporting reader might take a closer look if told that Republican state officials in Idaho and Washington had complete confidence in the security of voting by mail, or that an exhaustive 17-month law enforcement inquiry into voter fraud in Florida, a state governed by fellow Republican Ron DeSantis, found no evidence of wrongdoing. This combination of surprise and credibility, in theory, would activate a closer look — the kind of attention required for mental updating…

Finally, even an effective fact check might not make the difference that policymakers are hoping for in political attitudes. While it’s possible for fact checks to shift beliefs, attitudes are much harder to change and much more resilient to fact checks… Fact-checking can help with updating and correcting prior knowledge, but breaking the hyper-partisanship that nurtures misinformation in the first place will require a whole lot more work…

Policing the Internet: As Bad of an Idea Today as It Was in 1996

by Chris Cox (Real Clear Politics) 2020.06.25

OpEd by US Congressman Chris Cox, who originally authored Section 230’s “Cox-Wyden Amendment”:

Meanwhile Section 230, originally introduced in the House as a freestanding bill, H.R. 1978, in June 1995, stands on its own, now as then. Its premise of imposing liability on criminals and tort-feasors for their own wrongful conduct, rather than shifting that liability to third parties, operates independently of (and indeed, in opposition to) Sen. Exon’s approach that would directly interfere with the essential functioning of the Internet.

It is also useful to imagine a world without Section 230. In this alternative world, websites and Internet platforms of all kinds would face enormous potential liability for hosting content created by others. They would have a powerful incentive to limit that exposure, which they could do in one of two ways. They could strictly limit user-generated content, or even eliminate it altogether; or they could adopt the “anything goes” model through which CompuServe originally escaped liability before Section 230 existed.

White paper: A study about the meaning and historical application of Section 230

by The Internet Association (hattip Daphne Keller/@daphnehk) 2020.09.02

Internet Association (IA) analyzed more than 500 decisions from the past two decades involving Section 230 in order to better understand, in practice, the variety of parties using the law, how the law is being used, and how courts apply it.

The importance of Section 230 is best demonstrated by the lesser-known cases that escape the headlines. These decisions show the law continues to perform as Congress intended, quietly protecting soccer parents from defamation claims, discussion boards for nurses and police from nuisance suits, and local newspapers from liability for comment trolls.

We found that judges are thoughtfully applying the law and — far from acting as a “blanket immunity” — Section 230 only served as the primary basis for a ruling in 42 percent of the decisions we reviewed. When courts are concerned platforms may have played a role in creating content, they require discovery before deciding whether to grant 230 immunity. Our review also revealed that in many decisions, the underlying claims where defendants asserted a Section 230 defense were dismissed for lacking merit.

Interview: Yochai Benkler discusses misinformation and social media

by Mathew Ingram / Columbia Journalism Review (CJR Galley) 2020.10.05

[W]e collected 55,000 stories published online between March and the end of August that mentioned… mail-in voter fraud… and plotted them on a timeline to identify when attention to the agenda of mail-in vote fraud increased. Here, we found that the major spikes in attention across open web stories, Facebook, and Twitter, were related to each other, and that what drove all the major attention peaks but one, and most of the smaller attention peaks, was some combination of Trump, his campaign or White House staff, the Republican National Committee (RNC), or other top Republican politicians. Social media played a secondary role, further circulating and amplifying the mass media performances of political and media elites, but the primary actors were political and media elites, using TV interviews, press briefings, and Twitter, but Twitter more as a way of issuing press releases than as a genuine social media campaign that depends on social media attention to get broad attention and exposure…

[T]here are strong feedback loops between social media and more traditional outlets. The main reason to focus on mainstream press and specifically TV is that survey evidence suggests that that’s the media where the most persuadable parts of the public are, in terms of news consumption. Most of the scientific evidence suggests that attention to “fake news” is highly concentrated in a small part of the population, mostly older, Republican, and highly persuaded already of the position they consume. When we look at the survey evidence, about 18% say they get their news primarily from social media, while 30% get it from network TV (ABC, CBS, NBC) or local TV… they are the only significant part of the population that needs to be educated [and] may in fact be open to hearing the facts, rather than the propaganda.

White paper: The Antecedents of Bullshitting

by John Petrocelli/Wake Forest University (Journal of Experimental Social Psychology via ResearchGate) 2018

Although it appears to be a common social behavior, very little is known about the nature of bullshitting (i.e., communicating with little to no regard for evidence, established knowledge, or truth; Frankfurt, 1986) and the social conditions under which it is most likely to occur. The current investigation examines specific antecedents of bullshitting, particularly examining topic knowledge, evidence for or against an obligation to provide an opinion hypothesis, and an ease of passing bullshit hypothesis.

Experiment 1 suggests that bullshitting is augmented only when both the social expectations to have an opinion, and the cues to show concern for evidence, are weak.

Experiment 2 demonstrates that bullshitting can also be attenuated under conditions of social accountability.

Results are discussed in light of social perception, attitude change, and new directions aimed at reducing the unwanted effects of bullshitting.

White paper: Research suggests that YouTube’s algorithm doesn’t contribute to radicalization

by Penn State University Department of Political Science

via Wired:

[This study] challenges the popular school of thought that YouTube’s recommendation algorithm is the central factor responsible for radicalizing users and pushing them into a far-right rabbit hole. The authors say that thesis largely grew out of media reports, and hasn’t been rigorously analyzed. The best prior studies, they say, haven’t been able to prove that YouTube’s algorithm has any noticeable effect. “We think this theory is incomplete, and potentially misleading… And we think that it has rapidly gained a place in the center of the study of media and politics on YouTube because it implies an obvious policy solution — one which is flattering to the journalists and academics studying the phenomenon…”

Other researchers in the field agree, including those whose work has been cited by the press as evidence of the power of YouTube’s recommendation system. Manoel Ribeiro… says that his work was misinterpreted to fit the algorithmic radicalization narrative by so many outlets that he lost count.

Instead, the paper suggests that radicalization on YouTube stems from the same factors that persuade people to change their minds in real life — injecting new information — but at scale. The authors say the quantity and popularity of alternative (mostly right-wing) political media on YouTube is driven by both supply and demand. The supply has grown because YouTube appeals to right-wing content creators, with its low barrier to entry, easy way to make money, and reliance on video, which is easier to create and more impactful than text[:] “We believe that the novel and disturbing fact of people consuming white nationalist video media was not caused by the supply of this media ‘radicalizing’ an otherwise moderate audience… Rather, the audience already existed, but they were constrained” by limited supply…

via Casey Newton’s Platformer:

In total, researchers found that news represents just 11 percent of total videos viewed on YouTube. But within that category, consumption of far-right news tripled in three years, from 0.5 percent to 1.5 percent of all views on the platform. “These results indicate that the phenomenon of right-wing radicalization on YouTube, while small in relative terms, nonetheless affects more than 1.85 million Americans monthly, averaged over the four year period, and growing rapidly…”

You might look at that data and assume that YouTube is the cause of that growth [in radicalization]. And yet the researchers struggled to find a compelling link in the data. One, most “sessions” on YouTube — 80 percent — are just one video long. Two, for those whose sessions lasted longer, YouTube was less likely to recommend news content the more videos a person watched. “Longer sessions are increasingly devoted to non-news content… We see no evidence that far-right content is more likely to be consumed either toward the end of session or in longer sessions.”

Punitive laws are failing to curb misinformation in Africa

by Nieman Lab 2021.06.29

In a recent study, we examined the changes made to laws and regulations relating to the publication of “false information” in 11 sub-Saharan countries between 2016 and 2020. We also looked at how they correlate with misinformation, to understand the role they may play in reducing harm caused by misinformation. We found that while these laws have a chilling effect on political and media debate, they do not reduce misinformation harm. This matters as the laws curtail public debate, yet fail to curb the harmful effects of misinformation…

[I]n the 11 countries studied, the number of laws against “false information” almost doubled — from 17 to 31… The problem we identified is that these laws restrict freedom of speech. And they don’t reduce the actual — or potential harm — that misinformation causes. The punitive approach does not appear to work. By contrast, an approach favoring better access to accurate information and correction of false information may do so.

Explainer: Zero Trust networking and computing

by Muji (HHHHypergrowth) 2021.06.09

A Coasean analysis of offensive speech

by Jamie Whyte (Truth on the Market/TOTM) 2021.09.13

Coasean logic [often] supports the woke censors! But, again, it’s not that simple — for two reasons.

The first is that, although those who are offended may be harmed by the offending speech, they needn’t necessarily be. Physical pain is usually harmful, but not when experienced by a sexual masochist (in the right circumstances, of course). Similarly, many people [called “offense masochists”] take masochistic pleasure in being offended… [H]ow could a legislator or judge know? For all they know, most of those offended by Jordan Peterson are offense masochists and the offense he causes is a positive externality.

The second reason Coasean logic doesn’t support the would-be censors is that social media platforms — the venues of offensive speech that they seek to regulate — are privately owned… That’s why it is illegal to express obscenities about Jesus on a billboard erected across the road from a church but not at a meeting of the Angry Atheists Society. The laws that prohibit offensive speech in such circumstances — laws against public nuisance, harassment, public indecency, etc. — are generally efficient. The cost they impose on the offenders is less than the benefits to the offended. But they are unnecessary when the giving and taking of offense occur within a privately owned place…

The same goes for the content-moderation policies of social media platforms. They are just another product feature offered by a profit-seeking firm. If they repel more customers than they attract (or, more accurately, if they repel more advertising revenue than they attract), they would be inefficient. But then, of course, the company would not adopt them.

On the limits of cryptoeconomics, neoliberalism, and financialization in both politics and governance

by Vitalik Buterin (hattip Nathan Schneider) 2021.09.26

[U]se this framework to understand the pitfalls of “finance”. Finance can be viewed as a set of patterns that naturally emerge in many kinds of systems that do not attempt to prevent collusion. Any system which claims to be non-finance, but does not actually make an effort to prevent collusion, will eventually acquire the characteristics of finance, if not something worse. To see why this is the case, compare two point systems we are all familiar with: money, and Twitter likes. Both kinds of points are valuable for extrinsic reasons, both have inevitably become status symbols, and both are number games where people spend a lot of time optimizing to try to get a higher score. And yet, they behave very differently. So what’s the fundamental difference between the two?

The answer is simple: it’s the lack of an efficient market to enable agreements like “I like your tweet if you like mine”, or “I like your tweet if you pay me in some other currency”. If such a market existed and was easy to use, Twitter would collapse completely (something like hyperinflation would happen, with the likely outcome that everyone would run automated bots that like every tweet to claim rewards), and even the likes-for-money markets that exist illicitly today are a big problem for Twitter. With money, however, “I send X to you if you send Y to me” is not an attack vector, it’s just a boring old currency exchange transaction. A Twitter clone that does not prevent like-for-like markets would “hyperinflate” into everyone liking everything, and if that Twitter clone tried to stop the hyperinflation by limiting the number of likes each user can make, the likes would behave like a currency, and the end result would behave the same as if Twitter just added a tipping feature.

So what’s the problem with finance? Well, if finance is optimized and structured collusion, then we can look for places where finance causes problems by using our existing economic tools to understand which mechanisms break if you introduce collusion! Unfortunately, governance by voting is a central example of this category; I’ve covered why in the “moving beyond coin voting governance” post and many other occasions. Even worse, cooperative game theory suggests that there might be no possible way to make a fully collusion-resistant governance mechanism.

And so we get the fundamental conundrum: the cypherpunk spirit is fundamentally about making maximally immutable systems that work with as little information as possible about who is participating (“on the internet, nobody knows you’re a dog”), but making new forms of governance requires the system to have richer information about its participants and ability to dynamically respond to attacks in order to remain stable in the face of actors with unforeseen incentives. Failure to do this means that everything looks like finance, which means, well…. perennial over-representation of concentrated interests, and all the problems that come as a result…

[T]he key difference between financialized Kleros courts and non-financialized regular courts is that financialized Kleros courts are, well… financialized. They make no effort to explicitly prevent collusion. Non-financialized courts, on the other hand, do prevent collusion in two key ways:

• Bribing a judge to vote in a particular way is explicitly illegal

• The judge position itself is non-fungible. It gets awarded to specific carefully-selected individuals, and they cannot simply go and sell or reallocate their entire judging rights and salary to someone else.

The only reason why political and legal systems work is that a lot of hard thinking and work has gone on behind the scenes to insulate the decision-makers from extrinsic incentives, and punish them explicitly if they are discovered to be accepting incentives from the outside. The lack of extrinsic motivation allows the intrinsic motivation to shine through. Furthermore, the lack of transferability allows governance power to be given to specific actors whose intrinsic motivations we trust, avoiding governance power always flowing to “the highest bidder”. But in the case of Kleros, the lack of hostile extrinsic motivation cannot be guaranteed, and transferability is unavoidable, and so overpoweringly strong in-mechanism extrinsic motivation (the conformity incentive) was the best solution they could find to deal with the problem.

And of course, the “final backstop” that Kleros relies on, the right of users to fork away, itself depends on social coordination to take place — a messy and difficult institution, often derided by cryptoeconomic purists as “proof of social media”, that works precisely because public discussion has lots of informal collusion detection and prevention all over the place…

[P]rediction markets to scale up content moderation[:] Instead of doing content moderation by running a low-quality AI algorithm on all content, with lots of false positives, there could be an open mini prediction market on each post, and if the volume got high enough a high-quality committee could step in and adjudicate, and the prediction market participants would be penalized or rewarded based on whether or not they had correctly predicted the outcome. In the meantime, posts with prediction market scores predicting that the post would be removed would not be shown to users who did not explicitly opt in to participate in the prediction game…

[Nathan Schneider:] “pairing cryptoeconomics with political systems can help overcome the limitations that bedevil cryptoeconomic governance alone… If cryptoeconomics needs a political layer, and is no longer self-sufficient, what good is cryptoeconomics? One answer might be that cryptoeconomics can be the basis for securing more democratic and values-centered governance, where incentives can reduce reliance on military or police power. Through mature designs that integrate with less-economic purposes, cryptoeconomics might transcend its initial limitations. Politics needs cryptoeconomics, too… by integrating cryptoeconomics with democracy, both legacies seem poised to benefit.”
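N.B. Going back to the prediction-market moderation idea two excerpts above, here is a minimal, purely illustrative sketch of that flow; every name, threshold, and payout rule below is an assumption of mine, not part of Vitalik’s proposal:

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List, Tuple

@dataclass
class PostMarket:
    post_id: str
    # each bet: (participant, predicts_removal, stake)
    bets: List[Tuple[str, bool, float]] = field(default_factory=list)

    def predicted_removal_probability(self) -> float:
        total = sum(stake for _, _, stake in self.bets)
        removal = sum(stake for _, predicts, stake in self.bets if predicts)
        return removal / total if total else 0.0

    def volume(self) -> float:
        return sum(stake for _, _, stake in self.bets)

def resolve(market: PostMarket, volume_threshold: float,
            committee_removes_post: Callable[[str], bool]) -> Dict[str, float]:
    """If volume is high enough, a high-quality committee adjudicates, and bettors
    are paid out (or penalized) according to whether they predicted its ruling."""
    if market.volume() < volume_threshold:
        return {}  # not escalated; the market score just informs default visibility
    ruling = committee_removes_post(market.post_id)
    return {participant: (stake if predicts == ruling else -stake)
            for participant, predicts, stake in market.bets}

market = PostMarket("post-123", bets=[("a", True, 5.0), ("b", False, 2.0), ("c", True, 4.0)])
if market.predicted_removal_probability() > 0.5:
    pass  # hide the post from users who have not opted in to the prediction game
print(resolve(market, volume_threshold=10.0, committee_removes_post=lambda pid: True))
```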

Research: crowdsourced fact-checking can improve social media moderation by channeling partisan motivations to be productive if opposing sides police each other

by David Rand (Financial Times) 2022.10.16

How can social media companies thread the needle of engaging in meaningful moderation while escaping accusations of partisan bias and censorship? One potential solution that platforms have begun to test is to democratise moderation through crowdsourced fact-checking. Instead of relying solely on professional fact-checkers and artificial intelligence algorithms, they are turning to their users to help pick up the slack.

This strategy has many upsides. First, using laypeople to fact-check content is scalable in a way that professional fact-checking — which relies on a small group of highly trained experts — is not. Second, it is cost-effective, especially if users are willing to flag inaccurate content without getting paid. Finally, because moderation is done by members of the community, companies can avoid accusations of top-down bias in their moderation decisions…

Our research has found that averaging the judgments of small, politically balanced crowds of laypeople matches the accuracy as assessed by experts, to the same extent as the experts match each other…

However, results are more mixed when users can fact-check whatever content they choose. In early 2021, Twitter released a crowdsourced moderation programme called Birdwatch, in which regular users can flag tweets as misleading, and write free-response fact-checks that “add context”. Other members of the Birdwatch community can upvote or downvote these notes, to provide feedback about their quality. After aggregating the votes, Twitter highlights the most “helpful” notes and shows them to other Birdwatch users and beyond… a new study by our team found that political partisanship is a major driver of users’ engagement on Birdwatch. They overwhelmingly flag and fact-check tweets written by people with opposing political views. They primarily upvote fact-checks written by their co-partisans, and grade those of counter-partisans as unhelpful… Here’s the pleasant surprise, though: although our research suggests politics is a major motivator, most tweets Birdwatchers flagged were indeed problematic. Professional fact-checkers judged 86% of flagged tweets misleading, suggesting the partisan motivations driving people to participate are not causing them to indiscriminately flag counter-partisan content. Instead, they are mostly seeking out misleading posts from across the aisle. The two sides are somewhat effectively policing each other…

First, crowdsourced fact-checking can be a potentially powerful part of the solution to moderation problems on social media if deployed correctly. Second, channelling partisan motivations to be productive, rather than destructive, is critical for platforms that want to use crowdsourced moderation. Partisanship appears to motivate people to volunteer for fact-checking programmes, which is crucial for their success. Rather than seeking to recruit only the rare few who are impartial, the key is to screen out the small fraction of zealots who put partisanship above truth. Finally, more sophisticated strategies are needed to identify fact-checks that are helpful.

See also…

Twitter’s Birdwatch Community Notes open source algorithm and implementation

by Vitalik Buterin (Ethereum) 2023.08.15

Community Notes… might be the closest thing to an instantiation of “crypto values” that we have seen in the mainstream world. Community Notes are not written or curated by some centrally selected set of experts; rather, they can be written and voted on by anyone, and which notes are shown or not shown is decided entirely by an open source algorithm. The Twitter site has a detailed and extensive guide describing how the algorithm works, and you can download the data containing which notes and votes have been published, run the algorithm locally, and verify that the output matches what is visible on the Twitter site. It’s not perfect, but it’s surprisingly close to satisfying the ideal of credible neutrality

Of course, there is a totally different way in which [any] vote could have been manipulated: brigading. Someone who sees a note that they disapprove of could call upon a highly engaged community (or worse, a mass of fake accounts) to [downvote it], and it may not require that many votes to drop the note from being seen as "helpful" to being seen as "polarized". Properly minimizing the vulnerability of this algorithm to such coordinated attacks [could require an algorithm to] randomly allocate notes to raters…

[T]he Community Notes rating algorithm explicitly attempts to prioritize notes that receive positive ratings from people across a diverse range of perspectives. That is, if people who usually disagree on how they rate notes end up agreeing on a particular note, that note is scored especially highly… The goal of the algorithm is to create a four-column model of users and notes, assigning each user two stats that we can call “friendliness” and “polarity”, and each note two stats that we can call “helpfulness” and “polarity”. [Polarity is assigned to both users and notes.] The model is trying to predict the matrix as a function of these values, using the following formula:
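N.B. The formula referenced here is not reproduced in this excerpt; per the open-source Community Notes documentation, the matrix-factorization model can be written, in the terms above, roughly as:

```latex
\hat{r}_{un} = \mu + i_u + i_n + f_u \cdot f_n
```

where \hat{r}_{un} is the predicted rating that user u gives note n, \mu is a global intercept, i_u is the user’s intercept (“friendliness”), i_n is the note’s intercept (“helpfulness”), and f_u, f_n are the user and note factors (“polarity”). The parameters are fit by minimizing regularized squared error against the observed rating matrix, so a note only earns a high helpfulness intercept when its positive ratings cannot be explained away by the polarity term.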

Thus, selecting for helpfulness identifies notes that get cross-tribal approval, and selects against notes that get cheering from one tribe at the expense of disgust from the other tribe...

If many users (especially users with a similar polarity to the note) rate a note “Not Helpful”, and furthermore they specify the same “tag” (e.g. “Argumentative or biased language”, “Sources do not support note”) as the reason for their rating, the helpfulness threshold required for the note to be published increases from 0.4 to 0.5…

If a note is accepted, the threshold that its helpfulness must drop below to de-accept it is 0.01 points lower than the threshold that a note’s helpfulness needed to reach for the note to be originally accepted.
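N.B. A minimal sketch of how those two published rules (the tag-based threshold bump and the 0.01 hysteresis) compose; only the 0.4/0.5/0.01 numbers come from the algorithm described above, while the function names and scaffolding are my own:

```python
BASE_THRESHOLD = 0.4    # helpfulness a note normally needs in order to be shown
TAGGED_THRESHOLD = 0.5  # raised bar when many similar-polarity raters agree on a "Not Helpful" tag
HYSTERESIS = 0.01       # an accepted note is only dropped if it falls 0.01 below its entry bar

def acceptance_threshold(has_strong_not_helpful_tag: bool) -> float:
    return TAGGED_THRESHOLD if has_strong_not_helpful_tag else BASE_THRESHOLD

def note_is_shown(helpfulness: float, has_strong_not_helpful_tag: bool,
                  currently_accepted: bool) -> bool:
    threshold = acceptance_threshold(has_strong_not_helpful_tag)
    if currently_accepted:
        # hysteresis: keep showing an accepted note unless it drops below (threshold - 0.01)
        return helpfulness >= threshold - HYSTERESIS
    return helpfulness >= threshold

print(note_is_shown(0.42, has_strong_not_helpful_tag=False, currently_accepted=False))  # True
print(note_is_shown(0.42, has_strong_not_helpful_tag=True, currently_accepted=False))   # False
print(note_is_shown(0.395, has_strong_not_helpful_tag=False, currently_accepted=True))  # True
```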

Previously…

Thread: Twitter Birdwatch Community Notes

by Anthony Bardaro (@anthpb via Twitter) 2022.11.04

Passive news consumption is rising

by Nieman Lab 2023.06.26

In this year’s Reuters Institute Digital News Report, we unpack trends over time in global news participation to better understand whether having more means of digital participation [a la social media] has translated to greater actual participation among the public. Instead, across many markets, we find steady falls over time in open and active sharing alongside rises in passive consumption.

[W]e break news users into three groups:
• active participators, who post and comment about news;
• reactive participators, who read, like or share news stories; and
• passive consumers, who use news but do not participate with it

On average across 46 markets, we now find that less than a quarter of respondents (22%) actively participate in news — down a striking 11 percentage points since 2018. Meanwhile, growing numbers of news users participate reactively (31%, +6pp since 2018), and nearly half now do not participate at all (47%, +5pp)…

These trends remain consistent in the US, for instance, where the proportion of active participators (24%) is down 11 percentage points since 2016 and passive consumers now make up a majority (51%) of news users…

Across the board, we find steady decreases over time for most forms of online and offline participation…

However, even amid steady declines in other sharing and commenting behaviors, we find that one form of news participation has grown over time: Sharing via private messaging apps (up from 17% in 2018 to 22% in 2023). This is particularly pronounced in markets in regions with higher overall use of private messaging apps, such as Latin America, Southeast Asia, and Southern Europe — but it also maps onto broader uptake across all markets of platforms like WhatsApp (+9 points since 2018) or Telegram (+12 points).

Study: Tweaking the social media reward structure dramatically improves information quality control/assurance

by NiemanLab (Ian Anderson/Gizem Ceylan/Wendy Wood) 2023.08.08

Our research, presented at the 2023 Nobel Prize Summit, shows that social media actually has the ability to create user habits to share high-quality content. After a few tweaks to the reward structure of social media platforms, users begin to share information that is accurate and fact-based…

To investigate the effect of a new reward structure, we gave financial rewards to some users for sharing accurate content and not sharing misinformation. These financial rewards simulated the positive social feedback, such as likes, that users typically receive when they share content on platforms. In essence, we created a new reward structure based on accuracy instead of attention.

As on popular social media platforms, participants in our research learned what got rewarded by sharing information and observing the outcome, without being explicitly informed of the rewards beforehand. This means that the intervention did not change the users’ goals, just their online experiences. After the change in reward structure, participants shared significantly more content that was accurate. More remarkably, users continued to share accurate content even after we removed rewards for accuracy in a subsequent round of testing. These results show that users can be given [extrinsic] incentives to share accurate information as a matter of [intrinsically-reinforcing] habit.

A different group of users received rewards for sharing misinformation and for not sharing accurate content. Surprisingly, their sharing most resembled that of users who shared news as they normally would, without any financial reward. The striking similarity between these groups reveals that social media platforms encourage users to share attention-getting content that engages others at the expense of accuracy and safety…

In practice, social media companies might be concerned that changing user habits could reduce users’ engagement with their platforms. However, our experiments demonstrate that modifying users’ rewards does not reduce overall sharing. Thus, social media companies can build habits to share accurate content without compromising their user base…

An accuracy-based reward structure could help restore waning user confidence. Our approach, using the existing rewards on social media to create incentives for accuracy, tackles misinformation spread without significantly disrupting the sites’ business model. This has the additional advantage of altering rewards instead of introducing content restrictions, which are often controversial and costly in financial and human terms.

Implementing our proposed reward system for news sharing carries minimal costs and can be easily integrated into existing platforms. The key idea is to provide users with rewards in the form of social recognition when they share accurate news content. This can be achieved by introducing response buttons to indicate trust and accuracy. By incorporating social recognition for accurate content, algorithms that amplify popular content can leverage crowdsourcing to identify and amplify truthful information.
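N.B. To make the “response buttons” idea concrete, here is a minimal, hypothetical sketch (mine, not the study’s) of an engagement-ranking score that folds a crowdsourced accuracy signal in alongside likes and shares; the field names and weights are arbitrary placeholders:

```python
from dataclasses import dataclass

@dataclass
class PostSignals:
    likes: int
    shares: int
    trust_votes: int     # users pressing a hypothetical "accurate/trustworthy" button
    distrust_votes: int

def ranking_score(s: PostSignals, accuracy_weight: float = 2.0) -> float:
    """Blend a conventional engagement score with a crowdsourced accuracy signal,
    so that accurate posts (not merely attention-getting ones) get amplified."""
    engagement = s.likes + 2 * s.shares
    votes = s.trust_votes + s.distrust_votes
    accuracy = (s.trust_votes / votes) if votes else 0.5  # neutral prior with no votes
    return engagement * (1 + accuracy_weight * (accuracy - 0.5))

print(ranking_score(PostSignals(likes=100, shares=10, trust_votes=40, distrust_votes=10)))  # boosted
print(ranking_score(PostSignals(likes=100, shares=10, trust_votes=5, distrust_votes=45)))   # demoted
```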

👆Check out the easiest way for you to get informed or inform others 👆


“Perfection is achieved not when there is nothing more to add, but when there is nothing left to take away...” 👉 http://annotote.launchrock.com #NIA #DYODD