The American Social Credit System

Its goal transcends the political: enthroning tech elites as the gatekeepers and arbiters of information

Bedivere Bedrydant
America First
15 min read · Dec 14, 2020

Both China and Google want to annihilate anonymity and map internet networks in order to assess individual reliability.

How does Google work? For a lot of people, that’s like asking how the internet works; after all, for many users, Google basically is the internet. They type their query into the address bar or Google’s search page and assume they’re getting the “correct” result at the top of the search engine results page (SERP) or, barring that, somewhere on the first page.

This is not just an academic question. Thousands of marketers and advertisers spend their waking hours thinking about it. It is a live worry for small businesses, for whom a couple of bad reviews (legitimate or not) can stain their reputation. And it is a serious concern for anyone applying for a job; after all, every employer Googles potential new hires. Will that search turn up your dirt? Will it be accurate? A Google search can change a life.

In reality, Google is not “the internet.” It is merely a map of the internet. And like all maps, in order to be useful, it has to be condensed and abridged; certain things get left out while others are enlarged or spelled out. That is simply the price to pay when making a map.

So it is a testament to the effectiveness of the Google search algorithm that so many people do think that “Google = Internet.” Imagine someone opening up an atlas and then going outside to try to find a line of longitude on the ground: they think the atlas is the earth (I have a faint memory of trying to do this in elementary school). So it is with the naïve Google user who imagines they’re seeing the actual internet when they look at Google SERPs.

The Origins of Internet Search

But this was not inevitable. The first search engines were bad: they measured a web page’s relevance to a given search primarily by “keyword density,” that is, how often the search terms appeared on the page. Imagine that, back in the late ’90s, you wanted to cook butternut squash soup. You connected your computer to the internet over dial-up and typed “butternut squash soup recipe” into the search bar. You can imagine, given the incentives of such a system, what kinds of pages might pop up.

After all, if the frequency of the phrase “butternut squash soup recipe” is the measure of relevance, according to this primitive search engine, you might not actually get many recipes on the SERP. How many recipes actually use the phrase “butternut squash soup recipe”? Maybe that’s the title of the page. But you wouldn’t repeat that phrase in step one, step two, or any of the steps at all! An internet prankster, however, could write a page that just repeats the phrase “butternut squash soup recipe” one hundred times with no other text — and no actual recipe to speak of — and our primitive search engine would judge that to be the most relevant page.
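
To see how easy that system was to game, here is a minimal sketch of a keyword-density ranker; the scoring formula and both pages are hypothetical, but the incentive problem is the same one the prankster exploits.

```python
# A toy keyword-density ranker: a page's score is simply how often the exact
# query phrase appears, divided by the page's length in words.

def keyword_density(page_text, query):
    words = page_text.lower().split()
    if not words:
        return 0.0
    return page_text.lower().count(query.lower()) / len(words)

query = "butternut squash soup recipe"
pages = {
    "real_recipe": (
        "Butternut squash soup recipe. Peel and cube the squash, roast it, "
        "simmer with stock and onion, then blend until smooth."
    ),
    "spam_page": " ".join(["butternut squash soup recipe"] * 100),
}

# The spam page, which contains no recipe at all, wins by a wide margin.
for name, text in pages.items():
    print(name, round(keyword_density(text, query), 3))
```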

On January 9, 1998, however, Lawrence “Larry” Page filed a patent for what is now known as “PageRank”: a new search engine algorithm that ranks web pages based not on keyword density, but on the number and quality of “citations,” that is, hyperlinks (a.k.a. backlinks), pointing to the page. Every backlink works like a letter of recommendation for that page, and the pages with more, and higher-quality, letters of recommendation win. Google was born, and with it a civilization-level mission to refashion our access to information.
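
Here is a minimal sketch of that idea, assuming a tiny, invented link graph and the conventional damping factor of 0.85; this is the textbook formulation, not Google’s production system.

```python
# Toy PageRank: every page starts with equal rank, then repeatedly passes its
# rank along its outbound links. The link graph and site names are hypothetical.

def pagerank(links, damping=0.85, iterations=50):
    pages = list(links)
    n = len(pages)
    ranks = {page: 1.0 / n for page in pages}
    for _ in range(iterations):
        new_ranks = {page: (1.0 - damping) / n for page in pages}
        for page, outlinks in links.items():
            if outlinks:
                share = damping * ranks[page] / len(outlinks)
                for target in outlinks:
                    new_ranks[target] += share
            else:  # a page with no outlinks spreads its rank evenly
                for target in pages:
                    new_ranks[target] += damping * ranks[page] / n
        ranks = new_ranks
    return ranks

# recipe_blog is "recommended" by both other active sites, so it ranks highest;
# spam_page, which nothing links to, ranks lowest.
graph = {
    "food_magazine": ["recipe_blog"],
    "home_cook_forum": ["recipe_blog"],
    "recipe_blog": ["food_magazine", "home_cook_forum"],
    "spam_page": [],
}
print(sorted(pagerank(graph).items(), key=lambda item: -item[1]))
```

In this toy graph, the page nothing links to ends up at the bottom of the list: no letters of recommendation, no rank.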

Google, however, is more than the PageRank algorithm. To speak of “the Google algorithm” is really to speak of many algorithms. PageRank is Google’s most well-known algorithm, but it is certainly not the only one.

Just over a year and a half after the PageRank algorithm patent was filed, on October 15, 1999, an Indian research scientist, Krishna Bharat, and a Romanian computer scientist, George Mihaila, filed their own search algorithm patent. Rather than attempting to map out the entire internet and measure the backlink quality of every page, Bharat and Mihaila’s algorithm narrowed the field, identifying “expert documents” that were then assessed for the quality of their backlink networks. This “expert”-oriented algorithm would later be named “Hilltop.”

If PageRank maps out backlink networks to measure page quality in a way similar to reviewing letters of recommendation, Hilltop looks at the resumes of your recommenders to see if they have any right to be recommending you for the job in question. If it seems to you like these two algorithms complement each other, Google would agree. They bought Hilltop in 2003.
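
A rough sketch of that two-step idea follows; the “expert” test used here (a topical title plus several distinct outbound links) is a stand-in for Hilltop’s actual criteria, and the corpus is invented.

```python
# A rough Hilltop-style ranker: first select "expert documents" for the query,
# then rank target sites by how many distinct experts link to them.
# The expert test below is a simplification for illustration only.

def hilltop_rank(pages, query):
    experts = [
        page for page in pages
        if query in page["title"].lower() and len(set(page["outlinks"])) >= 3
    ]
    scores = {}
    for expert in experts:
        for target in set(expert["outlinks"]):
            scores[target] = scores.get(target, 0) + 1
    return sorted(scores.items(), key=lambda item: -item[1])

# Hypothetical corpus: two topical "expert" pages both point at soup-site.example.
pages = [
    {"title": "Best soup recipe sites",
     "outlinks": ["soup-site.example", "broth.example", "stock.example"]},
    {"title": "Soup recipe roundup",
     "outlinks": ["soup-site.example", "chowder.example", "bisque.example"]},
    {"title": "My homepage", "outlinks": ["soup-site.example"]},  # not an expert
]
print(hilltop_rank(pages, "soup recipe"))
```

The homepage’s link counts for nothing because it never qualified as an expert, which is exactly the “check the recommender’s resume” step in the analogy above.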

Ranking Internet Authors

Clearly, identifying and rewarding expertise was one of the central missions of Google’s algorithm development from the very beginning. But acquiring Hilltop was only a first step. In 2005, Google filed a patent for “Agent Rank,” a method of identifying and scoring the expertise of the author of a given web page. Just because Google files a patent for something, however, does not mean it is used in its algorithm. But something very interesting happened around the same time the Agent Rank patent was filed: Google started rolling out social networks.

They were each a bust: Orkut in 2004, Dodgeball (launched in 2004 and acquired by Google in 2005), Jaiku (launched in 2006 and acquired by Google in 2007), Wave in 2009, Buzz in 2010, and Google+ in 2011.

The reasons to build or acquire social networks are, on one level, obvious: Google was looking for a “Facebook-killer.” But there was more to it than that, as the history of Google+ and its relationship to Google’s bread-and-butter business of internet search attests.

Google+ took a lot of heat for enforcing its “real name” policy even more strictly than Facebook did. Here’s an excerpt from a Reuters article from 2011, back when Google+ was launched:

Google senior vice-president Vic Gundotra, the man who is in charge of Google’s social efforts, said in response to a post by blogger Robert Scoble that Google doesn’t necessarily want to force people to use only their legally given names — he says the web company is fine with users setting up accounts under “commonly used” names, although it’s not clear how this is defined. This would presumably cover celebrity users like 50 Cent or Lady Gaga (Gundotra noted that even he doesn’t use his legal name on Google+). The Google executive said he simply wants to maintain a “positive tone” on the network, and compared it to requiring people to wear shirts in a restaurant.

The motivation for the social network’s real name policy was not solely internal to the Google+ user experience, however. It was part of Google’s vision for internet search, and for the internet generally, as Eric Schmidt’s words and the Google search algorithm proved shortly after Google+ was launched.

In 2013, Eric Schmidt, then Executive Chairman of Google, published his co-written book The New Digital Age. In it, he wrote:

Within search results, information tied to verified online profiles will be ranked higher than content without such verification, which will result in most users naturally clicking on the top (verified) results. The true cost of remaining anonymous, then, might be irrelevance.

As Danny Sullivan (then a journalist and now a Google employee) wrote at the time in Search Engine Land, Schmidt was not describing the current Google algorithm.

“I’ve read and heard people cite this as proof Google is already doing some type of ‘Author Rank,’” Sullivan said.

“Not in that fashion.”

But, if it were to happen, what would such an ‘Author Rank’ look like — how would it be compiled? Schmidt elaborated in his book:

Some governments will consider it too risky to have thousands of anonymous, untraceable and unverified citizens — “hidden people”; they’ll want to know who is associated with each online account, and will require verification, at a state level, in order to exert control over the virtual world.

Your online identity in the future is unlikely to be a simple Facebook page; instead it will be a constellation of profiles, from every online activity, that will be verified and perhaps even regulated by the government.

Imagine all of your accounts — Facebook, Twitter, Skype, Google+, Netflix, New York Times subscription — linked to an “official profile.”

Schmidt pins the need for this comprehensive social profile on the desire of states to “exert control” over the internet. But it is clear that the extermination of internet anonymity is not merely the goal of authoritarians; it is Schmidt’s own goal, couched, tellingly, in the mouths of the hypothetical dictators of “some governments.”

How do we know? First of all, because of the author-identity tie-ins between Google+ and the Google search algorithm. But secondly, and more importantly, because Google rolled out Author Rank less than 12 months after Schmidt’s book came out, as Google employee Matt Cutts publicly confirmed in March 2014. Author Rank is a numerical score, on a scale of 1 to 10, that ranks an article author’s reliability. Clearly, the quest to identify individuals’ reliability using machine learning was on. The leak of Google’s Quality Rater Guidelines a few months later, in July 2014, confirmed this in a big way.
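
Before turning to those leaked guidelines, it is worth picturing what a score like Author Rank could do to search results. Google has never published how (or even whether) such a score is applied, so the weighting, authors, and URLs in the sketch below are all invented.

```python
# Purely illustrative: blending a hypothetical 1-10 author-reliability score
# into page ranking. The 50/50 weighting and the scores themselves are invented.

AUTHOR_SCORES = {"verified_expert": 9, "anonymous_blogger": 2}  # hypothetical

def rerank(results, author_weight=0.5):
    def blended(result):
        # Scale the 1-10 author score to 0-1 and mix it with page relevance.
        author_score = AUTHOR_SCORES.get(result["author"], 1) / 10
        return (1 - author_weight) * result["relevance"] + author_weight * author_score
    return sorted(results, key=blended, reverse=True)

results = [
    {"url": "anon-blog.example/post", "author": "anonymous_blogger", "relevance": 0.9},
    {"url": "expert-site.example/article", "author": "verified_expert", "relevance": 0.7},
]
# The less relevant page by the "reliable" author now outranks the anonymous one:
# the "true cost of remaining anonymous" Schmidt described.
print([result["url"] for result in rerank(results)])
```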

“Quality Rater Guidelines” (QRGs) refers to the internal document, periodically revised and re-released, that Google issues to its “Quality Raters.” These are people who comb through websites and assign them quality scores. These Quality Raters don’t make the algorithm; the scores they give sites do not directly affect how a site appears in Google searches. The Quality Raters, rather, are the human “intelligences” that inform the artificial intelligence of the Google algorithm. Google engineers compare the performance of the search algorithm to the ratings that the Quality Raters give to different websites: If for a given search the algorithm returns web pages that the Quality Raters have given low scores while the algorithm neglects pages with high scores, the engineers will know the algorithm needs to be tweaked.
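
A simplified sketch of that feedback loop, with invented scores, thresholds, and URLs: the raters’ scores never touch the live rankings directly, but they tell engineers which queries the algorithm is getting wrong.

```python
# Compare what the algorithm ranked highly against what human raters scored.
# All scores, thresholds, and URLs here are invented for illustration.

rater_scores = {  # human-assigned quality scores, higher is better
    "lowquality.example": 1,
    "mediumsite.example": 3,
    "authoritative.example": 5,
}

def needs_tweaking(algorithm_ranking, rater_scores, top_n=2, threshold=3.5):
    """Flag a query when the algorithm's top results average a low rater score."""
    top = algorithm_ranking[:top_n]
    average = sum(rater_scores.get(url, 0) for url in top) / top_n
    return average < threshold

# The algorithm put a low-rated page first for this query, so it gets flagged.
ranking = ["lowquality.example", "mediumsite.example", "authoritative.example"]
print(needs_tweaking(ranking, rater_scores))  # True
```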

How do the Quality Raters determine what makes a site high or low quality? The QRGs spell it out in detail (the most recent version of the QRGs is 175 pages long). You can understand, then, how important a document it is. It is a qualitative description of what the Google search algorithm is looking for.

The July 2014 leak showed that only a few months after Matt Cutts confirmed that Google was using Author Rank in its algorithm, Google also completely rewrote its QRGs from the ground up (rather than merely revising them, as it usually does). This rewrite, in keeping with the intensification of Google’s quest to systematically rank individual expertise and reliability, introduced a new concept, “E-A-T” (which stands for “Expertise, Authoritativeness, Trustworthiness”), that has remained a cornerstone of its search algorithm ever since.

As Jennifer Slegg summarized it back when the new 2014 QRGs were leaked:

Google’s brand new emphasis in the new Quality Rater’s Handbook is the idea of E-A-T, which is a website’s “expertise, authoritativeness and trustworthiness”.

Likewise, Google is stressing that sites that lack expertise, authoritativeness and trustworthiness should be awarded the Low rating when a page or site is being assigned a rating by one of their quality raters. And more importantly, Google says that lacking a certain amount of E-A-T is enough of a reason for a rater to give any page a low quality rating.
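
In code, the rule Slegg describes is blunt: however the three qualities are judged, falling short on E-A-T is by itself enough for a Low rating. The numeric scale and cutoffs below are invented; the guidelines describe all of this qualitatively, not numerically.

```python
# A toy version of the rating rule: lacking E-A-T is, on its own, grounds for Low.
# The 0-1 attribute scores and thresholds are invented for illustration.

def assign_rating(page):
    eat = (page["expertise"] + page["authoritativeness"] + page["trustworthiness"]) / 3
    if eat < 0.4:  # "lacking a certain amount of E-A-T" -> Low, regardless of content
        return "Low"
    return "High" if eat > 0.7 else "Medium"

anonymous_essay = {"expertise": 0.2, "authoritativeness": 0.3, "trustworthiness": 0.3}
credentialed_piece = {"expertise": 0.9, "authoritativeness": 0.8, "trustworthiness": 0.8}
print(assign_rating(anonymous_essay))     # Low
print(assign_rating(credentialed_piece))  # High
```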

The Meaning of Internet Expertise

Why does this matter? What does it have to do with anything besides improving the quality of your internet search results?

It matters because this is not just a consumer-oriented move.

Shortly after Author Rank was confirmed and the E-A-T guidelines written, in October 2014, Richard Gingras, then the Senior Director for News and Social Products at Google, teamed up with journalist Sally Lehrman to announce a new initiative, The Trust Project. Citing widespread dissatisfaction with the news media, Gingras and Lehrman asked some questions:

How does a reader decide whether one news article, image or video is more trustworthy than another? How does she determine the motivation of the publisher, the expertise of the journalist, the degree of vetting that preceded publication?

It’s time to consider new approaches. Can serious news outlets find ways to establish trust beyond relying on the reader to divine their reputation?

Readers are not reliable. It is time to let the bots decide.

The Trust Project was only one among many Google initiatives to defeat “disinformation.” Project Owl, launched in 2017, was another. In February 2019, Google released a “Fighting Disinformation” white paper at the Munich Security Conference. Google has a mission that transcends consumer tech. It transcends politics or tech policy. It is a civilizational mission that transcends the political: to enthrone tech elites as the gatekeepers and arbiters of information and access to it.

The coronavirus pandemic has shown us what perspicacious observers already knew: The tech companies wish to move all of human life into virtual life. Coronavirus is not the cause of this, only the acceleration and the unveiling of it. Whether it is friendship, commerce, sex, religion, politics, medicine, therapy, food or alcohol consumption, it does not matter — the story is the same. Like sheep being driven into a fold, the various aspects of our lives are being driven out of the public square into the private fold of our home lives.

And yet at the same time, our private lives are, like backpacks full of contraband, being turned inside out: Everything hidden inside the home is exposed to the virtual gaze of internet search, digital marketers, app developers, and website analytics. In shepherding our lives into our homes, the tech companies have abolished the public-private distinction simply by abolishing the public square and making everything private. Simultaneously, however, everything private has been made virtual, and its privacy exposed and thus destroyed. Thus, tech elites will have a controlling hand in every aspect of our lives, since our lives have been “virtualized.” Our own relationships to friendship, commerce, sex, religion, medicine et al. are mediated by information; moderate the information, and you moderate life itself.

This is not new. The technological form it takes is, but elites have wanted to turn the private lives of citizens inside out for a while, at least as far back as the Soviet cult of Pavel Morozov, the “martyr” boy who informed on his own father to the Soviet authorities and was killed by his family in retaliation. The Soviet regime lionized him and encouraged other children to follow his example and inform on their parents for infractions against communism.

Success in this techno-Morozovian mission requires, however, that the tech companies know our names when we log on. As Eric Schmidt said, no one can be anonymous. When he said it, he pretended to be speaking for fearsome governments; he was actually speaking for himself.

Social Credit With Chinese Characteristics

While he was being disingenuous, Schmidt was not wrong. Authoritarian governments do not like internet anonymity, and they are drawn to the internet’s unprecedented power to abolish the public-private distinction, a distinction they find problematic because both of those spheres give citizens the means to resist state power.

In the public square, citizens can organize themselves in various associations and generate power that opposes or subverts that of the state and other elite institutions. In the private sphere, citizens can retire from the gaze of institutions, state or otherwise, and do the things and think the thoughts that are otherwise verboten. As we have seen, the internet offers elites an awesome opportunity: destroy the public square by moving all of life into the private sphere, and then turn the private sphere inside out and expose it to the virtual gaze.

China has understood this for a long time. How long, exactly?

Almost as long as Google has existed.

In 1999, just as Larry Page and Sergey Brin were moving out of the garage and into a real office, the same year that the Hilltop patent was filed, Lin Junyue published “The National Credit Management System,” the founding document of China’s Social Credit System (SCS).

Market research firm Trivium China summarizes Lin’s work this way:

He suggested a data-driven platform which would collect financial and behavioral data on companies and individuals, and which would be underpinned by a rewards and punishments mechanism to enforce accountability.

Though Lin had a lot to say about credit as it relates to individuals, he wasn’t so much focused on moral and civic “uprightness” as he was on honesty as the underpinning of a healthy market environment. Interestingly enough, the SCS as it exists today is almost identical to the one Lin proposed almost 20 years ago.

The idea, of course, long preceded the execution. Real action on a Chinese social credit system did not begin until 2014. Interestingly, and certainly coincidentally, that was the same year that Google started to make good on Schmidt’s fantasy of annihilating internet anonymity with the addition of Author Rank to their search algorithm, the formulation of E-A-T in the rewritten QRGs, and the unveiling of The Trust Project.

That same year, in China, the “State Council Notice concerning Issuance of the Planning Outline for the Establishment of a Social Credit System” was published.

If Lin’s paper was the philosophical blueprint for the SCS, the State Council Notice was the policy blueprint. It laid out, across a variety of spheres, how China would proceed to build it.

Its basic structure, as proposed, was to build a master database of individuals and businesses and to create “blacklists” of “irresponsible” individuals, who would be deprived of privileges ranging from the political (e.g. banned from travel) to the financial (e.g. unable to get a loan) to the personal (e.g. publicly shamed). It would also create “redlists” (red = good), granting favorable treatment to “responsible” individuals.
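
An entirely hypothetical model of that structure makes the point: what matters is not a single score but which list you are on, and which privileges that list opens or closes. The privileges, names, and rules below are invented to mirror the structure described above.

```python
# Illustrative only: list membership, not a numeric score, gates privileges.

BLACKLIST_DENIALS = {"travel_booking", "loan_application"}
REDLIST_PERKS = {"fast_track_permits", "reduced_deposits"}

class CreditRegistry:
    def __init__(self):
        self.blacklist = set()
        self.redlist = set()

    def can_access(self, person, privilege):
        if person in self.blacklist and privilege in BLACKLIST_DENIALS:
            return False                   # blacklisted: privilege denied
        if privilege in REDLIST_PERKS:
            return person in self.redlist  # perks reserved for the redlist
        return True                        # everyone else: ordinary access

registry = CreditRegistry()
registry.blacklist.add("citizen_a")
registry.redlist.add("citizen_b")
print(registry.can_access("citizen_a", "travel_booking"))      # False
print(registry.can_access("citizen_b", "fast_track_permits"))  # True
print(registry.can_access("citizen_a", "fast_track_permits"))  # False: not redlisted
```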

As for the internet, China had the same goals as Eric Schmidt: annihilate anonymity and use real names and algorithmic mapping of internet networks in order to assess individual reliability.

Jeremy Daum has put together a helpful “map” of the 2014 State Council Notice. Here is the actual text of the Notice that lays out the goals of the SCS vis-à-vis the internet, which Daum’s chart maps (emphasis mine):

Establishing credit in the field of Internet use and services. Forcefully advance the establishment of online creditworthiness, foster the ideas of handling the Internet in accordance with law and creditworthy use of the Internet, gradually implementing the online real-name system, improving legal safeguards for the establishment of online credit, and forcefully advancing the establishment of online credit supervision and management mechanisms. Establish online credit evaluation systems, conduct credit assessment of internet enterprises’ service operations behavior and the online conduct of people online, and record credit levels. Establish network credit files that cover Internet enterprises and online individuals, actively advance the establishment of mechanisms for exchanging and sharing relevant network credit information and with other areas of society, and forcefully promoting the widespread use of network credit information in all areas of society. Establish systems for network credit blacklists, include enterprises and individuals that engage in online fraud, rumor-mongering, infringement of other persons’ lawful rights and interests, and other seriously untrustworthy network conduct in black lists; and employ measures such as restricting online conduct and barring access to industries against those entered on the black lists, and report them to relevant departments for disclosure and exposure.

Interesting, isn’t it, how similar the State Council Notice sounds to PageRank, Hilltop, Author Rank, E-A-T, and the Trust Project?

Social Credit With American Characteristics

There are simply too many parallels between China’s Social Credit System and Google to ignore.

Most people probably imagine social credit simply as assigning each citizen a single numerical score, like an Experian credit score. Such numerical ranking does exist in certain places in China and within certain sectors of commercial life, but a single number is not really the goal or the point of the Chinese SCS. The real structure of the SCS consists in the blacklists and redlists, as discussed above.

It is not part of Google’s mission to assign each American a single number that alone determines their access to the various services of everyday life. But neither does China’s system do that. Google does, however, have a numerical ranking for internet authors (i.e., Author Rank). Thus, your ability to speak to your fellow citizens on digital platforms is determined, at least in part, by Google’s secretive ranking of your personal reliability.

Google also censors content and uses the equivalent of blacklists. Dissidents on the Right, like the Claremont Institute, as well as dissidents on the Left, like The Bellows, have been prevented from using Google Ads to promote otherwise innocuous content.

YouTube, a Google subsidiary, announced only a few days ago that it is henceforth banning any questioning of the integrity of the 2020 election. And conservatives have long complained of ideological bias in Google search results.

But complaining about Google’s “bias” is missing the point. When Google punishes dissidents, that is not an example of bad actors or a “bug” in the system. Rewarding reliability and punishing the wayward — as determined by the Google algorithm — is the whole point of the enterprise, and always has been. So if dissidents on the right or the left say something “unreliable,” subsequent Google censorship is not a flaw; it is an example of the algorithm working as designed.

As the public square is eroded and pushed into the private sphere, and as your private life is transformed into a virtual life, expect Google’s reach to extend further into your life and its doling out of rewards and punishments to intensify (can you say “vaccine card”?). It has already been happening to you, unawares: employers searching your name, consumers looking at your restaurant’s Yelp reviews, your favorite writers being promoted or “shadowbanned” based on their algorithmic “reliability,” and countless other ways in which decisions about access to information have been handed over to tech elites and their bots. If this sounds dystopian, anti-democratic, and un-American, well, you are right.

Google is, after all, America’s very own Social Credit System.

Bedivere Bedrydant
America First

Sir Bedivere is a technology executive in the Western United States.