The internet is missing the point, as usual

Michael Marinaccio
People Over Product
4 min read · Jun 11, 2016

--

Background: SourceFed came out with a video (below), blah blah blah, insinuating that Google is manipulating search results for Hillary. Robert Epstein has written about this concern over and over. But we’re missing Epstein’s point, as usual. The issue is not whether Google is manipulating anything; the more concerning possibility is that they are not.

Here is an excerpt from a recent piece I wrote outlining my core worry behind Google search results:

To suspect that Facebook or Google would seek to censor or prioritize certain lines of thinking misunderstands their core philosophy. In reality, they don’t have a stake in either side. Their philosophy does not aim to improve, change, or manipulate human thinking.

Their philosophy aims to replace human thinking — and charge you for it.

Rhea Drysdale is correct in her assertion that SourceFed knows little about the complexities of Google Search, Autocomplete, and the AI (let’s call them what they are: rules) that are currently powering search customization. But the fact remains that there is still a nasty caveat in Google’s statement:

Our Autocomplete algorithm will not show a predicted query that is offensive or disparaging when displayed in conjunction with a person’s name. More generally, our autocomplete predictions are produced based on a number of factors including the popularity of search terms.

“A number of factors.” To my point, and perhaps SourceFed’s point, the question is not “are people at Google consciously manipulating search?” — because anyone who understands Google or the work they do knows they are not — but rather “what the shit is going on with this particular search, and with search in general?”

http://techaeris.com/2016/06/11/infographics-map-google-controls-opinion/

And I believe that question deserves an answer, or at least some level of transparency. If machine learning is tailoring results to our individual textual desires, is there an unconscious, unbiased, unintended lift for certain kinds of persuasion as a result of ordinal rearranging?

More and more research, from Epstein and others, says yes. It shows that unconditional trust in complex mechanisms like GPS and Google Search is causing persuadability to skyrocket: literally sending drivers off bridges, and narrowing the information we ingest to the top few results, as Epstein argues:

[Google] is so good, in fact, that about 50% of our clicks go to the top two items, and more than 90% of our clicks go to the 10 items listed on the first page of results; few people look at other results pages, even though they often number in the thousands, which means they probably contain lots of good information.
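Epstein's point can be made concrete with a toy click distribution. The per-position shares below are invented for illustration, chosen only so the aggregates match his quoted figures (roughly 50% of clicks on the top two results, more than 90% on the ten first-page results):

```python
# Hypothetical per-position click shares for one page of ten search results.
# These numbers are made up; only the aggregates track Epstein's quote.
click_share = [0.30, 0.20, 0.12, 0.08, 0.06, 0.05, 0.04, 0.03, 0.02, 0.02]

top_two = sum(click_share[:2])
first_page = sum(click_share)
print(f"Top two results: {top_two:.0%} of clicks")     # → 50%
print(f"First page:      {first_page:.0%} of clicks")  # → 92%

# The persuasion concern in miniature: moving an item from position 5 to
# position 1 shifts its expected attention by 0.30 - 0.06 = 0.24 of all
# clicks, even though nothing about the content changed -- only the order.
```

Under any distribution this steep, reordering alone redistributes attention, which is exactly the "ordinal rearranging" effect at issue.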


Let me reiterate: I am not here to blame Google for manipulation. The previous author is exactly correct: that concept is both silly and absurd.

My chief concern is WHAT search (and autocomplete) is doing on a macro level and how the granular rules that govern search results affect major national events, such as presidential elections.

In a society where Google has become synonymous with knowledge (when it should only be synonymous with information), it is high time we received some bold transparency on:

1. What are all the rules for search?

2. If search tailoring is causing shifts in persuasion, how dramatic is the effect in a macro sense?

Ironically, you can Google a lot of this information, as it has been openly discussed in many places. But until there is a central, transparent place for individuals to openly access search experiments, version logs, and case studies, we will not know the extent of the unintended effects this massive information engine is generating. If watershed outcomes are being changed without our knowledge, we deserve to know how.

If you like what you read be sure to ♥ it below. Stay in touch by subscribing to my newsletter or following me on Twitter.
