Using information about people to manipulate them is nothing new. Psychologists refer to theory of mind as the capacity to attribute various mental states to others. Once we begin developing a theory of mind, using information we’ve acquired on others to our advantage follows very quickly. While this isn’t a uniquely human ability, our capacity for language and large brains have enabled us to take far greater advantage of it than any other species on the planet.
One indicator of a theory of mind is the ability to deceive. Animals do it all the time. In some cases deception is actually hardwired into a creature’s biology by evolution. Even among insects and plants we can find examples of false signals designed to tell potential predators that they are poisonous when in fact they are not. But when it comes to skulduggery, humanity can reach levels of sophistication other species couldn’t even begin to imagine, let alone implement.
The latest example of the use of information mined from our social environment and exploited for nefarious purposes involves the use of data gathered on around 50 million Facebook users by Cambridge Analytica, a company specializing in targeting voters and consumers on behalf of clients in order to “move them to action.”
If you had just arrived from Mars you might be forgiven for thinking that perhaps the London-based data firm with academic ties to one of England’s best known universities was the first ever to seriously undertake an effort to intentionally manipulate millions of people without either their knowledge or consent. However, such manipulation has been playing an increasingly overt role in our society since the early 20th century.
From Madison Avenue to political capitals around the world, psychology’s latest ideas regarding why people believe and behave the way they do have been a source of increasing fascination since at least World War I. After all, nothing requires a good sales pitch more than a war being fought for reasons as opaque as the blood-tinged mud of the Somme and Verdun.
In his book How Propaganda Works, the philosopher Jason Stanley describes propaganda’s appeal this way:
Propaganda is not simply closing off rational debate by appeal to emotion: often emotions are rational and track reasons. It rather involves closing off debate by ‘emotions detached from ideas.’ According to these classical characterizations of propaganda, formed in reflecting upon the two great wars of the twentieth century, propaganda closes off debate by bypassing the rational will…Propaganda is manipulation of the rational will to close off debate.
Behaviorism was among the first non-Freudian theories to emerge in the developing field of psychology. It isn’t so much closed off to ideas as it is tailor-made to advance any notion that happens to come along, without concern for either its validity or its ethical implications. Behaviorism’s founding father, John Watson, was a living testament to the amoral character of the doctrine of human nature he promoted. His experiments could be downright cruel, but in his mind making his point seemed to justify the means. Though not itself a form of propaganda, behaviorism’s linear, mechanistic notions of human motivation made it the perfect psychological theory for governments and industries alike as they increasingly sought “scientific” means of mass manipulation.
Unlike the Freudians and Jungians preceding him, Watson saw people as scaled up and somewhat more sophisticated versions of Pavlov’s dogs. Perhaps we didn’t salivate as obviously when we heard the proverbial bell ring, but our responses to stimuli were typically no less conditioned. More importantly from the perspective of advertisers, politicians, intelligence agencies and other interested parties, Watson’s theory of human nature rendered us predictable and came without the messy and often baffling interpretations of the human psyche that men like Freud and Jung were known for.
To demonstrate that humans are ultimately indistinguishable from Pavlov’s famous canines, Watson experimented on an 11-month-old dubbed “Little Albert.” This unsuspecting infant was conditioned to fear rats, though it turned out the test had other effects beyond what even Watson could have anticipated. In The Attention Merchants, the media and technology writer Tim Wu describes the “Little Albert” experiment as follows:
“[Watson induced the phobia of rats] by striking a metal bar with a hammer behind the baby’s head every time a white rat was shown to him. After seven weeks of conditioning, the child, initially friendly to the rodent, began to fear it, bursting into tears at the sight of it. The child, in fact, began to fear anything white and furry — Watson bragged that ‘now he fears even Santa Claus.’”
It should be obvious why behaviorism holds considerable appeal for the advertising industry and for professional political campaigners eager to find a shortcut to the hearts and minds of the voting public. If we can in fact be conditioned to respond to a particular message or signal by buying a specific product or voting a certain way, the person or firm that finds the best means of conditioning the most people will literally make themselves rich selling this service to the highest bidder.
What gets consistently overlooked to this day is the fact that “Little Albert” didn’t just develop a phobia of rats, but of other things as well. In poor Albert’s mind harmless rabbits and benevolent if fictitious characters like Santa had enough similar fuzzy qualities to induce anxiety. In other words, Watson didn’t so much prove that conditioning works on people — or at least people in the very early stages of emotional and cognitive development — as demonstrate that conditioning produces all sorts of unintended responses in addition to the intended one. This potentially leaves behaviorism’s predictive power as watered down and ineffectual as a homeopathic remedy. It also raises a number of thorny ethical questions regarding its application to both individuals and large groups.
The fact that all behaviorism ultimately demonstrates is that, under the “right” circumstances, people will begin to associate two or more otherwise unrelated things with each other hasn’t kept it from having a powerful placebo effect on corporations and candidates taken in by the appeal of simplistic, formulaic approaches to human complexity. It is precisely this kind of appeal that Cambridge Analytica was able to take advantage of.
Give me a dozen healthy infants, well formed, and my own specified world to bring them up in and I’ll guarantee to take any one at random and train him to become any type of specialist I might select — doctor, lawyer, artist, merchant-chief and yes, even beggar-man thief, regardless of his talents, penchants, tendencies, abilities, vocations, and race of his ancestors. ~ John Watson
Cambridge Analytica’s work on the Trump campaign is a clear example of how data-driven marketing techniques can change behavior in target populations. Applied to the commercial sector, these techniques can strategically engage your key audiences, improving conversion rates and boosting sales. ~ Cambridge Analytica’s website
For quite some time the news has been full of stories about social media’s ability to provide insights into the human condition we otherwise wouldn’t have. By now we’ve all heard or read about the potential for Google search trends to reveal everything from pending flu pandemics to our secret sexual desires and hang-ups. These stories have convinced much of the public, as well as industry, governments, and other institutions, of social media’s power as an analytical tool.
It’s not that Google searches don’t say something about us. It’s just that virtually everything we do says something about us. To really get to the heart of the matter we must address salience and context in addition to correlation. That requires real research and that kind of effort requires money. That’s why so few are willing to engage in truly meaningful ways with the data social media captures.
Here are just a few of the questions that we should be asking:
- What exactly does a particular data point reveal and how should it be weighed against all the other actions a person takes in the course of their day?
- To what extent does two or more people clicking the thumbs up icon under the same story indicate that these individuals share the same or similar personality traits?
- To the degree people could arguably have been conditioned to “like” (or dislike) something in either the more traditional sense or in a social media context, to what extent have the same environmental and social influences conditioned them to do so?
As with the rest of an individual’s life, the list of variables that influence a person’s choices online gets long quickly. Finding out what they are will necessarily involve more than just searching the data for patterns. It will involve follow-up interviews or other forms of direct outreach with a significant number of the people providing the data in the first place. The “like” icon on Facebook doesn’t allow a person to indicate how much, on a scale of 1 to 10, they liked the post in question. Nor does Facebook provide a dropdown menu people can use to select what motivated them to like it in the first place. Maybe they had a stronger connection to the person sharing it than to the content itself. Who knows? Certainly not any of the firms out there pitching themselves as the one with the magic algorithm that reveals the answers to these questions.
But neither scientific integrity in particular nor ethical standards in general were high on Cambridge Analytica’s priority list when they gained access to the Facebook habits of 50 million users and began searching the data for patterns. As is usually the case when it comes to the use of big data, the focus is almost entirely on correlation, with little to no effort put into the follow-up research necessary to determine what, if anything, the correlations found in the data actually mean.
Both the crime rate and ice cream consumption go up in the summer, but it doesn’t follow that criminals like ice cream or that ice cream consumption causes crime. In addition, piracy has dropped as global temperatures have risen. Should we conclude that climate change is therefore linked to a decline in piracy? These are silly examples, but no more silly than many of the ones actually being offered as proof of concept by some data analytics firms. Cambridge Analytica’s website actually briefly references a correlation they found between car ownership and voting history, boasting that this is the kind of information a candidate can expect to find in their massive database. That there’s no reason to believe that knowledge of what a person drives will tell us anything meaningful about their concerns as a citizen seems not to have even occurred to Cambridge Analytica, or apparently to their clients.
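The ice cream example can be made concrete with a small simulation. In the sketch below (all numbers are invented purely for illustration), two variables are each driven by a shared third factor, temperature, and neither influences the other. Yet a naive analyst computing their correlation would find a strong relationship, which is exactly the trap the targeting firms described above fall into when they stop at pattern-matching:

```python
import random
import statistics

# Synthetic illustration of a spurious correlation: ice cream sales and
# crime incidents both rise with temperature, so they correlate strongly
# with each other even though neither causes the other.
random.seed(42)

temps = [random.uniform(0, 35) for _ in range(200)]             # daily temperature (C)
ice_cream = [50 + 3.0 * t + random.gauss(0, 10) for t in temps]  # sales driven by temp
crime = [20 + 1.5 * t + random.gauss(0, 8) for t in temps]       # incidents driven by temp

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

r = pearson(ice_cream, crime)
print(f"correlation(ice cream, crime) = {r:.2f}")  # strong, despite no causal link
```

Only the temperature data, which the correlation alone never reveals, explains what is actually going on. That missing step is the “follow-up research” the essay argues is routinely skipped.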
Regardless, wouldn’t we much rather have candidates looking at files that describe how we actually feel about education, healthcare, and the environment instead of analyzing our car ownership records and driving habits for clues about how we’re inclined to vote in the next election? Unfortunately for us, neither Cambridge Analytica nor other targeting firms care much about the science behind what they do. They seem to care even less, if that’s possible, about civics. Like John Watson before them, they genuinely believe human beings truly are programmable machines that can be made to behave in particular ways if only they can identify the right correlating buttons to push. To them we’re not citizens, spouses, parents, siblings, or friends. We’re all just their Little Alberts.
Tim Wu points out in The Attention Merchants that targeting isn’t exactly a new phenomenon. That we can make certain assumptions about people according to where they live, the magazines they subscribe to, whether or not they attend church weekly, etc., has long been broadly asserted.
Of course these assertions are not completely without foundation at the population level. However, it’s never safe to assume that just because a person lives in a particular place or belongs to a particular group they share the same attitudes or beliefs which, on average, can be identified with the group as a whole. Every community has its outliers. In many respects these outliers are far more interesting and informative than the bulk residing closer to the peak of the bell curve. That said, we have a name for the habit of making assumptions about people based upon real or perceived characteristics that have become associated with their group. It’s called stereotyping. That companies in the stereotyping business like to refer to it as “targeting” instead doesn’t make it any less pernicious or fallacious.
Wu tells us that a business known as “Claritas” was “probably the first modern targeting company.” Claritas was built around a concept known as “audience fragmentation,” a reference to a cable television term used in that newly emerging industry to describe increasingly identifiable segments within the cable TV market. Cable television was just becoming popular as Claritas opened its doors in the late 70s. “Of course,” Wu points out, “it was never entirely clear whether ‘fragment’ was being used as a verb or a noun: Were the [cable] networks reacting to fragmented audiences, or were they in fact fragmenting them?” Wu concludes that “In retrospect, they were doing both.”
The problem was then, as it is now, that targeting people in specific areas in particular ways risks creating or deepening the very geographical and ideological divides the targeting company’s model assumes already exist. Cause and effect become difficult to distinguish when the act of targeting produces the world that targeting claims is already there.
Behaviorism may have demonstrated that, up to a point, we can condition people to believe and do all kinds of crazy things. However, as John Watson’s cruel experiments on Little Albert show, it never seriously stopped to consider whether or not we should or to what ends we should limit its application. It is precisely because advertising, social media, and targeting have the power to create and reinforce (i.e., condition) the environment their algorithms claim to uncover that ethics as well as science must be central to any assessment of the methods and technologies these industries utilize. Data doesn’t just mold and often skew our own perspective. To the extent it is actively used by others without our consent to determine the information, products, services and choices that will be offered to us it will reshape the world to fit agendas, both conscious and unconscious, that we would likely be better off without.
Cambridge Analytica is just the latest consequence of the belief that people are blank slates: easy marks for additional conditioning experiments using the modern equivalent of bells and metal rods to make us crave or fear particular products or groups. Madison Avenue and political campaigns have been showing and sending us targeted material rationalized implicitly by this premise for decades. The rise of social media and the modern computing power it utilizes have, however, added new urgency to the need to critically reflect upon the flawed psychological theories and amoral philosophies behind the practice.
Madison Avenue and professional political operatives are never likely to seriously consider the ethical consequences that follow from their cynical and simplistic view of the human condition, never mind confess to it. That’s why we must. Whether or not you decide to delete your Facebook account in response to the latest scandal, that large numbers of us are actually taking that choice seriously for the first time signals a renewed willingness to proactively shape our own world instead of having it shaped for us by others. Perhaps Silicon Valley at least will realize that the species they’ve been evaluating through their algorithms is an X factor that still retains the capacity to surprise them.
Other stories by Craig Axford that you might like: