Truth, lies, and an ethics of personalization

Johan Ugander
2 min read · Jan 23, 2017


I consider myself very knowledgeable about how online ad targeting and personalization work; it’s core to my research as a professor at Stanford. Nothing about the technology involved in political ad campaign targeting surprises me. What has surprised me recently is how it’s being used, and the lack of scruples on the part of those using it in these ways.

Specifically, consider Cambridge Analytica, identified by the NY Times as the hired guns behind Trump’s online targeting. If you watch the following 11-minute video of Alexander Nix, their CEO (specifically from the 1:30 mark):

you see Nix advocate for “behavioral communication” in online targeting. His example is alarming: if you own a private beach, he notes, you’d have more success keeping people off your beach by putting up a “Warning: sharks beyond this point” sign than a “private property” sign. The problem is that he recommends this strategy, and personalized versions of it, without any consideration of whether there actually are any sharks, advocating “behavioral communication” that is completely detached from any truth about reality. In fewer words: crafting lies, and then targeting them.

And therein lies a major ethical problem: it’s one thing to personalize content, “telling the story that’s most persuasive for a given individual,” which itself raises important ethical questions (cf. ideas around libertarian paternalism). But it’s another matter entirely to “tell the lie that’s most persuasive.” It’s not clear that Cambridge Analytica has actually crossed into that territory; my complaint above was first and foremost a reaction to Nix’s unscrupulous metaphor. But the mash-up of personalization and “alternative facts” is a dark side of the force that I hadn’t considered until recently, and I really hope that we academics can train thoughtful data scientists who reject such applications of their skills.

I hope that the students I train in my classes at Stanford go on to use the tools I give them ethically, and I’m worried about how the data scientists and analysts running such campaigns are being trained. I spent some time this morning surveying the LinkedIn profiles of employees at both Cambridge Analytica and SCL Group (their British parent company). From my survey, essentially none of the employees are within the first two degrees of my LinkedIn network, and none of them appear to be trained at schools where I have close colleagues teaching. And I noticed that several of their data scientists are ex-physicists who have taken a slew of Coursera or edX classes on data science.

I’m a strong supporter of open education, but this observation makes me wonder how good those classes are at teaching data ethics. As a colleague noted when I pointed this out this morning, as courses turn into micro-courses or modules that students choose, it’s important to think about what goes missing from the curriculum.

So: if you design or teach classes on those platforms, please think twice about who your students are, and what they’re doing with the tools you’re giving them. And if you are a data scientist building tools for unscrupulous targeting, I strongly urge you to consider the implications of your craft.


Johan Ugander

Assistant Professor, Management Science & Engineering, Stanford University. http://stanford.edu/~jugander/