Privacy vs personalization: The risks and rewards of engineered serendipity
This article was originally published on GigaOm
Facebook’s 2012 News Feed experiment has recently attracted the ire of users and even European regulators, who are investigating a possible breach of users’ privacy. But while Facebook’s tinkering with its users’ emotions was a one-off sociological experiment, many online companies are dedicated to using personalization to enable serendipity, or “accidental discovery” of content, by web users.
The internet has provided the setting for the grandest and perhaps most controversial of experiments in “engineered” serendipity. Search engines, e-commerce and online news publications are all using personalization to enhance user experience by providing the most relevant content. But is it actually possible to embed serendipity into user experience online?
“The notion of ‘designing for serendipity’ is an oxymoron because once we try to ‘engineer’ it into a system, users may no longer perceive the experience as serendipitous,” says Dr. Stephann Makri, a lecturer in Information Interaction at City University in London. “Designers of interactive systems shouldn’t try to offer serendipity on a plate. Instead, they should design tools that create opportunities for users to have experiences they might perceive as serendipitous.”
Nonetheless, this reworked notion of serendipity is here to stay on the web. With the rise of machine learning, a growing number of online publishers are using complex algorithms to learn from readers’ viewing habits and provide people with what they want to know before they know they want it. The evidence so far is positive. Content personalization startup Gravity, recently acquired by AOL, claims its software increases engagement with publishers’ content by 240 percent compared to non-personalized sites. In essence, we get more of the information we want to see.
The filter bubble
More engagement, however, does not always correlate with a more balanced perspective. The risk of over-personalization is a lack of transparency. By taking curatorial responsibility away from editors and giving it to algorithms, the journey from content creation to its end destination on screen becomes more opaque. There is no way of knowing exactly why we are being fed the information we receive.
There is also the further danger of what internet activist Eli Pariser calls the “filter bubble.” By algorithmically cherry-picking content aligned only with our interests or those of our social group, we effectively become enveloped in our own cozy ideological bubble. Pariser cautions that this may harm both the individual and society by closing “us off to new ideas, subjects and important information.” What is intended as a tool to spark serendipity can therefore easily turn into a machine churning out an ever-narrowing stream of content behind the scenes.
Pariser believes that “our online urban planners [need] to strike a balance between relevance and serendipity, between the comfort of seeing friends and the exhilaration of meeting strangers, between cozy niches and wide open spaces.”
Whereas Pariser’s solution is a web untarnished by personalization, there is also a case for technology injecting more serendipity into the user experience to burst the filter bubble. Two key measures would be needed to ensure this: Firstly, the trade-off between relevance and serendipity should be readjusted by embedding more of a surprise element into recommender systems. Secondly, to ensure transparency, people should be given more autonomy in deciding what level of serendipity and personalization to receive in their content.
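The two measures above could be combined in a single mechanism. As a minimal illustrative sketch in Python (the function name and parameters are hypothetical, not any vendor’s actual API), a recommender could reserve a user-chosen fraction of recommendation slots for items drawn from outside the relevance ranking:

```python
import random

def recommend(ranked_items, catalog, n=10, serendipity=0.2, seed=None):
    """Blend top-ranked items with out-of-profile picks.

    ranked_items: items sorted by predicted relevance, best first.
    catalog: the full item pool, including items outside the user's profile.
    serendipity: user-chosen fraction of slots (0.0-1.0) given to surprises,
        addressing both the surprise element and user autonomy.
    """
    rng = random.Random(seed)
    n_surprise = round(n * serendipity)
    n_relevant = n - n_surprise

    picks = list(ranked_items[:n_relevant])
    # Surprise candidates: anything not already slated for recommendation.
    pool = [item for item in catalog if item not in picks]
    picks += rng.sample(pool, min(n_surprise, len(pool)))
    rng.shuffle(picks)  # avoid signalling which slots are the "surprises"
    return picks
```

Setting `serendipity=0.0` reproduces a purely relevance-ranked feed, while higher values progressively widen the bubble; exposing that dial to the reader is what makes the trade-off transparent rather than hidden behind the algorithm.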
Personalization has also become a goldmine for advertisers. Advertisers can reap a greater return on advertising spend by capitalizing on the increased time users spend engaging with online content. The vast body of data on individual browsing activity that search engines sell allows advertisers to uniquely target advertising — to an arguably unsettling level.
Last year students and other Gmail users sued Google after it came to light that the company was scanning its users’ Apps for Education accounts. After the furor over this invasion of privacy, a Google spokesperson blogged in April: “We’ve permanently removed all ads scanning in Gmail for Apps for Education, which means Google cannot collect or use student data in Apps for Education services for advertising purposes.”
It is clear that in the future, advertisers must tread the line between creepiness and serendipity more carefully to stay in the good books of both regulators and users at large.
Dr. Makri, who has spent the last three years studying serendipity in digital environments, believes that advertisers can make more use of serendipity when targeting their ads, “not by offering products and services that consumers already realize they want, but by making consumers aware of things that they were previously unaware existed — and convincing them that they need to buy these things.”
There is no doubt that content personalization is fraught with contentious issues surrounding privacy, transparency and opinion bias, yet the constantly improving technology behind it also holds promise for the future. “We are only at the beginning of finding out how to deliver the information that people need without them realizing they need it,” says Dr. Makri. “And if we have technology that surprises and delights us, that can only be a good thing.”
Serendipitous discovery online allows us to experience stories, opinions and ideas beyond some of the confines that the physical world imposes. The challenge for the future is to create a web that maintains an ethos of discovery without prying too invasively and opaquely into the lives of its users.