Will automated recommendations doom us all?
No. They won’t. Come on. What a clickbait-y, alarmist headline that was!
But now that we have you here: Can algorithmic suggestions go too far?
With all our talk about tailoring event recommendations to individual users, it’s easy to forget that there are legitimate concerns about using algorithms to customise what people see.
If asked, we’ll all claim we want content that’s uniquely tailored to our exact needs. In theory.
In practice, however, there are trade-offs and risks involved. These include:
- Algorithmic discrimination
- Filter bubbles
- Privacy concerns
Let’s take a closer look at each of these concerns and discuss what companies, including Billetto, can do to address them.
1. Algorithmic discrimination

This happens when an algorithm deprives you of information or gives you irrelevant suggestions based on your belonging to a broader group of people.
Example: You don’t see suggestions for knitting classes because you’re a man, even though you “just adore” knitting. (Your secret is safe with us, Bort.) Or you get offered Zumba classes because you’re a woman, even though you hate these with a passion.
How big of an issue is it? Algorithmic discrimination is absolutely a thing. Women might see fewer ads for high-income jobs than men, while poor people might be more frequently exposed to high-interest loan offers.
Are algorithms racist, sexist, or classist? Not quite. In the case of machine learning, they simply consume the data that's out there, so they might pick up our existing societal biases in the process.
In other cases, the person programming or using the algorithms may pass their own biases along.
What we can do: Here's the good news: Once we're aware of algorithmic discrimination, we can program our way around it. At its most basic, we can tell our algorithm to ignore the broader demographic data altogether.
Alternatively, we can design the algorithm to only factor this data in after it knows more about the person’s individual preferences.
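To make that concrete, here's a minimal sketch of the second approach. All the names, thresholds, and weights are hypothetical, not Billetto's actual system: the point is only that demographic signals stay switched off until a user's own behaviour dominates the score.

```python
from dataclasses import dataclass, field

@dataclass
class User:
    interest_tags: set
    attended_events: list = field(default_factory=list)
    demographic_group: str = ""

@dataclass
class Event:
    tags: set
    popular_with: set = field(default_factory=set)  # demographic groups

# Assumed threshold: how much individual history we want before
# demographic data is consulted at all.
MIN_HISTORY = 5

def score_event(event: Event, user: User) -> float:
    # Primary signal: overlap between the event's tags and the
    # user's own declared or observed interests.
    score = float(len(event.tags & user.interest_tags))
    # Demographic prior enters only as a weak tie-breaker, and only
    # once the user's behavioural history can outweigh it.
    if (len(user.attended_events) >= MIN_HISTORY
            and user.demographic_group in event.popular_with):
        score += 0.1
    return score
```

With this ordering, a brand-new user who loves knitting gets knitting suggestions regardless of which demographic box they tick.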
To that end, we at Billetto focus on a person’s actual interests and behaviour rather than their demographic data. But this has its own set of risks, namely…
2. Filter bubbles

These are somewhat similar to algorithmic discrimination but are based on your actual behaviour rather than the larger demographics.
As you are presented with more and more of what you already like, you forgo the opportunity to discover something new. Before you know it, you're stuck in your own "filter bubble."
Example: You attend a knitting class. The next time you open your computer, you see an ad for another knitting class. “Why not,” you think. Five classes later, all you ever see anymore are ads for knitting classes, knitting needles, and mugs with “Bort” on them.
How big of an issue is it? This is hotly debated. A study published in early 2016 looked at the impact of filter bubbles. It concluded that “at present, there is no empirical evidence that warrants any strong worries about filter bubbles.”
Then Brexit and Trump happened. Now, the topic of filter bubbles and their role in creating politically radicalised echo chambers is back on the table.
In general, observers agree that while “filter bubbles” might have a modest effect, they are usually a symptom of larger societal issues rather than the cause of these issues.
What we can do: Having said that, understanding the risks of filter bubbles allows us to mitigate them. In the context of event recommendations, for instance, we can insist on a healthy mix of curated content and automated suggestions.
Hey, remember those dynamic widgets we talked about? That’s exactly what they help us achieve.
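One simple way to enforce such a mix is to interleave editor-curated picks into the automated feed at a fixed ratio. The function below is an illustrative sketch (the 2:1 ratio and names are assumptions, not how any particular widget works):

```python
def mixed_feed(automated: list, curated: list, auto_per_curated: int = 2) -> list:
    """Interleave curated picks into an automated feed: after every
    `auto_per_curated` algorithmic suggestions, slot in one curated item."""
    result = []
    curated_iter = iter(curated)
    for i, item in enumerate(automated, start=1):
        result.append(item)
        if i % auto_per_curated == 0:
            pick = next(curated_iter, None)  # runs out gracefully
            if pick is not None:
                result.append(pick)
    return result

# Two knitting suggestions, then a curated surprise, and so on:
feed = mixed_feed(["knit 101", "knit 102", "knit 103", "knit 104"],
                  ["salsa night", "poetry slam"])
```

The guaranteed curated slots mean the feed always contains something the user's past behaviour would never have surfaced on its own.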
3. Privacy concerns

How much personal data do you give up to get those customised recommendations? Where does this data end up and how is it used? How comfortable are we handing out our personal details in order to get a better, more personalised service?
These are persistent questions. We even have apps that aim to deliver personalised news content without asking for your private data.
How big of an issue is it? While people certainly aren’t thrilled about handing over their personal details, they tend to understand there’s a trade-off. The better the algorithm knows you, the better it can help you.
In a recent survey of 52,000+ people from 26 different countries, Reuters Institute found that even though people worry about privacy, they do want personalised recommendations.
Thanks to Amazon, Netflix, and Google ads, we’ve all become accustomed to our data being used to deliver personalised content. We just want the ability to have a say in what we share.
What we can do: Companies should at the very least communicate that they use people’s personal data. In fact, they’re often obligated to do so, as in the case of the EU Cookie Law.
Ideally, users should be able to opt in and out of sharing or — better still — modify and delete this data on demand. Amazon’s “browsing history” is a great example of this.
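In code terms, that ideal boils down to two rules: record nothing without opt-in, and let the user inspect or erase everything on demand. A bare-bones sketch (class and method names are hypothetical):

```python
class ConsentStore:
    """Hypothetical per-user data store: sharing is opt-in,
    and stored history can be inspected or erased on demand."""

    def __init__(self):
        self._history = []
        self.sharing_enabled = False  # opt-in: off by default

    def record(self, event_id: str) -> None:
        # Behaviour is only stored if the user has opted in.
        if self.sharing_enabled:
            self._history.append(event_id)

    def history(self) -> list:
        # The user can always see exactly what is stored about them.
        return list(self._history)

    def erase(self) -> None:
        # Delete on demand, in the spirit of Amazon's editable browsing history.
        self._history.clear()
```

The key design choice is that consent is checked at write time, so nothing ever needs to be retroactively scrubbed from downstream systems.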
So while automated recommendations present a number of challenges, most of them can be minimised or avoided altogether if we're aware of their existence.
Watch this space.
Every Thursday, we’ll be posting about the promise and challenges of personalised event recommendations, along with Billetto’s current efforts and future plans.
Have some thoughts on or experience with event recommendations? We’d love to hear them. You can leave a comment or send an email with your thoughts to firstname.lastname@example.org. We’ll read it. Promise.