Algorithmic Bias in Choice Architecture

Why do all these recommendations stink?

Yesterday a popular e-commerce site suggested I buy Toni Erdmann-style fake teeth, a blow-up guitar, and a fidget spinner.

Spot on, algorithm, spot on. Have you been a victim of offensive recommendations, too? Here’s why!

Every day we amass a series of transactions through User-Generated Content (UGC). UGC is the underpinning of Web 2.0 and social media, a phenomenon that empowers everyday users to be creators.

With increasing access to networked technology, more and more of our uncles and omas are gracing our digital social circles. As a result, many transactions fall into the ether, and inevitably into a yyyyuge database. Thanks to recent technologies such as cloud computing, Hadoop, and graphics cards, this data can be repurposed to train state-of-the-art machine learning algorithms, which power our everyday interactions with social media and e-commerce.

Every need eventually finds enough VC funding to become an app, platform, or website. And through these interfaces we encounter recommendations — from what news to read, to what meme to lol at.

These recommendations are probabilities spat out from an aggregate of users whose histories resemble your own cache of transactions, tailored to whatever quandary you’ve found yourself in.
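
To make that concrete, here is a minimal sketch (in Python, with a made-up interaction matrix and invented users) of how “users who resemble you” might be scored with a simple cosine-similarity nearest-neighbor approach. Real platforms use far more elaborate models; this is only an illustration of the idea.

```python
import numpy as np

# Toy user-item interaction matrix (rows: users, columns: items).
# A 1 means the user clicked, starred, or bought that item. All made up.
interactions = np.array([
    [1, 0, 1, 1, 0],   # user 0
    [1, 0, 1, 0, 0],   # user 1
    [0, 1, 0, 0, 1],   # user 2
    [1, 0, 0, 1, 0],   # user 3, the "you" in this example
])

def cosine_similarity(a, b):
    """How closely two users' transaction histories resemble each other."""
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9)

me = interactions[3]
others = interactions[:3]

# Score every other user by how much their history resembles mine.
similarities = [cosine_similarity(me, other) for other in others]

# Recommend items my closest neighbor engaged with that I have not.
neighbor = others[int(np.argmax(similarities))]
recommended_items = np.where((neighbor == 1) & (me == 0))[0]
print(recommended_items)  # [2]: the item to surface to user 3
```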

These probabilities result from the behind-the-scenes number-crunching of countless racks of computers running machine learning algorithms. Machine learning is a branch of computer science used to identify and leverage patterns from the past to make better predictions about the future. As consumers, we typically encounter supervised learning algorithms, which optimize the rules needed to accurately predict answers to a known question. This is excellent for many industry use cases, and as a result data scientists invest lots of time re-shaping data and re-framing questions to fit existing machine learning frameworks.
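
As a rough illustration of that supervised-learning pattern, here is a minimal sketch using scikit-learn and entirely made-up click data. The features and labels are assumptions chosen for the example, not anything a real platform exposes.

```python
# Supervised learning in miniature: learn rules from labeled past behavior
# to predict the answer to a known question ("will this user click?").
from sklearn.linear_model import LogisticRegression

# Made-up historical features: [items_viewed, minutes_on_site]
X_past = [[3, 1.0], [12, 7.5], [1, 0.2], [8, 4.0], [15, 9.0], [2, 0.5]]
y_past = [0, 1, 0, 1, 1, 0]  # whether a past recommendation was clicked

model = LogisticRegression()
model.fit(X_past, y_past)          # identify patterns from the past

X_new = [[10, 6.0]]                # a fresh visitor's behavior
print(model.predict_proba(X_new))  # predicted probability of no-click vs. click
```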

Production machine learning systems create a feedback loop between user decisions and probabilistic recommendations. This relationship is easy to take at face value when doing quantitative work. However, I want to explore it qualitatively, through the lens of choice architecture. Choice architecture is the design of the different ways in which choices can be presented to consumers, and the impact of that presentation on decision-making. The options, defaults, and language are left to the discretion of the architect, and these often-overlooked details can nudge consumers toward a choice. In our context, choice architecture is synonymous with the user interface (UI).

Every time we star a song, hail a ride, or share a gif, our clicks confirm predictions and harden the choice architecture for others. When this occurs systematically, the UGC used to feed these algorithms is tainted by the algorithms’ own biases. If left unchecked, algorithm-served options become off-color, less reliable, and downright wrong. This distorts the algorithm’s perception (and in some cases, perceptron) of users and limits user decision-making through compromised recommendations. Our choices gradually gravitate toward an artificial ideal, leading to user normalization. The term borrows from the statistical definition, and the data-wrangling technique, of subtracting a column’s mean from each of its values, a fundamental preprocessing step for most machine learning inputs.
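
For the curious, here is what that mean-centering step looks like in practice: a minimal NumPy sketch on a made-up feature matrix (the features and numbers are invented purely for illustration).

```python
import numpy as np

# Made-up feature matrix: rows are users, columns are behavioral features.
X = np.array([
    [20.0, 3.0],
    [35.0, 1.0],
    [50.0, 8.0],
])

# Mean-centering: subtract each column's mean from every value in it, so
# every user is described as a deviation from the "average" user.
X_centered = X - X.mean(axis=0)

print(X.mean(axis=0))  # the statistical "ideal" user everything is measured against
print(X_centered)      # each user expressed relative to that mean
```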

Unlike previous studies, which reveal human-coded bias arising from training data and model architecture, user normalization is caused by user interfaces and choice architecture.

Much like in the 2013 controversy over NYC soda portions, one would never order a 32-ounce soda unless it were presented as a well-framed opportunity (only 50 cents more!). The same can be said for algorithm-served options.

Are we knowingly manipulated, or is this an oversight of production machine learning systems?

Regardless, machine learning is ubiquitous, and its recommendations are viewed as a source of truth. It is unjust to nurture dependence on a system that is in perpetual beta. Conversely, imposing restrictions on emerging technologies would push industry away. Machine learning can fundamentally change society in a positive way. But as consumers and creators alike, we must stop, look beyond the zeal, and plan responsibly for a future where we shape algorithms, rather than have algorithms shape us.

Whether you are a technologist or mixologist, there are steps we can take.

What can we do as engineers?

Lucky for us, choice architecture isn’t determined entirely by algorithms, but also by UI designers and engineers. As such, we can encode and optimize for transparency within recommendation systems. Transparency is a broad term, so let’s start with visibility and control.

Visibility

Transparency begins with seeing the algorithm’s prediction score (typically between 0 and 1) next to each recommendation. This low-hanging fruit allows users to get a handle on their standing within the recommendation system.

A problem I see in current recommendation systems is the fear of bad predictions. Options with prediction scores under some threshold are seldom shown; after all, why should they be? As a result, many discovery and exploration features regurgitate permutations of content that you’ve already engaged with. This cycle muscles out novel concepts you should be discovering.
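
Here is a toy sketch of the difference: a hypothetical recommendation payload (the items, field names, and threshold are all invented) showing the usual hide-below-threshold behavior next to a score-everything-visibly alternative.

```python
# A hypothetical recommendation payload; the field names are assumptions,
# not any particular platform's API.
recommendations = [
    {"item": "fidget spinner",   "score": 0.91},
    {"item": "blow-up guitar",   "score": 0.42},
    {"item": "Toni Erdmann DVD", "score": 0.08},
]

THRESHOLD = 0.5  # the usual practice: hide anything "uncertain"

# What most feeds do today: drop low-confidence options entirely.
shown_today = [r for r in recommendations if r["score"] >= THRESHOLD]

# The visibility proposal: show everything, with its score attached,
# so users can see where they stand within the system.
for r in recommendations:
    print(f'{r["item"]:<18} predicted relevance: {r["score"]:.2f}')
```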

Control

It would be great not only to see prediction scores, but also to be able to filter and sort by them. I am especially interested in seeing the choices we are least likely to make. As an example, the Twitter extension Flipfeed aims to expose users to the feeds of others. In a similar vein, the option to view content through the lens of social aggregates or elites would be an interesting way to explore commerce and social media platforms.
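
A rough sketch of what that control could look like, reusing the hypothetical payload from the previous example; the re-ranking modes are my own invention, not any existing platform’s feature.

```python
# Hypothetical payload, as before.
recommendations = [
    {"item": "fidget spinner",   "score": 0.91},
    {"item": "blow-up guitar",   "score": 0.42},
    {"item": "Toni Erdmann DVD", "score": 0.08},
]

def rerank(recs, mode="most_likely"):
    """Let users hack the choice architecture by flipping or filtering the default order."""
    if mode == "least_likely":
        # Surface the options the algorithm thinks we would never pick.
        return sorted(recs, key=lambda r: r["score"])
    if mode == "coin_flip":
        # Only the genuinely uncertain middle ground.
        return [r for r in recs if 0.4 <= r["score"] <= 0.6]
    return sorted(recs, key=lambda r: r["score"], reverse=True)

for r in rerank(recommendations, mode="least_likely"):
    print(r["item"], r["score"])
```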

As engineers, we can create more versatile systems that allow users to hack choice architecture. This has the added benefit of increasing the diversity of interactions used to re-train a production machine learning model. Ultimately, I think this will improve recommendation systems.

What can we do as consumers?

We should be able to override the options given to us. That doesn’t mean outright deleting an app from your phone. It means having an option that says “these options stink.” It means allowing us to fill in the blank. It means pushing an algorithm to adapt to our curiosity.

I want machine learning to be used to challenge us. I want us to challenge machine learning. This allows both parties to re-calibrate, and exposes our rigidity and biases. As an example, Go players are changing the way they play the game by learning from AlphaGo. This gets us closer to a human-machine version of a Generative Adversarial Network (GAN). GANs are an exciting new AI technique in which two neural networks try to trick one another: one fabricates examples while the other classifies them as real or fake. Learning how we trick machines, and how machines trick us, can be a good thing; it is something that should be done with rigor and whimsy.
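
For reference, the standard GAN objective is a minimax game between a generator G (the fabricator) and a discriminator D (the classifier); this is the textbook formulation, included only to ground the analogy:

$$\min_G \max_D \; \mathbb{E}_{x \sim p_{\text{data}}}[\log D(x)] + \mathbb{E}_{z \sim p_z}[\log(1 - D(G(z)))]$$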

Take a few minutes to look over user settings, and see what is missing. Typically there are avenues to make feature requests. Tech companies have a lot on their plates, but they are quick to change if they see the need to!

Thank you for reading this, and thank you for whatever recommendation system led you here. I am excited to see how consumer-facing ML matures, and hope some of the points here end up on a roadmap… somewhere.