Artificially Intelligent Discrimination

Henry Innis
Apr 22, 2019

Some thoughts on why algorithms are inherently conservative, and how in a changing world they may reinforce discriminatory practices.

Algorithms are everywhere.

Increasingly, algorithms are deciding the content, the offers and the interactions we have across the web.

They are pervasive, powerful and incisive. Often, they’re badly understood. Businesses turn algorithms on without knowing what they do. How do they operate? What are their goals? How do they structure those goals?

And regulators? It’s still an open question whether they even know what an algorithm is.

Let’s look at the modern Internet experience. Everyone is online. Increasingly, everything is ‘personalised’. Based on your data. Your interactions. Your problems.

Here’s the dirty secret though: a lot of the Internet is based on your ‘value’ to someone else. That’s because most of the Internet is free. Much of what we see is the product of companies gamifying and capturing our attention, then selling it to others who find it valuable.

On the Internet, you’re often the product.

This means everyone is constantly living in a different bubble of the Internet. Your Facebook feed is different to your mother’s. Your Amazon experience is different to your brother’s. The Internet isn’t just personalising – it’s fragmenting through an algorithmic view of what it thinks we want.

Businesses increasingly rely on algorithms to decide their interactions, especially when it comes to pricing and offers. In fact, most businesses have installed some kind of marketing automation system to determine pricing and offers based on business logic.

Increasingly, these algorithms are becoming less obvious. Machines are creating logic that people can’t even see. And that logic is being applied more widely than we ever thought possible.

This will only become more common.

Businesses benefit from higher-value customers, acquired more cheaply and with better brand sentiment. There is no reason for them not to invest. And under the stock market’s demand for constant growth, they will continue to invest to gain an advantage.

Often, that investment comes at the cost of ethics. Companies have already covered up data breaches in the name of the greater good. They’ve bought and sold data legally, but in ways most people would be uncomfortable with.

There’s an element of wilful ignorance to the whole thing. And avoidance of responsibility.

The more we work with algorithms, the more we understand their risks. Here is an example: training data.

Training data looks at what has been. It absorbs anything and everything. In recruitment, that means bringing in data about previously successful candidates. In the technology world, where past hires skew male, it means A.I is taught that male candidates are preferable to female ones.

This hasn’t just happened in isolated cases. One of Amazon’s machine learning recruitment tools did exactly this. That’s because machines are looking at the past. They are inherently conservative by nature. We have not yet worked out how to make an algorithm think beyond its training.

Training data is a huge part of modern machine learning. What it does well is tell the machine what has worked, what the problem is and potential focus areas. That, by nature, narrows down the focus. But it narrows the focus based on the past, not the future.
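This dynamic is easy to demonstrate. Here is a toy sketch with entirely invented data and keywords (nothing here comes from Amazon’s actual tool): a “model” that learns hiring scores purely from past decisions will replay whatever skew those decisions contained.

```python
from collections import defaultdict

def train(records):
    """Learn P(hired | resume keyword) from past decisions, nothing more."""
    counts = defaultdict(lambda: [0, 0])  # keyword -> [hired, total]
    for keyword, hired in records:
        counts[keyword][0] += int(hired)
        counts[keyword][1] += 1
    return {k: hired / total for k, (hired, total) in counts.items()}

# Invented history in which past recruiters favoured one keyword.
history = [
    ("mens_chess_captain", True), ("mens_chess_captain", True),
    ("mens_chess_captain", False),
    ("womens_chess_captain", False), ("womens_chess_captain", False),
    ("womens_chess_captain", True),
]

model = train(history)
# The model replays the historical skew as if it were merit.
print(model["mens_chess_captain"])    # ≈ 0.67
print(model["womens_chess_captain"])  # ≈ 0.33
```

Nothing in the code mentions gender, yet the scores encode the old bias. A model trained only on what has been can only recommend more of it.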

In a funny way, A.I may be better at guiding us to problems than creating solutions.

Now look to the future. Marketing is increasingly based on personas. Publicis spends $4b to buy a huge amount of first-party data. Given its stated goal of becoming a platform with Marcel, this should come as no surprise.

But what happens when these personas start working with the A.I?

Will ads start discounting more for rich people than poor people? Will someone who is rich, and therefore has more long-term value, get cheaper products to draw them in?

Does this lead to an ecosystem of price discrimination against the poor based on their long-term value?
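Whatever the answer, the mechanics are simple enough to sketch. Here is a hypothetical pricing rule based on predicted lifetime value; the threshold and discount are invented for illustration, not any real retailer’s logic.

```python
def personalised_price(base_price, predicted_ltv, ltv_threshold=1000.0):
    """Hypothetical rule: discount customers the model deems valuable
    in order to acquire them; charge everyone else full price."""
    if predicted_ltv >= ltv_threshold:
        return round(base_price * 0.8, 2)  # 20% acquisition discount
    return base_price

# Two customers, same product, different prices.
print(personalised_price(50.0, predicted_ltv=5000.0))  # 40.0
print(personalised_price(50.0, predicted_ltv=200.0))   # 50.0
```

The customer predicted to be poorer pays more for the identical product, and never sees the price anyone else was offered.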

No one knows.

Certainly Beijing seems to think discrimination based on behaviour is acceptable.

For all of our marketing automation and thinking, no one seems to wonder if the personas and algorithms will create a new ecosystem of class. One based on your perceived value, instead of the price you’re willing to pay.

That’s a dangerous world indeed.
