7 effortless ways to avoid an AI disaster

Josh Sephton
HackerNoon.com


According to a study from Stanford University, a person's sexual orientation can be predicted from a photograph of their face. The researchers analyzed 35,000 facial images and built an algorithm that could predict sexual orientation from a face alone.

This knowledge exists now, and it can't be withdrawn from the zeitgeist. Around the world, people are trying to figure out how to use this information for their own benefit. There are marketing professionals trying to figure out how to apply it to sell more of whatever they're selling. There are insurance companies trying to figure out whether they can use it as a data point when setting your premiums. There are healthcare companies trying to figure out whether they can use it to tailor care to individuals.

But there are also hate groups, trying to figure out how they can take away people's rights using this (perhaps undisclosed) information. The question is not whether we can do this; it's whether we should. Where do we draw the line?

It raises so many questions, but one stands out: Is this ethical?

In artificial intelligence, we stereotype data. We build a model that finds important characteristics in our training data, then looks for similar indicators when we run new data through it. The downside is that the model is open to bias: the algorithm has no way to tell a genuine signal from a historical prejudice, so whatever correlations exist in the training data get baked into every prediction.
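To make that concrete, here is a minimal sketch using entirely synthetic data and hypothetical feature names ("signal", "proxy"), with scikit-learn's LogisticRegression standing in for any model. The training sample contains a correlation between a proxy attribute and the label, the kind a biased dataset might encode, and the model dutifully learns it:

```python
# A minimal sketch of how a model "stereotypes": it learns whatever
# correlations exist in the training data, legitimate or not, and
# applies them to everyone it scores afterwards.
# All data here is synthetic; "signal" and "proxy" are hypothetical features.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

n = 1000
signal = rng.normal(size=n)  # a genuinely predictive feature
proxy = rng.normal(size=n)   # a proxy attribute that *shouldn't* matter

# In this synthetic sample the proxy correlates with the outcome --
# standing in for a historical bias recorded in the training data.
y = (signal + 0.8 * proxy + rng.normal(scale=0.5, size=n) > 0).astype(int)
X = np.column_stack([signal, proxy])

model = LogisticRegression().fit(X, y)
print("learned weights:", model.coef_)  # the proxy gets a large weight too

# Two new people identical on the real signal, differing only on the proxy:
same_signal = np.array([[0.0, -2.0], [0.0, 2.0]])
print("predicted probabilities:", model.predict_proba(same_signal)[:, 1])
```

The two predictions come out very different, purely because of the proxy feature. Nothing in the training step asked whether that correlation was fair; it was in the data, so now it's in the model.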
