With great data comes great responsibility: why products like Faception can be dangerous

Kiite · 4 min read · Mar 29, 2018

With great data comes great responsibility

In the mid-1800s, if you wanted to identify whether someone was a criminal, you might look at the features of their face.

A large jaw, low-sloping forehead, scanty beard and fleshy lips were some distinguishing features, according to criminologist Cesare Lombroso. He studied physiognomy, which aimed to connect personality and behaviour to physical appearance.

The formal practice fell out of favour. But physiognomy is making a comeback with the popularization of AI-driven facial-analysis technology like Faception, which claims it can spot pedophiles, murderers and terrorists just by analyzing their faces.

Claims like this are controversial. They’re captivating. And they can be extremely dangerous.

Most of us trust AI-based predictions because a computer applies the same algorithm across an entire data set. The process is assumed to be scientific, objective and unbiased. But what if biases are unknowingly built into the algorithm, the data set is flawed, or the system latches onto an unexpected variable?

There’s no denying we have a lot to learn about AI technologies as we develop new systems and improve on old ones, and that’s precisely why we need to be careful when AI influences our decisions.

Our stories, as told by our data

The totality of our digital fingerprint — from browsing histories to public profiles — tells stories about us. But we don’t know what these stories are, exactly, and often they’re compiled and used for purposes we aren’t aware of, without consent, and without the ability to verify or control our own information.

Consider the things we “like” on Facebook, which are just a small slice of the information we share. What patterns might an AI system recognize in our interests under the guise of science? One study of Facebook likes found that “The best predictors of high intelligence include ‘Thunderstorms,’ ‘The Colbert Report,’ ‘Science,’ and ‘Curly Fries,’ whereas low intelligence was indicated by ‘Sephora,’ ‘I Love Being A Mom,’ ‘Harley Davidson,’ and ‘Lady Antebellum.’”

It turns out that the things we like can be used to predict gender, religious beliefs, sexual orientation, political leanings, ethnic background, personality traits and more, with reasonable accuracy, even when those pieces of information are never explicitly disclosed, and even when we can’t fully explain why the correlations exist.
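To make the mechanics concrete, here is a minimal, purely hypothetical sketch of the general technique: a simple classifier is trained on a binary “user liked this page” matrix to predict a trait it was never told directly. The data is randomly generated, and scikit-learn’s LogisticRegression is used as a stand-in; this is not the study’s actual data set or model.

```python
# Hypothetical sketch: predicting an undisclosed binary trait from "like" signals.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Toy like-matrix: 1,000 users x 50 pages, where 1 means the user liked the page.
X = rng.integers(0, 2, size=(1000, 50))

# Invented trait that merely correlates with a handful of pages; the model only
# ever sees the likes, never the trait itself.
signal = X[:, :5].sum(axis=1)
y = (signal + rng.normal(0, 1, size=1000) > 2.5).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# "Reasonable accuracy" emerges from indirect signals alone.
print("AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
```

The point of the toy example is not the score itself but the pattern: correlations in innocuous data are enough for a model to infer something you never chose to share.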

Target is a perfect example of data-based storytelling. In 2012, it famously mailed a package of coupons for baby clothes and cribs to a Minneapolis high schooler. Her father found them and was outraged: “Are you trying to encourage her to get pregnant?” he asked a store manager.

The retailer recognized a pattern in her behaviour that fit the profile of a pregnant woman and reacted by mailing her revealing offers that anyone in her household could see. That’s how her father found out she was due that August.

So, was that an invasion of her privacy?

The importance of asking “should”

“People may choose not to reveal certain pieces of information about their lives, such as their sexual orientation or age, and yet this information might be predicted in a statistical sense from other aspects of their lives that they do reveal,” says the Facebook study, which could have implications like “revealing (or incorrectly suggesting) a pregnancy of an unmarried woman to her family in a culture where this is unacceptable.”

This concern came up again with the Stanford “gaydar” study, which used AI to identify patterns in facial features that distinguish homosexual from heterosexual people. The study used tools that are on the market today and data that can easily be scraped from public sites.

The methods, results and the message behind them are hotly contested among other researchers, AI professionals and members of the LGBTQ community.

Part of the blowback stems from the fear that such systems could be used to persecute entire groups of society, false positives included. “There’s a definite danger in giving those who would discriminate tools with which to do so that they can claim are backed by science,” warns an article from The Next Web.

Similar concerns are surfacing in crime prediction and criminal risk assessments, where a score can affect an individual’s future and freedom.

So when we hear about technologies like Faception, we have to take a cold, hard look at the claims they make and the ethical implications they pose.

Enter responsible AI

Organizations like the AI Now Institute and the Montreal Declaration for the Responsible Development of AI are working on guidelines and proposals for how AI systems should be created and used. They raise questions like:

  • Is the algorithm transparent, and do we understand it?
  • Is the data set we use to train our AI fair, representative and inclusive?
  • Does the data include any explicit or inferred biases?
  • Does it consider the well-being of humans (and non-human beings)?
  • Can people see and control their personal data?
  • Do the creators take responsibility for risks arising from their systems?

And heavy-hitters in the industry are at work, too. “Alphabet, Amazon, Facebook, IBM and Microsoft are working together to create a standard of ethics for advancements in the AI industry,” reports consulting firm Accenture.

Ask these questions when you hear about a new service that promises superhuman feats or claims to have science behind it without a single independent study in sight. If we blindly trust what we hear, without knowing how a system works or verifying its accuracy and effectiveness, we’re just handing would-be physiognomists the keys to our privacy and freedoms.
