Why AI won’t change the world

Eleanor Stribling
Mar 28, 2018

Artificial Intelligence won’t live up to its potential unless we ask ourselves four questions about who is building it

Photo by Alex Knight on Unsplash

I’ve always found stories where inanimate objects aspire to humanity especially compelling.

When I was a little girl, I loved The Velveteen Rabbit and Pinocchio, stories where human virtues like love, loyalty, honesty and bravery could make a toy rabbit or a wooden boy real. As I got older, toys were replaced by androids: Data in Star Trek: The Next Generation, Boomer in Battlestar Galactica and Dolores, Bernard and Maeve in Westworld.

Instead of mystical forces making them into humans, these characters try to define their place in the world, often emulating characteristics of their human creators — emotions, attachments and contradictions — in hopes of gaining a place at the table, or ejecting their human creators from their seats.

Two questions tie all of these stories together: what does being human really mean, and how do we use that understanding to connect with others and find our place in the world?

This interest has been a factor in my long career in the tech industry, building software that helps people make better decisions based on data. It’s what drives me to study artificial intelligence while my kids are in bed and tinker with algorithms to pull quantitative data out of literature. Not exactly building an android, but as close as I’ve been able to come so far.

As venture capital firms pour millions of dollars into AI-based startups, it’s becoming clear to me that the problem is less about the limitations of AI, and more about how we stop it from repeating our limitations. We’re not on our way to making ruthlessly logical programs and machines that are nothing like us; we are making ruthlessly logical programs and machines that, exactly like us, base their logic on the same very human biases rooted in emotions, attachments and contradictions.

What would the tech industry look like today if we made a conscious decision and a consistent effort to program better people?

Why everyone is talking about AI now

AI is not a new idea; it has been a science fiction staple for nearly a century. The foundational concepts behind some of our most advanced technology originated in the 1940s. Research has been happening for decades at companies and in academia. IBM’s Deep Blue computer beat Kasparov at chess in 1997, over twenty years ago.

Some of the renewed interest in AI comes from the development of creepy-looking androids from the uncanny valley, but the more intriguing developments are in software that can be applied to multiple industries. We are getting much closer to creating programs that can learn and make choices without the past constraints of a very detailed, human-written set of rules or vast amounts of data carefully labeled by human classifiers. AI powers virtual assistants, assesses the probability of a heart attack better than a human doctor, and helps Chinese farmers raise healthier pigs.

In other words, what’s exciting is that humans need to do a lot less work to get higher-quality predictions, and can use those predictions to make decisions or outsource the decision-making process altogether.

That’s also why we’re at a critical moment to define our relationship to the technology.

Humanity in the face of progress

For every android in science fiction, there’s a period of marveling at how humans have come so far in spite of all their inherent flaws, and of questioning the logic behind wanting to emulate such a messed-up species.

But even if we could create an android as sophisticated as Data or Dolores, it’s unlikely they’d ever have that particular crisis of reason. Thanks to those quirky human limitations, the inequities in our society — based on emotion and fear — are already being written and trained into AI applications.

One area where this has already happened is facial recognition. This technology has dozens of practical applications, and is already available to consumers as a way to unlock some mobile devices and laptops. However, a recent study led by Joy Buolamwini of the MIT Media Lab showed that commercial versions of these programs mirror social and political inequality: they identify light-skinned male faces correctly the most frequently, and dark-skinned female faces the least. The implications go way beyond having to enter a password to get access to a device. Law enforcement has tried to leverage this type of tech to track down suspects faster, notably after the Boston Marathon bombing, where it didn’t get police much closer to any suspect. But what if it had identified the wrong person with algorithmic certainty?
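Disparities like these only surface when accuracy is measured per demographic group rather than in aggregate, which is exactly what Buolamwini’s team did. Here’s a minimal sketch of that kind of disaggregated audit in Python; the evaluation records and group labels below are invented for illustration, not taken from the study:

```python
# A minimal sketch of a disaggregated accuracy audit. The evaluation
# records are hypothetical; a real audit would use a labeled benchmark
# like the one in the MIT Media Lab study.
from collections import defaultdict

# Each record: (model's predicted label, true label, demographic group)
results = [
    ("male", "male", "lighter-skinned male"),
    ("female", "female", "lighter-skinned female"),
    ("male", "male", "darker-skinned male"),
    ("male", "female", "darker-skinned female"),   # misclassified
    ("female", "female", "darker-skinned female"),
    ("male", "female", "darker-skinned female"),   # misclassified
]

correct = defaultdict(int)
total = defaultdict(int)
for predicted, actual, group in results:
    total[group] += 1
    correct[group] += int(predicted == actual)

# Aggregate accuracy can look fine while one group does far worse.
for group in sorted(total):
    print(f"{group}: {correct[group] / total[group]:.0%} accuracy "
          f"({total[group]} samples)")
```

The point of breaking the numbers out this way is that a single headline accuracy figure hides exactly the failure mode the study exposed.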

Law enforcement has looked for other ways to use AI, especially around predicting how dangerous offenders might be in the future and tracking down suspects. Last month, a team of computer scientists presented a paper based on their research on crime data. They had developed an algorithm that they claimed could predict whether a crime was gang-related based on just a few data points. When asked how the tool could be used and what would happen if the training data was biased, one of the authors replied, “I am just an engineer.”

What we can do today

Now that we are at a point where we can actually build some of the tech that was only science fiction a decade ago, this is the moment to consider how to build it right. There are four considerations we must take into account to ensure we build AI that doesn’t inherit the emotions, attachments and contradictions of a few people to the detriment of everyone else.

Counteract the bias of the builders. Few people in the world lack bias — in most of us, it is so ingrained that even the best-intentioned person may not recognize it. One of the most clear and present threats posed by AI is that biased decisions may be made faster and questioned less. If our bias reflects the prejudice of a village, then the best way to counteract it is to create teams of builders who come from very different villages, who arrive prepared and encouraged to challenge every assumption and to build AI that reflects those challenges.

Hire Across Disciplines. A recent trend on tech Twitter is to talk about how we should teach engineers more about the humanities and social sciences to build products and companies with a better EQ. I’m all for this, but it would also make sense to hire people who have spent years of their lives studying these things and teach them some programming. Having people on your team who can help put the tech you’re building into a cultural, historical and political context helps you avoid causing harm and risking the market viability of your product.

Build Diverse Teams. Demographics can be as important as disciplines. While it’s certainly possible for a light-skinned man to learn to look more critically at training data or at what is being optimized for, the probability that a person outside that demographic will apply a critical lens based on their own experience is higher. If we had more non-light-skinned, non-male people writing facial recognition algorithms, I’d bet they would pick out skewed training data or inconsistent accuracy in identification a lot faster.
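Even a crude check of who is represented in the training data can surface that kind of skew before a model ships. A hypothetical sketch, assuming each training example carries a demographic annotation (the group names and counts here are invented):

```python
# A crude training-data skew check. The demographic annotations and
# counts are hypothetical, for illustration only.
from collections import Counter

training_groups = (
    ["lighter-skinned male"] * 700
    + ["lighter-skinned female"] * 180
    + ["darker-skinned male"] * 90
    + ["darker-skinned female"] * 30
)

counts = Counter(training_groups)
n = len(training_groups)
for group, count in counts.most_common():
    print(f"{group}: {count} examples ({count / n:.0%} of the data)")

# Flag any group with less than half of an even share of the data.
even_share = 1 / len(counts)
flagged = [g for g, c in counts.items() if c / n < even_share / 2]
print("Underrepresented:", ", ".join(flagged) or "none")
```

A check this simple won’t catch every problem, but it makes the question “who is missing from this dataset?” part of the build process rather than an afterthought.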

Ask “What if We’re Wrong?” Then ask it again. By doing the first three things, we can create environments where questions are not just encouraged, but the right ones are asked. When trying to solve any big social issue — which AI has the potential to do — the builders should keep looking forward and course-correcting when the answers are not pointing to a better world. We might not be able to prevent every bad outcome, but evaluating each decision from a systems-thinking perspective can help us anticipate many of them.

This is an amazing time for AI, and I think a moment that future historians of science will point to as a critical point in our technical and cultural evolution. But that means we have to be intentional, diligent and inclusive about building technology that will change the world for the better and for everyone.


Eleanor Stribling

Product & people manager, writer. Group PM @ Google, frmr TubeMogul (now Adobe), Microsoft, & Zendesk. MIT MBA. Building productmavens.io.