AI is risky business. What are the rules of engagement in an augmented world?

Stacey Billups · Published in Rat's Nest · Mar 15, 2019

This is the central question I am thinking about as I venture into machine learning projects. In the past, we have seen political and social unrest follow from technology applications brought to market without an intentional approach. One example is WhatsApp, where the spread of child-kidnapper rumors overwhelmed villagers and led to violence in the streets.

The desire to get a minimum viable product (MVP) out the door and “move fast and break things” has led to applications with unintended consequences. The risk is even greater with machine learning than with previous technologies.

With intelligent applications, the risk of unintended consequences grows as machine learning models become more accurate, because understanding those models becomes more difficult and falls to a handful of specialists, typically data scientists and engineers. Additionally, as models are trained, risk increases as we move from descriptive analysis to diagnostic, then predictive and prescriptive analysis.

The path to responsible innovation requires us to analyze more intently and reflect on bias, principles and values. We need to look at questions of governance and what control, ownership and access mean in this context.

We will go more deeply into the topic of designing AI with an intentional and ethical approach at Normative on Wednesday, March 27th. I hope to see you there!
