Your AI Sucks
YES IT DOES AND IT IS NOT GOING TO SAVE US ALL
Artificial Intelligence (AI) interacts with us every day. When it’s good it has the potential to assist humans and make our lives easier and more convenient. AI, however, is still extremely limited in terms of accuracy, efficiency, and task narrowness. Too often AI creates uncanny, weird, and frustrating user experiences — especially when the AI is human-mimicking or treats the user as product. Racism, sexism, and other forms of bigotry appear in AIs with disturbing regularity. These trends aren’t limited to adults. It’s often dystopian to watch children interact with AIs. We’re still unaware of best practices for child usage of AI beyond limiting exposure. Most AI is bad, creepy and doesn’t do what it’s supposed to do. But by understanding the assumptions that created these problems we can aim AI in more positive and productive directions.
Your AI sucks for a variety of reasons. In this introductory post I’ll give an overview of what’s going on and how to fix it. I’m being intentionally brief, as I plan to expand each of the points below into its own article in an upcoming series!
THE TECH IS WEAK AND DOESN’T WORK THAT WELL
Conversational bots break when you go off-script. When they break, you’re often routed to a human without being told you’re now talking to one. Chaos ensues. AI text and photo classifiers are frequently wrong. How many times has “ducking” been forced on you by predictive text? For a while, Google’s predictive text completed “sit on” with “my face.” AIs are sadly prone to sexist, racist, and bigoted mistakes, like Google Photos mislabeling Black people as gorillas. Microsoft made a chatbot trained on Twitter user data that began with phrases like “Humans are cool” and, within 24 hours, generated tweets like “I fucking hate feminists and they should all die and burn in hell.” The consequences can even be fatal: self-driving cars have killed five people to date.
BROKEN TECH HIRING MAKES BAD AI
The people making many of these products aren’t thinking or testing beyond the dominant high-income/able-bodied/cis/straight/White-or-Asian/male tech-worker majority. Despite lip service to the contrary, tech hiring is not diverse. I’ve been through diversity hiring hell. Name-brand companies put me through unnecessarily aggressive, exploitatively time-intensive, and inappropriately rude technical screens. In my experience, the louder a company’s claimed commitment to diversity, the more its nonsensically stressful and Sisyphean hiring practices work against diverse candidates.
Tech hiring is reluctant to consider experienced candidates more than a few years out of university or those from nontraditional backgrounds, and it favors antisocial personality types. The people making the products rarely have expertise in UX/UI, product, psychology, human behaviour, or child development. When they do, that expertise is exploited to manipulate and addict humans, watering down meaning and purpose for profit. Many of the scientists and devs building creative tools don’t even understand creativity. They are making the products that they want, rarely reflecting the interests of diverse users.
YOU ARE THE PRODUCT
Too often you are not the customer; your data is being sold. AIs serve corporate needs over human needs. Why are Google and Facebook and Snapchat free? Why are listening devices like Alexa so cheap? When is Alexa listening? What information is she picking up? When users are the product, personal safety and security can suffer.
KIDS + AI = DYSTOPIA
We have no understanding of the long-term impact of AI on child development. We’re letting children interact with AIs whose parental controls only screen for adult content; child interaction with the product is an afterthought. Have you watched children barking orders at Alexa or Siri? It clearly does not foster compassionate child development. How are these tools going to impact those children years from now?
AI can be made to teach children cooperation and empathy. Ross Ingram, Maslo’s founder and CEO, came from Sphero, the company behind a robotic sphere that encourages learning through play. It has gained a lot of traction in grade and middle schools as a tool for teaching STEM. The Sphero experience is collaborative and cooperative — decidedly absent of children frustratedly barking orders and yelling when the AI doesn’t understand them. We can make more cooperative and empathic AI like this.
YOUR AI DOESN’T HAVE TO SUCK
None of these problems are insurmountable. AI doesn’t need to mimic humans or pass the Turing Test. We can build alternative expressions of AI that can be legitimately useful to humans without trying to be human. Diverse teams should be making and testing products for the greater population, not just what they think is cool based on their very limited and privileged set of biases. User experience should be at the core of AI. Users shouldn’t be treated as product. Tech creators should work more with education and child development specialists, creating productive and healthy products for children.
In the coming weeks I’ll go into more detail, breaking down these problems and offering solutions for creating better AI and technology — the kind that helps us achieve a convenient, compassionate, and utopian future. Pick the right people, ask the right questions, and build the tech you want to see in the world.