Artificial Intelligence (AI) is coming. We are witnessing major breakthroughs in everything from self-driving cars, to computers winning game shows, to the emergence of personal assistants such as Siri, Google Assistant, and Alexa.
One day soon, we might talk to our devices the way we talk to our friends. And our devices will talk back. AI will be in our homes, our cars, our phones, our household goods — technology woven into the very fabric of our lives, living alongside us from dawn to dusk.
Although many experts agree that AI could be either the best or the worst thing to happen to humanity, little serious research is devoted to ensuring the best outcome.
Why we need a more democratic approach
Today, AI is being developed by a handful of tech corporations in China, London and San Francisco, meaning it’s in the hands of the few — not the many.
Indeed, a relatively small group of people — for the most part, privileged white men — are designing AI “personalities” behind closed doors. And what they design will live with all of us. The crucial point is that these programs are not objective, logical, and unbiased. Rather, they will be just as prejudiced as the people who develop them and the data that is fed into them.
Remember Tay? She was a chatbot designed to speak like a teenager, which Microsoft launched on 24 March 2016. Built to mimic and converse with Twitter users in real time, Tay barely made it to nightfall before her tech overlords pulled the plug.
Her crime? Racism, xenophobia, and the very worst of humanity, 140 characters at a time. For good measure, Tay even endorsed Trump for president, saying “he gets the job done”.
Tay was programmed to take her cues from her interactions on Twitter. Clever enough, in theory — but in reality it left her vulnerable to trolls, who duly abused her machine-learning capabilities and taught her to spew a stream of sexist, racist filth.
Like the Mirror of Erised — which, as Albus Dumbledore explains to Harry Potter, shows us “nothing more or less than the deepest, most desperate desire of our hearts” — Tay was merely a reflection of the humans she chatted with. Or, as computer scientists like to say, garbage in, garbage out.
Other examples include search engines showing women fewer ads for high-paid jobs than men, concerns about racial bias inherent in predictive policing software, sophisticated facial-recognition software giving somewhat less sophisticated labels to photos of black people, and a growing corpus of research into AI programmes exhibiting racial and gender biases.
Google swiftly apologised and updated its facial-recognition software. Even so, as the Guardian put it, these examples “raise the spectre of existing social inequalities and prejudices being reinforced in new and unpredictable ways as an increasing number of decisions affecting our everyday lives are ceded to automatons”.
Beyond concerns about biased algorithms, AI also raises fears of a displaced workforce in an increasingly automated economy and of the emergence of a widespread surveillance culture, as well as fears of AI’s ability to out-invent human researchers, outsmart the financial markets, develop weapons we cannot control, and even become an existential threat to humankind.
For all the fears, though, AI could also offer tremendous benefits to the many. Everything that civilisation has to offer is a product of human intelligence, so imagine what we might achieve when that intelligence is magnified by the tools AI may provide. Diseases, climate change, war and poverty could become things of the past.
Do You Speak Human?
It is in this light that SPACE10 — IKEA’s external future-living lab — has launched Do You Speak Human?, a playful research project designed to shed light on what kind of AI we want and to involve more people in thinking about what we are in the process of creating.
We wanted to trigger a broader conversation about AI and involve more people in reflecting on the relationship they wish to have with technology in the future. We believe we need a more democratic approach to AI — and a discussion of what we can do today to improve our chances of avoiding the risks and reaping the benefits tomorrow.
Do You Speak Human? is a survey seeking to understand what kind of AI we want. Do we, for example, want our technology to have a personality, to be human, to have a name? Should it be male or female? Would that make a difference? And should he or she always be obedient?
Moreover, should AI be able to detect and react to our emotions? Fulfil our needs before we ask? Stop us from making mistakes? Know us better than we appear to know ourselves?
Do we want AI to reflect our worldview? Or to challenge it? Do we want it to be religious? Or atheist? And how much privacy and data are we willing to exchange for letting AI ease the process of living?
With Do You Speak Human?, SPACE10 invites you to assemble your ideal AI personality — an opportunity to help us understand what kind of AI we want, and what kind of relationship we’d like to establish with this technology. Only by democratising the design can we ensure AI is a force for good.