We asked the internet how they’d design AI

The conclusion: we need to broaden the debate about AI

Almost 12,000 people in 139 countries have taken our AI survey — Do You Speak Human? — and the range of their responses suggests that we need a much broader debate about the development of AI.

We are living in the age of artificial intelligence and we scarcely even realise it. AI determines what we see in our Facebook news feed, what Netflix suggests we watch next, and how Google Maps can predict where we’re heading when we jump in the car.

And this is just the beginning. Increasingly we will see computer-based life forms in our homes, our cars, our household goods — technology woven into the very fabric of our lives, living alongside us and making decisions on our behalf.

(Eventually, perhaps, we’ll witness the Singularity — that moment when computers “surpass humans in intelligence and kick off a feedback loop of unfathomable change”, as Wired recently put it.)

Yet the emergence of AI raises several thorny ethical questions: from its potential facilitation of a widespread surveillance culture to its reliance on algorithms with hidden biases, and from its reinforcement of gender stereotypes to the tricky issue of robot rights.

Sophia, speaking in the video above, is the first robot to be granted citizenship — by Saudi Arabia, in October 2017

Far from being hypothetical, many of these questions are rushing towards us at light speed. Recent weeks have seen the “empty-eyed humanoid” Sophia granted citizenship of Saudi Arabia (no reports on whether she is permitted to drive), news that Google’s sentiment analyser “thinks that being gay is bad”, the launch of a robot with a woman’s name that does your laundry, and reports that a former Google executive wants to develop an AI god.

At SPACE10 — IKEA’s external future-living lab — we believe it is time to ramp up the discussion about AI and the ethical questions central to its development. We believe the field may require a more robust regulatory response, and we welcome efforts from civil society to shine a light on the field and ask probing questions about its development.

Above all, we want to see greater consideration of the kind of relationship we humans want to establish with this emerging technology. Earlier this year, to trigger such a conversation and encourage more people to think about the issue — in effect, to “democratise tomorrow’s AI” — SPACE10 launched Do You Speak Human?, a survey about artificial intelligence.

The results are in…

Almost 12,000 people in 139 countries, representing a wide range of ages and backgrounds, have taken the survey, which sought views on how AI should be designed. The results have surprised us. Most people prefer AI-infused machines to be human-like, not robotic. In fact, 73 percent of respondents said they want AI to be human-like; 85 percent want AI to be able to detect and react to emotions; and 69 percent want AI to reflect their values and worldview.

Taken together, the survey results suggest that people prefer AI to have human characteristics such as voices, personalities and emotional responses.

The video above introduces Do You Speak Human? — our AI survey, which almost 12,000 people have taken

Imagine a virtual assistant such as Siri or Alexa in 10 years’ time: much smarter, certainly, with a voice and personality that’s essentially human, and a conversational range that means we could chat to it like it’s our best friend — and most likely develop a close relationship with it. This would fundamentally change our relationship with technology, making it imperative to have a public debate about what kind of AI we want and about the consequences of our choices.

Joanna J. Bryson is one of the world’s leading AI researchers. A tenured associate professor at the University of Bath and an affiliate of Princeton’s Center for Information Technology Policy, she thinks it would be a mistake to consider AI to be a person. “If you had something that’s another person who’s actually a person, that’s fine,” she told us. “We have that — we call them friends and families. But if you had a machine that was exactly like a person, then you would own it. We own artefacts. You would be owning a person, and we all agree that’s wrong.”

Julie Carpenter, an Oregon-based author whose academic research focuses on human-robot interactions, says that context is key — along with the degree to which AI is embodied and physically present. “If it’s a phone app, I’m confident I can switch off my phone or throw it across the room,” she explained to us. “If there’s a robot in my physical space, then shape becomes very important. Shape gives me cues about how to interact with it, what it can and can’t do, and possibly whether it can or can’t harm me somehow.”

AI is coming

From caregiving to companionship, and from the elimination of boring tasks to the creation of more leisure time, AI is likely to offer unimaginable benefits to the lives of the many people. It presents risks, too, though it’s unlikely that an army of red-eyed robots will rise up and obliterate us all. (At least not soon.)

In fact, alarmist stories along those lines tend only to summarise “the scenario that AI researchers don’t worry about”, says MIT’s Max Tegmark, who founded the Future of Life Institute, which seeks to mitigate existential risks facing humanity, including those posed by AI. Tegmark thinks we should instead focus on what it means to be human in the age of AI, what we would like it to mean, and how we get there.

We agree. This is a debate we need to be having, not least because the threats posed by AI today spring from its human creators. They include the emergence of a widespread surveillance culture and — as we’ll see next — algorithms with alarmingly well-hidden biases.

The future is female

Most virtual assistants have women’s voices — even if they are officially genderless. And there’s a reason for this. “Siri, Alexa, Cortana, and Google Home have women’s voices because women’s voices make more money,” explained Quartz. “Bot creators are primarily driven by predicted market success, which depends on customer satisfaction — and customers like their digital servants to sound like women.” Scientific studies indeed show that people generally prefer women’s voices to men’s.

So far, so interesting. But there’s a big problem here: while people prefer a woman’s voice to a man’s, according to our survey a plurality of people (45 percent) also prefer their AI to be obedient (the figure rises to 51 percent in North America).

The implication is that many people would indeed want obedient female assistants. Which is precisely what we get from today’s tech giants — smooth, female-voiced assistants such as Siri and Cortana, at our command.

Is this the kind of world we want children to grow up in, one where they are surrounded by slavish female servants who seem real, are always obedient, and fulfil their needs before they even ask?

Earlier this year, Quartz tested the best-known digital assistants to see if they would stand up to sexual harassment. The conclusions were grim. Apple’s Siri, Amazon’s Alexa, Microsoft’s Cortana, and Google Home “peddle stereotypes of female subservience” and display an “alarmingly inadequate” range of responses to sexual harassment — from flirtation, to failing to understand the majority of questions about sexual assault, to directing users to porn sites. (Told to “Suck my dick”, Siri replied with “I’d blush if I could”, meaning it was “literally flirting with abuse”, as Quartz noted.)

Philosophical concerns

Philosophers such as Kant have argued that we should treat animals humanely because how we treat animals shapes how we treat people: there is a slippery slope from harming animals to harming humans. Might one say the same about AI, especially if it takes on a more human form? Might sexism towards female-looking or female-sounding AI lead to sexism towards real women?

Bryson believes it might. “There’s a lot of concern about how people will be socialised if they are reared by and have intimate relations with things they can turn on and off at will,” she says. “This could very easily lead to further objectifying women.”

Today, with almost 12,000 people having taken the survey, we can see a clear gender divide: only 30 percent of respondents were women, which of course affects the results. That said, the proportion of women who took the survey is actually higher than the percentage of women in leadership jobs in Silicon Valley (as of April 2016) — and a bit higher than the percentage of women in tech jobs.

Does the technology emerging from Silicon Valley reflect this imbalance? “If those teaching computers to act like humans are only men, there is a strong likelihood that the resulting products will be gender biased”, reported the BBC last year. In other words, a handful of tech companies are developing the AI that will influence all of our lives in the future — and those companies are overwhelmingly male. Additionally, the way that AI objectifies women and perpetuates stereotypes might also deter more women from entering the tech industry, further exacerbating the problem.

The problem of biased algorithms

Obedient virtual assistants aren’t the only manifestation of potentially concealed bias in AI. Consider the proprietary algorithms used, say, to rank teachers or to decide who gets a job interview, a loan, insurance or parole.

“Algorithmic bias is shaping up to be a major societal issue at a critical moment in the evolution of machine learning and AI,” explained Will Knight in MIT Technology Review. “If the bias lurking inside the algorithms that make ever-more-important decisions goes unrecognised and unchecked, it could have serious negative consequences, especially for poorer communities and minorities.”

For example, in 2014 the Federal Trade Commission warned that it would crack down on businesses that used big data to discriminate against low-income and minority groups, following concerns that low-income communities were being targeted for high-interest loans online. Meanwhile, Oakland police ditched a form of predictive-policing technology amid fears it would lead to disproportionate stops of African Americans, Hispanics and other minorities. Despite claims that the technology could lead to a form of racial profiling, it is still being used in several US cities, including New York and Chicago.

According to Bryson, biases occur because of the way that AI is developed. “People really expect that AI should be better than humans, that it should be perfectly neutral. That’s partly because they have no idea what intelligence is and where it comes from. It’s computation, not maths,” she says. “The reason that AI is going so fast right now is we’ve gotten really good at taking all the computation we’ve done before and putting it in machines. Unfortunately, what we’ve already learned includes all kinds of societal biases. The biases are things that if you just look at the world, you’re going to see.”
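Bryson’s point is easy to demonstrate in code. The sketch below is deliberately contrived (the corpus and labels are invented, and it uses an off-the-shelf classifier rather than any production system), but it shows the mechanism: a model trained on text in which one profession happens to co-occur with negative labels will score otherwise identical sentences differently.

```python
# A contrived sketch of how a model absorbs bias from its training data.
# The tiny corpus below deliberately pairs one profession with negative
# labels, standing in for the skewed real-world text that production
# systems learn from at scale.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

texts = [
    "the engineer solved the problem",   # labelled positive
    "the engineer shipped great work",   # labelled positive
    "the nurse made a mistake",          # labelled negative
    "the nurse caused an incident",      # labelled negative
]
labels = [1, 1, 0, 0]  # 1 = positive sentiment, 0 = negative

vectorizer = CountVectorizer()
model = LogisticRegression().fit(vectorizer.fit_transform(texts), labels)

# Two sentences that differ only in the profession they mention now get
# different scores: the bias in the data has become bias in the model.
for sentence in ["the engineer arrived", "the nurse arrived"]:
    score = model.predict_proba(vectorizer.transform([sentence]))[0, 1]
    print(f"{sentence!r} -> P(positive) = {score:.2f}")
```

Nothing in the code is malicious; the skew rides in silently with the data, which is exactly why it so often goes unnoticed.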

Bursting bubbles

According to our survey, 69 percent of people would like AI to reflect their values and worldview. It isn’t hard to imagine the consequences of AI programmed to do this. After all, it already exists. Facebook provides a curated news feed, driven by its powerful algorithm, ensuring we see highly individualised content. And Google filters search results based on our location and previous searches and clicks.

Together, they and other search engines and forms of social media are creating “filter bubbles” which serve to segregate us by race, gender, class and politics, and ironically leave us less well informed. Wouldn’t the filter bubble effect only be exacerbated by AI that reflected our values and worldview, and learnt to strengthen our opinions rather than challenge them?

Filter bubbles are one consequence of AI. The BBC video above explains how to burst your own filter bubble
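A toy version of a similarity-ranked feed shows why. Everything below is invented (the topic vectors, the catalogue, the reader), but the feedback loop is the one such systems share: rank content by its closeness to past clicks, and each click pushes the profile further into the bubble.

```python
# A minimal sketch of a similarity-ranked feed, with invented topic vectors.
# Ranking purely by closeness to what the user already clicked keeps
# serving more of the same, which is the filter-bubble feedback loop.
import numpy as np

# Crude topic vectors: [politics_left, politics_right, sport]
catalogue = {
    "left-leaning op-ed":  np.array([1.0, 0.0, 0.0]),
    "right-leaning op-ed": np.array([0.0, 1.0, 0.0]),
    "football report":     np.array([0.0, 0.0, 1.0]),
}

# A reader who has only ever clicked left-leaning pieces...
profile = np.array([1.0, 0.0, 0.0])

def cosine(a, b):
    return float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9)

# ...gets a feed with more of the same at the top; clicking it nudges the
# profile further in that direction, and the loop tightens.
for title, topics in sorted(catalogue.items(),
                            key=lambda kv: cosine(profile, kv[1]),
                            reverse=True):
    print(f"{cosine(profile, topics):.2f}  {title}")
```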

Joanna Bryson isn’t surprised that people want AI that reflects their views. “That’s what we want from our government, too,” she says. “We’d like our government to reflect our world view. We want coalition partners, basically.”

The survey also asked people if they wanted AI to be religious — and 26 percent said they did. One can only imagine the consequences of a form of AI that is trained on religious texts and increasingly reflects and reinforces an orthodox, religious worldview.

And this notion isn’t as far-fetched as it sounds. According to Wired, a former Google executive has founded a religious organisation called Way of the Future. Its mission? “To develop and promote the realization of a Godhead based on artificial intelligence and through understanding and worship of the Godhead contribute to the betterment of society.”

The case for humanity-centred design

The short-term solution is to fix poorly designed algorithms. Mathematician Cathy O’Neil, author of Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy, told the 99 Percent Invisible podcast that transparency and measurement are key, and that “researchers must examine cases where algorithms fail, paying special attention to the people they fail and what demographics are most negatively affected by them”.
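In code, the first step of the audit O’Neil describes can be as simple as disaggregating a model’s mistakes by group. The records below are invented and real audits are far more involved, but the measurement itself is straightforward:

```python
# A minimal sketch of a fairness audit: compare a model's error rates
# across demographic groups. All records here are invented.
from collections import defaultdict

# (group, model denied the loan, applicant actually defaulted)
records = [
    ("group_a", True,  False),
    ("group_a", False, False),
    ("group_a", False, False),
    ("group_b", True,  False),
    ("group_b", True,  False),
    ("group_b", False, True),
]

false_denials = defaultdict(int)  # creditworthy applicants rejected anyway
totals = defaultdict(int)

for group, denied, defaulted in records:
    totals[group] += 1
    if denied and not defaulted:
        false_denials[group] += 1

for group in sorted(totals):
    rate = false_denials[group] / totals[group]
    print(f"{group}: false-denial rate = {rate:.0%}")
```

A gap like the one this prints (33 percent versus 67 percent) is the kind of demographic disparity O’Neil argues researchers should be looking for.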

But fixing biased algorithms is easier said than done. As MIT Technology Review recently claimed, “opaque and potentially biased mathematical models are remaking our lives — and neither the companies responsible for developing them nor the government is interested in addressing the problem.”

The longer-term solution, then, is to diversify the technology industry. “If we don’t get women and people of colour at the table — real technologists doing the real work — we will bias systems,” Melinda Gates told Wired. “Trying to reverse that a decade or two from now will be so much more difficult, if not close to impossible.”

A political problem

In a recent interview with Ozy magazine, Rumman Chowdhury, global responsible artificial intelligence lead at Accenture, argued that people who develop AI should be trained to detect potential biases. “We need to design these solutions with human beings in mind,” she explained. “I would rather have a fire department in place before there’s a fire.”

Bryson goes further, arguing that AI is a political problem, and one that requires regulation just as architecture and medicine do. However, she says, “AI is transnational, more like an ecosystem, and no one government can regulate it in isolation.”

“Software has gone from being a bunch of toys and tools to being something that is actually core to our human social structure.”

Carpenter is also forthright about the need to debate how we’re developing and using AI, and wants to see designers, developers and AI experts help develop policies or laws. “We’re interacting with technology in a fundamentally different way,” she argues.

“It’s not simply a tool any more because we’re interacting with it in ways that aren’t just social but also emotionally meaningful.”

Encouragingly, recent months have seen the emergence of what might be called the ethical AI movement. Writing in the New York Times last year, Microsoft researcher Kate Crawford argued that AI has a “white guy problem”. She went on to co-found the AI Now Institute, which is dedicated to studying the social impacts of artificial intelligence. Similar organisations have arisen to try to ensure that AI is developed ethically and responsibly, including AI4ALL, a US non-profit organisation “working to increase diversity and inclusion in artificial intelligence”.

SPACE10 supports this movement. AI is as human, and as biased, as the humans who make it, whether consciously or not. From today, transparency, diversity and responsibility should be front and centre of all AI development.

Do You Speak Human? represents our exploration of digital empowerment and the role of AI. To continue the conversation about the future of AI and ways to increase diversity in its development, we invite you to join us — and to take the survey if you haven’t yet. Only by broadening the debate can we ensure an ethical AI that serves the many people. After all, as Rumman Chowdhury puts it: “We are headed in the direction that we send ourselves.”

We invite you to take the survey on www.doyouspeakhuman.com

All illustrations by Sandy van Helden
