Understanding the public perception of AI

GoodAI
Published in GoodAI Blog · Feb 18, 2019

Image via www.vpnsrus.com

In the past few years, various forms of AI have begun to creep into our daily lives, whether we know it or not. Major companies are using AI to improve their services and make their processes more efficient [1].

In 2017, Andrew Ng went as far as to say: “just as electricity transformed almost everything 100 years ago, today I actually have a hard time thinking of an industry that I don’t think AI will transform in the next several years.” [2]

Less than two years on, we see AI being widely used in industries such as retail, finance, healthcare, automotive, public transport, and many more. As the possibilities become clearer and the problem-solving potential grows, this widespread adoption of AI is likely to continue. AI will not only assist us in carrying out various tasks and making better decisions, but will increasingly perform tasks autonomously, replacing us as a result. But what does the public think of AI, and why does it matter?

In this blog post, we first look at recent surveys on the public perception of long-standing societal issues, and then gather information from large-scale surveys on the perception of AI, in order to paint a better picture of how the public views AI.

Wrong about the world?

The deployment of AI, as of any other new technology, depends heavily on public acceptance of its use. In other words, for AI to continue to flourish, it will need the support of the general public. Many future developments depend on public opinion, yet many surveys have concluded that “public perception” does not always coincide with reality. Last year, Our World in Data [3] demonstrated this point in a series of blog posts based on an Ipsos survey [4] which spoke to 26,489 people across 28 countries.

One of the main takeaways was that people, in general, have a rather biased knowledge of the world. For example, the most pressing perceived issue globally was found to be terrorism, which accounts for only 0.06% of deaths globally, while health was seen as only the 8th most important issue despite health-related problems (cardiovascular diseases, cancers, and respiratory diseases) making up over 50% of all deaths worldwide. This kind of bias is also found across other issues.

In their Perils of Perception 2018 study [5], Ipsos showed “how wrong people across 37 countries are about some key issues and features of the population in their country.” For example, people consistently overestimated the proportion of people “unemployed and seeking work” in their own country.

The same overestimation exists for the number of immigrants: in the UK, people hugely overestimate the proportion of immigrants, with an “average guess of 24% when the actual figure is around half that (13%),” and the average guess for the Muslim population is also extremely high, at “four times the actual figure (17% vs the 4% reality).”

Clearly, misperceptions such as these can create great friction within societies. These issues are given an unjustified yet prominent place in social and traditional media and are rarely counterbalanced sufficiently by facts. They gain high visibility and therefore become even more exaggerated. For some issues, such as immigration, even politicians amplify the misperception, as it has proven a valuable tool for winning elections and populist support, often by focusing on emotionally charged issues rather than other social problems.

These studies show that people often have strong biases and find it difficult to accurately grasp the reality of issues that have been present in society for a very long time. These biases seem to be reinforced by traditional and social media, by a lack of education, and even by politicians, and they affect the way people view the world as a whole. It is therefore interesting to explore what people think about emerging phenomena and technologies, and what the related biases are.

Automation

According to a McKinsey Global Institute study [6], between 400 and 800 million people could see their jobs automated and would need to find new jobs by 2030. With automation becoming a serious consideration, it is a key issue for the public. However, a Gallup report showed that people rarely fear losing their own job; automation is seen as a problem which will affect others.

With regard to automation, another recent Ipsos study [7], undertaken in the UK, suggested that the British public “sits somewhere between ignorance and suspicion.” For example, “a majority (53%) would not feel comfortable with AI making decisions which affect them.” This suggests that people have limited awareness of how AI is already making decisions that affect them (e.g. curating their news feeds or informing financial decisions).

Finally, a 2017 study on the automation of work [8] showed that AI at work is perceived as a “threat to most employees” by 35% of respondents and as an “opportunity for employees” by only 15%, while 39% declare AI “an expected evolution, neither positive nor negative.”

The survey shows that many people fear the loss of jobs. However, large-scale job loss as a result of emerging technology is not yet high on government agendas; it is still being discussed on the periphery. This lack of serious discussion around a possible future of work also causes large frictions within society and could see us head straight into mass job loss without a concrete plan for dealing with the repercussions. We have discussed this issue in another blog post [9] which outlines some possible ways forward, dealing with the pace of change, and motivating and retraining the workforce.

Good or evil?

With artificial intelligence, people fear more than just losing their jobs. A recent study by the Center for the Governance of AI at the University of Oxford found that more Americans think high-level machine intelligence will be harmful to humanity than think it will be beneficial [10]. The survey showed that 22% of respondents think the technology will be “on balance bad” and 12% think it will be “extremely bad,” possibly leading to human extinction, while 21% think it will be “on balance good” and 5% think it will be “extremely good.”

The survey also explored some of the biggest fears: data privacy, cyber-attacks, and surveillance were seen as among the most important issues, likely to impact large numbers of people globally, while autonomous weapons, autonomous vehicles, and value alignment were seen as important issues that might impact fewer people.

Most of the issues were perceived as highly important (between 2.50 and 2.60 on a scale from 0 = not at all important to 3 = very important). The fears of the American public are somewhat in line with the recent paper The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation [11] in identifying potential threats. In this paper, the authors recommend that “policymakers should collaborate closely with technical researchers to investigate, prevent, and mitigate potential malicious uses of AI,” and that we should “actively seek to expand the range of stakeholders and domain experts involved in discussions of these challenges.” The study also found that “Americans trust tech companies and non-governmental organizations more than the government to manage the technology.” It is therefore vital that these dialogues take place and include a wide range of stakeholders, in order to help the flow of information to the public and to avoid hysteria and the spread of misinformation regarding AI.

One of the main fears around AI and other technologies is that they will be used to reinforce viewpoints or biases that people already hold. An example is the technology used to customize people’s social media news streams. People see only what they are interested in, which has been criticized as creating “filter bubbles” or “echo chambers,” where people are not exposed to different views and ideas, but only to one specific set of ideas that is continually reinforced. In their recent paper Filter bubbles and fake news [12], DiFranzo and Gloria-Garcia suggest that these phenomena heavily impacted the 2016 Brexit referendum in the United Kingdom as well as the presidential election in the United States in the same year. As we explored above, people clearly have biases about certain issues, and AI can be used to further entrench them. If AI is going to gain a positive reputation in society, efforts need to be made to avoid these situations.
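The mechanics behind such bubbles are simple enough to sketch. Below is a minimal, hypothetical simulation in Python (the topics, weights, and click model are illustrative assumptions, not drawn from any real platform) showing how a recommender that weights content by past clicks, paired with a user who clicks what they see most, quickly narrows a feed to a single topic:

import random

# Hypothetical topic set; illustrative only, not from any real platform.
TOPICS = ["politics", "sports", "science", "culture", "economy"]

def recommend(click_history, n=10):
    # Serve a feed weighted toward topics the user has clicked before.
    if not click_history:
        return random.choices(TOPICS, k=n)
    weights = [1 + 5 * click_history.count(t) for t in TOPICS]
    return random.choices(TOPICS, weights=weights, k=n)

def simulate(rounds=20):
    history = []
    for _ in range(rounds):
        feed = recommend(history)
        # The user clicks whatever topic dominates the feed:
        # exposure and engagement reinforce each other.
        clicked = max(set(feed), key=feed.count)
        history.append(clicked)
    return history

if __name__ == "__main__":
    history = simulate()
    for topic in TOPICS:
        print(f"{topic:10s} {history.count(topic):3d} clicks")

After a handful of rounds, almost all clicks concentrate on whichever topic happened to dominate early on. Breaking the loop requires deliberately injecting diverse content, which is exactly what the countermeasures discussed in the conclusions would have to do.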

Possible Futures

The public perception of AI is important because it could direct the future of AI research and development. Many scenarios are possible, and which one unfolds will depend heavily on the media and on the discourse led by politicians.

One scenario is underplaying the impact AI might have on society. This may lead to a lack of preparation for change and poor adjustment at the national level (e.g. insufficient public funds devoted to retraining and to research on counterbalancing job loss). As mentioned above, alternatives to work once automation kicks in are an issue that has not been discussed enough in the public eye and has so far been rather neglected by politicians.

The opposite scenario could also take place: a hysterical public reaction (similar to the reactions we see to immigration and terrorism) that negatively impacts AI research and development and halts the uptake of automation. Either scenario would likely undercut a country’s international competitiveness.

These are only two scenarios, but they demonstrate the dangers of miscommunication.

Conclusions

AI can be used both to deepen and to fight various biases, including biases in the perception of AI itself. To counter the amplification of existing biases and the spread of fake news, it is vital that powerful, probably AI-based, mechanisms are put in place to protect users and customers against sophisticated media strategies and ad manipulation. Such mechanisms may help limit the damage done and bring people’s perceptions closer to reality. Efforts to stamp out fake news, or to label ads personalized on the basis of one’s past views, mood, or bio-indicators (which leave people more open to manipulation), could also help align public perception with reality.

In addition, we propose working on a strategy for communicating with people about AI in a comprehensive and constructive manner: recognizing the risks while avoiding hysteria. By starting with the issues that resonate most with the public, we can provide fruitful ground for engagement and wider collaboration.

These efforts should help us prepare adequate “Future checklists”, a set of concrete recommendations which can help citizens start preparing, today, for a future with AI.

References

[1] 16 Examples of Artificial Intelligence (AI) in Your Everyday Life
https://themanifest.com/development/16-examples-artificial-intelligence-ai-your-everyday-life

[2] Andrew Ng: Why AI Is the New Electricity. Stanford Business. Website.
https://www.gsb.stanford.edu/insights/andrew-ng-why-ai-new-electricity

[3] Our World in Data. https://ourworldindata.org/

[4] Global Perceptions of Development Progress: ‘Perils of Perceptions’ Research. https://www.ipsos.com/en/global-perceptions-development-progress-perils-perceptions-research

[5] Our misperceptions about crime and violence, sex, climate change, the economy and other key issues. https://www.ipsos.com/en/our-misperceptions-about-crime-and-violence-sex-climate-change-economy-and-other-key-issues

[6] Jobs lost, jobs gained: What the future of work will mean for jobs, skills, and wages. https://www.mckinsey.com/featured-insights/future-of-work/jobs-lost-jobs-gained-what-the-future-of-work-will-mean-for-jobs-skills-and-wages#part%203

[7] AI, Automation, and Corporate Reputation. https://www.ipsos.com/en/ai-automation-and-corporate-reputation

[8] Revolution@Work: Fears and Expectations. https://www.ipsos.com/en/revolutionwork-fears-and-expectations

[9] GoodAI, (2018). AI and work — a paradigm shift?
https://medium.com/goodai-news/ai-and-work-a-paradigm-shift-7b314268bf05

[10] The public expects high-level machine intelligence to be more harmful than good https://governanceai.github.io/US-Public-Opinion-Report-Jan-2019/high-level-machine-intelligence.html#subsecharmgood

[11] The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation. https://arxiv.org/ftp/arxiv/papers/1802/1802.07228.pdf

[12] Filter bubbles and fake news. https://www.researchgate.net/publication/315953992_Filter_bubbles_and_fake_news
