From chatbots to self-driving cars: what worries people about machine learning?
Utopian and dystopian visions of an AI-dominated future are everywhere, from films to tech company press releases. But what are people really concerned about? The Royal Society created a public dialogue to find out
When we don’t know much about a new technology, we talk in generalisations. Those generalisations are often also extreme: the utopian drives of those who are developing it on one hand, and the dystopian visions that help society look before it leaps on the other.
These tensions hold for machine learning, the set of techniques that enables much of what we currently think of as Artificial Intelligence. But, as the Royal Society’s recently published report Machine learning: the power and promise of computers that learn by example showed, we are already at the point where we can do better than generalisations. Give members of the public the opportunity to interrogate the “experts” and explore the future, and they come up with nuanced expectations in which context is everything.
The Society’s report was informed by structured public dialogue, carried out over six days in four locations around the UK, with participants from mixed socio-economic backgrounds. Quantitative research showed only 9% of people had heard of machine learning, even though most of us use it regularly through applications such as text prediction or customer recommendation services. The public dialogue gave people the opportunity to discuss the science with leading academics. The conversations were seeded with realistic near-term scenarios from contexts such as GPs’ surgeries and schools.
The results showed common themes, but they also revealed how, when it came to balancing potential risks and benefits, people gave very different weightings depending on what was at stake.
Participants talked about potential advantages such as objectivity and accuracy: better an expert, well-tested diagnostic system than a human doctor unable to keep up with the latest literature and over-tired on the day. They raised the benefits of real efficiency in public services: systems that might relieve pressure on front-line workers such as police officers or teachers. Even in time-limited discussions, participants often came up with ideas about how machine learning could enhance rather than simply replace existing tasks or jobs. And they saw the potential for machine learning to address large-scale societal challenges such as climate change.
At the same time, they were concerned about the depersonalisation of key services. The tired human doctor would still be essential to any conversation with the patient about the meaning of an important diagnosis (and some were sceptical about the likelihood of accurate diagnostic systems for mental illnesses, at least in the near term). They discussed whether the use of machine learning systems to augment experiences they currently enjoyed — from driving to writing poetry — might make these experiences less personal or ‘human’.
Posted on 7wData.be.