What’s stopping UK businesses from adopting AI & robotics?

Just because a machine can do something does not mean it will be bought, integrated and licensed to do so

The RSA
Sep 18, 2017

By Benedict Dellot and Fabian Wallace-Stephens

Follow Benedict and Fabian on Twitter @BenedictDel @Fabian_ws

We consider whether and how AI and robotics are being adopted in practice in the UK

This article is an extract from the RSA report The Age of Automation: Artificial intelligence, robotics and the future of low-skilled work

The diffusion of technology across an economy is a drawn-out process and far from guaranteed. McKinsey’s thorough analysis of historical adoption rates for 25 major technologies found it took between 8 and 28 years from the birth of a commercial innovation to its maximum take-up. A separate UK study of 14 physical technologies found it took on average 39 years to go from invention to widespread commercialisation.

One need only look at the experience of the personal computer to see how technology often infiltrates the workplace at a snail’s pace. The first — the MITS Altair — was introduced in 1975, soon followed by the Apple computer. But by 1980 still only a million had been sold in the US, and it took many more years before they became a common sight in offices, factories, hospitals and schools. Another case in point is internet of things (IoT) technology: still only half of UK consumers own an internet-connected device.

What does the data tell us about AI and robotics? Unfortunately there is little available information on the distribution and take-up of AI systems. However, there is data regarding the extent of robot sales. The International Federation of Robotics (IFR) estimates that worldwide robot sales increased by 15 percent in 2015 to reach over 253,000 — by far the highest number ever recorded. Should this trend continue, the IFR expects the worldwide operational stock of robots to grow from 1.63 million in 2015 to 2.59 million by the end of 2019.

Other data, however, suggests a slower degree of diffusion, with wide variation across sectors. Boston Consulting Group believe that less than 8 percent of tasks in the US transport-equipment industry are automated today, versus a potential of 53 percent. They also say driverless cars will make up just 10 percent of all vehicles by 2035 — a claim that jars with media depictions of fleets of autonomous vehicles roaming streets in the near future.

Adoption rates of AI and robotics by business size:

Source: RSA/YouGov survey of 1,111 UK business leaders (Fieldwork conducted 10th-18th April 2017)

Gross Fixed Capital Formation in the UK (% of GDP):

Source: World Bank data

The UK appears to be a laggard in the adoption of AI and robotics. Sales of industrial robots to the UK actually decreased in the period between 2014 and 2015, with the UK purchasing fewer robots than France, the US, Germany, Spain and Italy. In 2015 the UK had just 10 robot units for every million hours worked, compared with 131 in the US, 167 in Japan and 133 in Germany. While this may reflect our different sectoral make-up, UK businesses and public services as a whole suffer from stubbornly low rates of investment. ONS data shows spending on gross fixed capital formation — a measure of investment that includes plant and machinery, software and new dwellings — has barely grown in real terms over the last decade. Going further back, data from the World Bank shows the proportion of UK GDP accounted for by gross fixed capital formation has fallen by 7 percentage points since 1990 (see the graph above).

For the avoidance of doubt, we asked the business leaders in our YouGov poll whether they were deploying AI and robotics today, or whether they planned to in the near future (see the chart above).

The results speak for themselves:

  • Just 14 percent said they had invested in, or were about to invest in this technology.
  • A further 20 percent said their business wants to invest but that it would take several years to ‘seriously adopt’ it.
  • The remainder said they either thought it was too expensive (14 percent), not yet properly tested (15 percent), or ‘none of these’ (34 percent), which we assume includes many who are unaware of the latest innovations.

It is also striking that small businesses are considerably less likely than their larger counterparts to have embraced AI and robotics, with just 4 percent falling into this category compared with 28 percent of large firms.

Other research reveals a similar story. A Cisco and Capita survey of business ICT decision makers found that while 50 percent view AI as relevant to their organisation, just 8 percent are currently putting it to use. For robotics, the figures are 39 percent and 10 percent respectively. A global survey of 3,000 companies by MIT Sloan Management Review found just 39 percent have an AI strategy in place.

The failure of individual technologies and tech businesses is also telling. Following a poor run of sales, Johnson & Johnson decided last year to discontinue its Sedasys machine, which was designed to automate the administering of anaesthetics. Elsewhere, Aethon’s robot TUG — a machine that undertakes basic deliveries of medicines in hospitals — was recently reported to be suffering low take-up rates. Nor is slow adoption limited to the healthcare industry. Market research company TechEmergence reports that ‘big box’ retailers are also ‘extremely slow to adopt cutting-edge technologies’.

What might be holding these and other organisations back? Here we explore 4 key hurdles to technological adoption: cost and business models; consumer preferences; regulatory concerns; and organisational integration.

Cost and business models

AI and robotic systems, like all technologies, have fallen in price over time and will continue to do so. The cost of purchasing and deploying a spot welding machine in US car manufacturing fell from $182,000 in 2005 to $133,000 in 2014 (not adjusted for inflation), and is expected to fall further still to $103,000 by 2025.
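As a rough check on these figures, the implied annual rate of decline can be computed directly (a back-of-envelope sketch using the nominal prices cited above, not a calculation from the report):

```python
def annual_rate(start_price, end_price, years):
    """Implied constant annual rate of price change over a period."""
    return (end_price / start_price) ** (1 / years) - 1

# Spot-welding machine costs cited above (nominal USD)
observed = annual_rate(182_000, 133_000, 2014 - 2005)   # roughly -3.4% a year
projected = annual_rate(133_000, 103_000, 2025 - 2014)  # roughly -2.3% a year
```

On these numbers, the expected decline to 2025 is actually gentler than the one already observed, a reminder that cost curves tend to flatten rather than fall forever.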

Yet many machines remain beyond the reach of organisations, not least small ones operating in tight-margin industries. RIKEN’s Robear robot, which is used to lift patients in social care, comes with a price tag of between $168,000 and $252,000. Machines to pick soft fruit during harvests can set farmers back around $250,000. On top of this initial outlay come costs associated with maintenance, training and insurance.

Another dilemma for organisations deciding whether to invest in new technology is the prospect of obsolescence. Why invest in a RIBA robot if there are rumours of a more sophisticated caring robot just around the corner? And why plough money into a fraud detection algorithm with 95 percent accuracy if there are expectations one will soon emerge with 99 percent accuracy?

These risks are lowered by software as a service (SaaS) agreements, whereby AI software can be licensed on a subscription basis. This ‘plug and play’ model is also appearing in robotics, with both Savioke’s hotel concierge robots and Starship Technologies’ delivery droids operating on rental rather than purchase models. But the prospect of being tied into an expensive contract can still be off-putting for some businesses and public services.

Organisations must also reflect on their wider business strategy, and weigh up the cost of a new technology versus the savings that could be made on staff and efficiency improvements (e.g. fewer accidents and fewer interruptions in production runs). For organisations employing well-paid and highly-skilled staff, there may be an obvious case for buying in machine alternatives (one reason why the financial industry is bracing itself for significant disruption). However, for organisations operating in low-skilled and low-paid sectors, including care homes, restaurants, bars and some factories, it will continue to be cheaper to employ people.
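The trade-off can be made concrete with a simple payback calculation (all figures below are hypothetical, purely for illustration):

```python
def payback_years(machine_cost, annual_running_cost, annual_staff_saving,
                  annual_efficiency_saving):
    """Years for a machine's savings to cover its up-front and running costs.

    Hypothetical figures; a real appraisal would also discount future
    cash flows and price in obsolescence risk.
    """
    net_annual_saving = (annual_staff_saving + annual_efficiency_saving
                         - annual_running_cost)
    if net_annual_saving <= 0:
        return float("inf")  # the machine never pays for itself
    return machine_cost / net_annual_saving

# A well-paid role automated: the machine pays back quickly
high_skill = payback_years(250_000, 10_000, 60_000, 15_000)   # roughly 3.8 years
# A low-paid role: the same machine takes far longer to justify
low_skill = payback_years(250_000, 10_000, 18_000, 2_000)     # 25 years
```

With the same machine cost, the well-paid role pays back in under four years while the low-paid one takes twenty-five; this is one reason automation pressure tends to concentrate in higher-wage work first.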

Organisations that expect to change their business model in the foreseeable future are also likely to have reservations about purchasing new machines. For example, a care home provider thinking of pivoting into domiciliary care will be wary of investing in robotic hoists and other machines if these cannot be used in a different setting.

Automation and the National Living Wage

The introduction of the National Living Wage (NLW) in 2016 was a welcome development for low paid workers, yet its effects on tech adoption remain to be seen. The NLW is currently set at £7.50 for over 25s and is expected by the Office for Budget Responsibility to reach £8.31 by 2020. Rising staff costs could encourage employers to seek out productivity gains through automation. Alternatively, employers may choose to swallow the extra expense via a reduction in profits, or pass on the costs to consumers in the form of higher prices. A survey by the Resolution Foundation in 2015 found that 30 percent of employers affected by the NLW would seek to raise productivity in response, with 20 percent opting to take lower profits, and 15 percent planning to reduce the number of their employees or slow down recruitment. This early analysis suggests that minimum wage rises need not necessarily lead to job losses, and may even spur innovation.
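The implied pace of wage growth is worth a quick check (a rough compounding calculation on the figures above, not from the report):

```python
# If the NLW rises from £7.50 (2017) to the OBR's expected £8.31 by 2020,
# the implied compound growth over those three years is:
implied_growth = (8.31 / 7.50) ** (1 / 3) - 1
print(f"{implied_growth:.1%}")  # prints "3.5%"
```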

Consumer preferences

In his book Humans Are Underrated, the US journalist Geoff Colvin urges observers of AI and robotics to spend less time analysing what these technologies are capable of, and more time questioning what we want them to do. He asks:

“what are the activities that we as humans, driven by our nature or realities of daily life, insist be performed by other humans?”

Time will tell where consumer preferences lie, but there are almost certain to be cultural ‘no-go zones’ where the use of AI and robotics is deemed publicly unacceptable. One might expect most people to be unfazed by a fully automated financial advisory service, but less relaxed about receiving a life-or-death health diagnosis from an AI interface.

A recent study by Nesta, however, reveals a more complicated picture. Their survey of the UK public found that more people would be willing to sit in a driverless car where ‘you do not need to use the steering wheel’ (36 percent) than to get rid of cash completely so all payments would be through digital currencies (28 percent).

In another sign that people prefer the human touch in financial transactions, the robo-advisory service Betterment recently began offering the services of human financial advisors for the first time.

Overall, the UK public appears to be less sanguine about the use of new technology than citizens in other countries.

Nesta ranks the UK 5th on its ‘openness’ to new technology among the 9 European countries it surveyed, above France and Germany but below Spain and Italy. People under 35, university graduates and Londoners tend to score higher on the openness scale — but not by a considerable amount.

One reason for these cross-country differences may be cultural sensibilities and even religious associations with technology. In Japan, for example, the main religion is Shintoism, a form of ‘animism’ that holds that inanimate objects have spirits. This may explain the country’s deep-rooted enthusiasm for robotics, and the zeal with which it has embraced machines for sensitive tasks such as caring.

Beyond cultural and religious dispositions, there may be psychological barriers that hamper the take-up of AI and robotics. Fascinating research from academics at the University of Oxford and Cornell University suggests that humans are hardwired to distrust any entity that makes moral decisions through rigid calculations of costs and benefits, as machines do. This way of making decisions — called ‘consequentialism’ — sits in contrast to rules-based decision-making, in which certain actions are deemed “just wrong”, even if they bring about better consequences for all. In the classic philosophical dilemma of a runaway trolley hurtling down a track, refusing to push someone into its path to save 5 others would be the rules-based approach in action. Thinking of what this means for automation, the researchers write:

“It may not be enough for us that machines make the right judgements — even the ideal judgements. We want those judgements to be made as a result of the same psychological processes that cause us to make them… Until technology is capable of this feat, any attempts at making ethically autonomous machines will likely be met with suspicion”.

‘Why are we reluctant to trust robots?’, The Guardian
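The contrast between the two styles of moral reasoning can be caricatured in a few lines of code (a toy sketch, not a claim about how any real system decides):

```python
def consequentialist_choice(lives_saved_by_pushing, lives_lost_by_pushing):
    """Weigh outcomes only: act whenever the sums come out ahead."""
    if lives_saved_by_pushing > lives_lost_by_pushing:
        return "push"
    return "refrain"

def rules_based_choice():
    """Some actions are 'just wrong' regardless of the arithmetic."""
    return "refrain"  # using a person as a means is ruled out outright

# In the trolley dilemma (push one person to save 5), the approaches diverge:
consequentialist_choice(5, 1)  # "push"
rules_based_choice()           # "refrain"
```

The researchers’ point is that even when a machine’s verdict matches our own, we distrust it if it was reached the first way rather than the second.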

Regulatory concerns

The third obstacle to AI and robotics adoption is regulation. Earlier this year, the State of California proposed legislative changes that would allow autonomous vehicles to carry passengers without a licensed driver on board, while the US Food and Drug Administration gave the green light to the sale of a ‘black box’ deep learning algorithm for use in healthcare. There are signs that UK regulators are also opening up. The Department for Transport (DfT) has drafted a code of practice for automated vehicles, and the government, in partnership with local authorities, has approved numerous trials across the country, with a view to the UK being “at the forefront” of this industry. Elsewhere, the Financial Conduct Authority (FCA) has created a ‘regulatory sandbox’ that allows start-ups to trial new technologies, including AI, within financial service products.

Yet the regulatory system as a whole is a large, slow-moving juggernaut. While the FCA appears receptive to the use of AI in financial advice, it may be less comfortable seeing algorithms used to offer insurance products, including underwriting. And whereas the DfT may be enthusiastic about the prospect of autonomous vehicles roving the nation’s streets, it has so far insisted that a human remain behind the wheel at all times.

The EU’s new General Data Protection Regulation (GDPR), which comes into force in 2018, brings in rules that may derail or slow down the spread of AI. These include a new right for people to receive meaningful information about the logic involved in, and the significance and envisaged consequences of, automated decision-making systems that affect them. Machine learning approaches, whose inner workings can be difficult to explain even to experts, are likely to jar with this requirement.

AI and robotics will also throw up a host of ethical and legal dilemmas that regulators will have to grapple with, which could in turn stifle or even halt their take-up. Among them are:

  • Discrimination — Equipped with AI systems, organisations will have greater precision in predicting people’s behaviours and the risks they face. This could lead to certain groups being denied access to goods, services and employment opportunities. Insurance companies, for example, may one day be able to use advanced algorithms to determine the likelihood of prospective customers acquiring a disease, making them uninsurable. We have also seen how employers might draw on biased algorithms in recruitment.
  • Privacy — AI and robotic systems rely on harvesting enormous amounts of data to produce accurate outcomes. This is particularly true of machine learning and deep learning approaches to AI, which use reams of data to train algorithms. But will our privacy be compromised in the process? The use of AI in healthcare diagnostics, for instance, could require public services to open up patient data as a training asset to private companies.
  • Agency — Agency may take on a different meaning in a world where technology can understand in depth how to influence people’s behaviour. There are already concerns in Silicon Valley that sophisticated algorithms are being used to hook consumers on apps and other platforms, as documented by Tristan Harris and his movement, Time Well Spent. More troublingly, it is suspected that AI was used to shape voting patterns in the EU referendum through voter profiling and targeted adverts.
  • Authenticity — The spread of ‘lifelike’ AI and robotic systems opens up questions about the sanctity of human relationships. How connected should humans become to machines? And how do we prevent people from being duped into believing that a machine, say an AI chatbot, is a real person? Questions of authenticity are particularly pertinent to the caring industry. A seal-like robot called Paro has proven effective in calming patients with dementia, but some have voiced concern about outsourcing a sensitive task like care to a machine without a conscience.

Of course, what matters is not just the action or judgement a machine makes but the context in which it operates, and how the information it supplies is used. It is unlikely that people or regulators will ever be comfortable with a machine acting as sole arbiter in high-stakes court cases, but such systems could add an extra layer of insight for a human judge or magistrate. Likewise, handing over the job of caring for our loved ones to robots is an alarming prospect in its own right.

However, their deployment is more likely to be acceptable to society if they are paired with humans, or indeed if we are reminded of the shortage of care workers and the limited time they have available to be truly attentive to vulnerable people. Context and the process by which machines are integrated therefore matter greatly.

There is, however, a more clear-cut risk that will faze regulators: cyber-attacks. Artificial intelligence and robotics are susceptible to malicious hacks and could be overridden with damaging results. The autonomous vehicle industry was given a wake-up call in 2015 when a Jeep Cherokee was paralysed on a highway after two computer scientists remotely hacked its onboard systems. Elsewhere, Microsoft’s Twitter bot, Tay, was abused by internet trolls who trained it to regurgitate racist, sexist and homophobic content. AI and robotics can also be used as tools themselves to penetrate systems, mislead people and scale up fraudulent transactions.

These concerns came out strongly in our YouGov poll, where 76 percent of business leaders said the introduction of new technologies tends to lead to increased cyber security risks, posing a significant threat to businesses (see below). Regulators will undoubtedly keep a close eye on these dangers and consider when and where to intervene.

Proportion of business leaders who believe the introduction of new technologies tends to lead to increased cyber security risks, which pose a significant threat to businesses:

Source: RSA/YouGov survey of 1,111 UK business leaders (Fieldwork conducted 10th-18th April 2017)

Operational integration

Even when the technology is cheap, consumers are happy to embrace it, and regulators have given their approval, organisations can still face internal difficulties in integrating AI and robotics. Occasionally there are physical constraints to contend with. A manufacturer may wish to install a new type of machine, but not have the space to do so. Free-moving robots may be stymied by uneven surfaces, platforms, steps and other physical obstacles. In one bizarre case, a security robot in an office building ‘drowned’ itself when it fell into a water fountain. These issues are particularly problematic for domestic robots operating in people’s homes, where owners will be less willing than businesses and public service providers to reconfigure and refurbish their properties. There are also risks associated with vandalism and theft. While the most advanced robots have been trained to operate in complex environments, they may not be prepared for confrontation.

On top of physical constraints are issues of workforce readiness. A recent survey by Deloitte found that just 15 percent of global executives believe they are prepared to have a workforce “with people, robots and AI working side by side”. Staff need training and encouragement to use new technology, while middle managers have to buy into its value and understand what it is capable of. Workers may initially be reluctant to embrace AI and robotics for fear of being usurped or disrupted more generally. A public service health chief we spoke with recalled how his decision to deploy automated transcription software to speed up the write-up of doctors’ notes was initially met with resistance from secretaries, until they saw its potential to relieve them of a thankless task. Unions may also seek to slow down the adoption of AI and robotics in a bid to protect their members. ASLEF, the UK’s trade union for train drivers, warned that the introduction of driverless trains on the London Underground could lead to “all-out war” with Transport for London.

Finally, the integration of AI and robotics may be stalled as business models are updated and supply chains reconfigured. The delivery firm Hermes is currently trialling Starship Technologies’ delivery robots for parcel deliveries in 15-minute time slots. However, before this was possible, the company had to develop a new time-slot booking application, market the offer to customers, and negotiate with London councils over the use of autonomous robots on local footpaths. Sustainable business models will also depend on appropriate insurance products emerging, without which businesses and public services may be reluctant to invest. Insurance providers may themselves be hesitant to devise new products until it is legally established where accountability for wrongdoing lies. For example, if a cancer-detecting algorithm were to misdiagnose a patient, would culpability lie with the creator of the software, the health service deploying it, or the provider of the data on which it was trained?

In the next chapter we reflect on how these four major barriers to adoption might be lifted, with a view to achieving ‘automation on our own terms’. As we will make clear, the slow integration of AI and robotics in our economy should not be viewed as a welcome reprieve from disruptive forces, but rather as a hindrance in our attempt to realise a better world of work.

To find out more about our research, please contact Benedict Dellot

For full references and bibliography please visit the RSA website to download the full report
