What is the difference between AI & robotics?

We lift the lid on two overlapping but distinct technologies

The RSA
15 min read · Sep 17, 2017

By Benedict Dellot and Fabian Wallace-Stephens

Follow Benedict and Fabian on Twitter @BenedictDel @Fabian_ws

This article is an extract from the RSA report The Age of Automation: Artificial intelligence, robotics and the future of low-skilled work

Artificial intelligence and robotic systems can be found in every corner of our economy. Example uses include:

  • Cancer detection — A deep learning algorithm developed by Stanford University is capable of diagnosing cancerous skin lesions as accurately as a dermatologist
  • Media reports — The Associated Press recently adopted machine learning software that can produce 3,000 corporate earnings reports every quarter
  • Construction — A robot called the Semi-Automated Mason (SAM) can lay up to 1,200 bricks a day, compared with the 300 to 500 a human bricklayer is capable of
  • Utility repairs — HiBot USA uses a combination of robotics and AI to predict the likelihood of pipe failures, based on factors such as surrounding soil type and land topography
  • Parcel delivery — Starship Technologies has developed a wheeled robot that can deliver parcels autonomously, and is now being trialled with logistics companies worldwide
  • Patient care — Japan’s Tokai Rubber Industries has developed the RIBA robot, which is being used in health care to lift and move humans up to 175 pounds in weight
  • Fraud detection — Fraugster is a startup that uses machine learning algorithms to spot fraudulent behaviour in financial transactions in as little as 15 milliseconds
  • Housing inspections — Technology company ASI Data Science has created algorithms to predict where unlicensed landlords operate, helping to prevent the exploitation of vulnerable tenants
  • Online shopping — Many retailers use machine learning algorithms to learn customer preferences and offer personalised recommendations

The breadth of applications for AI and robotics is clearly vast. But before we can ascertain what these machines will mean for the workforce, first we need a better understanding of how they function and how they are likely to evolve in the future. What does ‘artificial intelligence’ mean in practical terms? How does it relate to other concepts such as ‘deep learning’ and ‘machine learning’? What constitutes a ‘robot’? And how significant are innovations such as ‘cloud robotics’ and ‘serpentine robotics’?

Here we attempt to lift the lid on these machines, beginning by separating out artificial intelligence from robotics, which are two overlapping but distinct technologies.

Introducing artificial intelligence

Artificial intelligence is complicated to define, but generally refers to tasks performed by computer software that would otherwise require human intelligence. And by ‘software’ we mean a bundle of algorithms that follow a series of steps to arrive at an action or conclusion.

There are two broad types of artificial intelligence: general AI and narrow AI. General AI refers to holistic systems with intelligence equal to or greater than that of humans, which can complete all manner of tasks, from playing chess to greeting customers in a shop to creating works of art. Aside from the most ardent optimists, such as sci-fi writer Vernor Vinge and entrepreneur Elon Musk, most experts believe we are several decades away from seeing machines that can pass for humans. The fundamental stumbling block is that general AI demands an understanding of how ‘intelligence’ works, yet this is an enormous puzzle that will keep research labs occupied for some time to come.

Considerably more progress has been made in the second field of narrow AI, which is sometimes referred to as ‘weak AI’. These are systems that can perform discrete tasks within strict parameters, for example:

  • Image recognition — used in self-service desks at passport control, and automatic name tagging on Facebook photos
  • Natural language processing — used in voice recognition for AI assistants like Amazon Echo and Google Home
  • Information retrieval — used in search engines
  • Reasoning using logic or evidence — used in mortgage underwriting or determining the likelihood of fraud

These tasks can in turn be grouped into three categories of intelligence: sensing, reasoning and communicating. The technology journalist Kris Hammond uses the example of voice assistants like Apple’s Siri and Google’s Assistant to demonstrate how AI systems often combine different functions: first they deploy speech recognition algorithms to capture what people are asking (‘sensing’), then use natural language processing to make sense of what the string of words means and identify an answer (‘reasoning’), and finally relay this answer to users using natural language generation (‘communicating’).
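
To make that pipeline concrete, here is a minimal Python sketch of the sense-reason-communicate loop. The three helper functions are hypothetical stubs standing in for real speech recognition, language understanding and language generation models; none of them is any vendor’s actual API.

    # A toy sense-reason-communicate pipeline. The helpers are hypothetical stubs,
    # not a real assistant's API; each would normally wrap a trained model.

    def transcribe_audio(audio: bytes) -> str:
        """'Sensing': speech recognition turns audio into text."""
        return "what is the weather in london"  # stubbed transcription

    def interpret_query(text: str) -> dict:
        """'Reasoning': natural language processing extracts intent and entities."""
        return {"intent": "weather_lookup", "location": "london"}

    def generate_reply(location: str, summary: str) -> str:
        """'Communicating': natural language generation phrases the answer."""
        return f"It is currently {summary} in {location.title()}."

    def assistant(audio: bytes) -> str:
        query = interpret_query(transcribe_audio(audio))   # sense, then reason
        summary = "cloudy"                                 # stubbed weather lookup
        return generate_reply(query["location"], summary)  # communicate

    print(assistant(b"..."))  # -> "It is currently cloudy in London."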

AI Winters that came and went

But how did artificial intelligence systems get to this point? The concept of thinking machines has existed in serious form since Alan Turing and his contemporaries developed the first sophisticated computers in the 1940s. The Dartmouth College convention of 1956 is often cited as the landmark moment when computer scientists came together to pursue artificial intelligence as a field in its own right, powered by leading thinkers such as Marvin Minsky.

Despite early enthusiasm and significant funding, however, initial progress in developing AI was disappointingly slow.

DARPA, which had pumped millions into university departments during the 1960s, became particularly frustrated at the lack of headway in machine translation, which it had hoped would turbocharge its counter-espionage capabilities. Meanwhile in the UK, a 1973 government commission on AI led by James Lighthill raised grave doubts that the research field was going to evolve at anything but an incremental pace. The result was that government funding in both countries — and across the developed world — was drastically curtailed.

The rise and fall of AI in the consciousness of policymakers and the public continued throughout the 20th century. A new development would trigger a wave of enthusiasm and a surge in funding, only for interest to plunge and resources to dry up as promised innovations failed to materialise.

As many as four ‘AI Winters’ can be identified since the genesis of the movement in the 1950s. Trigger-happy funders and sensational media reporting were partly to blame for inflating the AI bubbles, but so too were the research community’s overzealous predictions. Even Marvin Minsky was caught claiming in 1970 that “[within] three to eight years we will have a machine with the general intelligence of an average human being”.

An overview of artificial intelligence:

The green shoots of an AI spring

Progress was slow partly because of the approaches researchers were using to develop software. Most AI applications of the 20th century took the form of expert systems, which are based on a series of painstakingly developed ‘if-then’ rules that can guide basic decision making (picture a decision-tree with multiple branches). While expert systems are useful for dealing with a contained task — say, processing cash withdrawals under the bonnet of an ATM — they struggle with requests that cannot easily be codified in rules. For example, it is very difficult to write rules that determine whether a human-like object is a mannequin or a real person, or whether a dark pattern on an MRI scan is a tumour or benign tissue. Judgements like these rely instead on ‘tacit knowledge’ that is hard to articulate.
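
As an illustration of the ‘if-then’ style, a toy expert system for the ATM example might look like the sketch below; the rules and thresholds are invented for illustration rather than taken from any real system.

    # A hand-written rule set for approving a cash withdrawal - the classic
    # expert-system style. Rules and thresholds are purely illustrative.

    def approve_withdrawal(amount, balance, daily_limit, card_blocked):
        if card_blocked:
            return False          # rule 1: blocked cards are always refused
        if amount <= 0:
            return False          # rule 2: reject nonsensical requests
        if amount > daily_limit:
            return False          # rule 3: respect the daily withdrawal limit
        if amount > balance:
            return False          # rule 4: no overdrafts
        return True               # every rule passed, so approve

    print(approve_withdrawal(amount=50, balance=200, daily_limit=300, card_blocked=False))   # True
    print(approve_withdrawal(amount=500, balance=200, daily_limit=300, card_blocked=False))  # False

This only works because every case can be enumerated in advance; no comparable rule set captures the mannequin-versus-person judgement described above.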

It was only when new approaches to artificial intelligence were deployed that significant breakthroughs were made — not least thanks to machine learning techniques. Rather than having to write rules from scratch, machine learning works by ‘training’ algorithms using existing data that is often labelled (e.g. images denoted as mannequins or humans, and MRI scans labelled as showing malignant or benign tumours). Working backwards, the algorithms then detect a pattern and create a generalised rule to make sense of future inputs. Machine learning algorithms are now being used in multiple domains, from detecting fraudulent transactions in banking to helping HR teams screen CVs during recruitment.
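
A minimal sketch of this training process, using the open source scikit-learn library and an invented two-feature dataset of labelled transactions, shows the idea; a real fraud detector would learn from millions of far richer records.

    # Supervised machine learning in miniature: fit a model to labelled examples,
    # then apply the generalised rule it has learned to unseen inputs.
    from sklearn.linear_model import LogisticRegression

    # Each row: [transaction amount in pounds, seconds since the previous transaction]
    X_train = [[20, 3600], [35, 7200], [15, 5400],   # labelled legitimate (0)
               [950, 12], [1200, 8], [800, 20]]      # labelled fraudulent (1)
    y_train = [0, 0, 0, 1, 1, 1]

    model = LogisticRegression(max_iter=1000)
    model.fit(X_train, y_train)                      # 'training': infer a rule from the labels

    print(model.predict([[25, 4000], [1100, 10]]))   # -> [0 1], i.e. legitimate then fraudulent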

While machine learning has been powering achievements in AI for the last decade, the spotlight in the last two years has turned to one of its subdomains: deep learning. Deep learning systems are made of ‘artificial neural networks’ that have multiple layers, with each layer given the task of making sense of a different pattern in images, sounds or texts. A first layer may recognise primitive patterns, such as the outline of an object in an image, whereas a second layer may be used to identify a band of colours in that image. Data is fed through multiple layers until the point where the system can cluster patterns into distinct categories, say of objects or words. According to a King’s College London study, deep learning techniques more than doubled the accuracy of brain age assessments when using raw data from MRI scans.
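
The layered idea can be sketched with a small multi-layer (‘deep’) network. The example below uses scikit-learn’s built-in MLPClassifier and bundled handwritten-digit images as a stand-in; the MRI systems described above rely on far larger networks and specialised frameworks.

    # A small neural network with two hidden layers: earlier layers pick up
    # primitive patterns in the pixel data, later layers combine them into
    # higher-level features before the final classification.
    from sklearn.datasets import load_digits
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier

    digits = load_digits()                           # 8x8 grey-scale images of the digits 0-9
    X_train, X_test, y_train, y_test = train_test_split(
        digits.data, digits.target, random_state=0)

    net = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0)
    net.fit(X_train, y_train)
    print(f"Test accuracy: {net.score(X_test, y_test):.2f}")   # typically in the high 0.9s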

Other important approaches to AI include supervised learning, reinforcement learning and transfer learning:

  • Supervised learning — Algorithms can be trained at their outset in one of two ways: through supervised or unsupervised learning. Supervised learning means that algorithms are fed labelled data, which they draw patterns from to come up with a generalised rule to make sense of future data. Most machine learning and deep learning algorithms are trained using a supervised process. Unsupervised learning is when an algorithm is fed unlabelled data and spots patterns of its own accord. Example uses include population segmentations used by marketing companies, and some cybersecurity software.
  • Reinforcement learning — Whereas some algorithms are written or trained only once, reinforcement learning uses positive feedback mechanisms to continuously tweak and improve algorithms as they are used. Recommendation systems in online retail are an example of reinforcement learning in action. Every time a consumer purchases a product — a book, a record or an item of clothing — an algorithm automatically adjusts to factor in these behaviours when making future recommendations.
  • Transfer learning — Transfer learning involves taking an algorithm that was developed in one domain and modifying it for use in another, without having to start from scratch and source huge reams of original and labelled data. Transfer learning has been used to repurpose algorithms that were originally deployed to read print media to subsequently read text on social media.

To clarify, the above approaches to AI are not necessarily mutually exclusive, and can often be used in combination.
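
To make the supervised/unsupervised contrast above concrete, the sketch below clusters a toy set of unlabelled customer records with k-means, in the spirit of the marketing segmentation example; the data and the choice of two segments are illustrative assumptions.

    # Unsupervised learning in miniature: no labels are supplied, yet the
    # algorithm separates the records into groups of its own accord.
    from sklearn.cluster import KMeans

    # Each row: [annual spend in pounds, store visits per year]
    customers = [[120, 4], [150, 5], [130, 6],
                 [2200, 48], [2500, 52], [2100, 45]]

    segments = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(customers)
    print(segments)   # e.g. [0 0 0 1 1 1]: occasional shoppers vs frequent big spenders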

Introducing robotics

What about robotics? As with artificial intelligence, there is no common definition of a robot, but for the purposes of this report we deem robots to be physical machines that move within an environment with a degree of autonomy.

While tractors, construction diggers and sewing machines have moving parts that complete manual tasks, they require human oversight for long periods (if not continuously) and therefore do not fall under our definition of a robot.

In contrast, picking and packing machines in warehouses and ‘care bots’ that lift and carry patients both fulfil tasks with partial autonomy, and would therefore be classed as robots by our reckoning.

The word ‘robot’ first emerged in a 1921 science fiction play written by Karel Čapek, which told the story of a society that produced artificial humans to be its slaves, only for the robots to overthrow their masters. Robotics remained the preserve of science fiction until the 1950s, when Unimation, the first industrial robotics company, was formed. It invented a ground-breaking 4,000-pound robot arm that could pick up and put down items based on pre-programmed commands, making it ideal for moving heavy and hot items in factories. The Unimate robot had its first outing at General Motors in 1961, where it was used to transport hot pieces of die-cast metal and weld them to car body parts.

Not long afterwards, in 1969, pioneering roboticist Victor Scheinman developed the Stanford Arm, the first electrically powered, articulated robot arm. It was seen as a breakthrough in robotics because it operated on six axes, giving it greater freedom of movement than previous single or double axis machines. The Stanford Arm marked the beginning of the articulated robot revolution, which transformed assembly lines in manufacturing and spurred the launch of several commercial robotics companies, including Kuka and ABB Robotics. Over the years, articulated robots have taken on ever more functions, from welding steel, to assembling cars, to adding finishes to white goods. The International Federation of Robotics puts the current number of industrial robots at 1.6 million globally (note this figure also includes other robot types, such as cartesian robots).

Breaking free of their cages

The world of robotics remained focused on articulated arms for most of the 20th century. Yet just as with the field of AI, the picture began to change at the turn of the millennium. Honda’s ASIMO robot was unveiled in 2000 as one of the first humanoid machines that could walk on two legs, recognise gestures and respond to questions. Three years later, Kiva Systems (now Amazon Robotics) was established to supply mobile robots that could shuttle goods and pallets within complex distribution warehouses. The early 2000s was also the period when autonomous vehicles moved from lab testing to road trials. Particularly symbolic was DARPA’s Grand Challenge of 2004, a first-of-its-kind prize that offered a $1m award to anyone who could navigate a 142-mile course with an autonomous vehicle.

While varying in their functions, size and setting, each of these robots has one characteristic in common: mobility. Whereas the articulated robots of the 20th century were firmly rooted in one place and often enclosed behind screens, the robots of the 21st century have broken free of their cages. One driving factor has been the symbiosis of AI and robotics, with sophisticated software giving physical machines the wherewithal to deal with unanticipated surroundings and events. Reinforcement learning, for example, means that robots can now mimic and learn from human coworkers. Furthermore, storing data in the cloud means robots can share learning and pool experiences with other robots in a network, be they retail humanoid robots such as Pepper or the self-driving cars of Waymo.

Honda’s Asimo robot, one of the first humanoid machines that could walk on two legs. Flickr / Brian Warren Creative Commons

Advances in robotics can also be traced to innovations in hardware. Improvements in sensors are giving robots the visual awareness necessary to navigate unstructured environments. These sensor capabilities have been matched by a rich and growing pool of data on the physical world, including new 3D image datasets such as ScanNet and 3D maps of streetscapes gathered by fleets of cars in real time. Materials science has also come on in leaps and bounds. Better materials such as silicone and spider silk make for sharper looks, while ‘mechanical hairs’ made of piezoelectric transistors are as sensitive as human skin. Added to this are improvements in hydraulic pumps, which offer minimal friction and allow for remarkable levels of control.

The result is that robots are no longer confined to factories but can be seen roaming settings as diverse as hospital wards, shop floors and city streets. Yet even in factories, robots continue to evolve. The latest machines, dubbed ‘co-bots’, are designed to work in tandem with human workers, for example by picking components out of bins, removing defective items from product lines, and fulfilling simple tasks such as screwing, gluing and soldering. They are also extremely simple to re-programme, making them attractive for businesses with smaller batch runs, and have torque sensors that immobilise the machine in the event of human contact. Research by MIT undertaken in partnership with BMW found that robot-human teams were 85 percent more productive than either working alone.

An overview of robotics:

So what is the overall picture in 2017? A look at the landscape of robotics suggests there are five main types of physical machine now in existence:

  • Articulated robots — Stationary robots whose arms have at least three rotary joints, and which are typically found in industrial settings. Co-bots are the latest iteration of articulated robots. Examples include Baxter, a reprogrammable robot that is ‘trained’ simply by moving its arms in the desired motion, rather than via programming.
  • Mobile robots — Wheeled or tracked robots that can shuttle goods and people from one destination to the next. Self-driving cars are the pinnacle of mobile robot capability, while Tesla is planning to launch trials for autonomous long-haul trucks. Mobile robots can undertake more specific functions, including Amazon Robotics’ small orange machines that move pallets in warehouses, and Starship Technologies’ wheeled droids that can deliver parcels in urban areas.
  • Humanoid robots — Robots that have a physical resemblance to humans and which seek to mimic our abilities. Softbank claims its new Pepper robot is the first to be able to recognise human emotions and adapt its behaviour accordingly, while RIKEN’s Robear has been engineered to lift and carry patients in healthcare. Other humanoid robots have taken on therapeutic functions, such as NAO, which uses simple gestures and games to support the development of autistic children.
  • Prosthetic robots — Robots that can be worn or handled to give people greater strength, including disabled people or workers performing hazardous jobs. The HULC is a hydraulic exoskeleton that supports soldiers carrying heavy weights on expeditions. Another exoskeleton, suitX, gives paraplegics the strength to walk. Although these machines may not appear ‘autonomous’ (recalling our earlier definition of robots), under the bonnet many have sophisticated software to sensitively gauge and adjust the level of assistance wearers should receive.
  • Serpentine robots — Snake-like robots made up of multiple segments and joints that can move with extreme dexterity. Because of their ability to traverse difficult terrains and move through confined spaces, serpentine robots have found uses in industrial inspection and search and rescue missions. HiBot USA has developed a pipe inspection robot that can glide through decades-old piping to assess the extent of deterioration and to determine, in concert with AI software, whether a replacement is necessary.

What does the future hold?

It is impossible to predict how these two technologies — artificial intelligence and robotics — will develop over the coming years and decades. Deep learning algorithms may hit an impasse in their capabilities, while humanoid robots could turn out to be a flight of fancy. Some have already suggested that an AI bubble is inflating in Silicon Valley, with machines that are more artificial than intelligent. But what we can say with some certainty is that these technologies will continue to progress in one way or another, as they have done since their genesis in the 1940s and 50s. Several factors lead us to this conclusion:

  • Computing power — Since the 1970s, the number of transistors that can fit into the same space on computer chips has doubled every two years — a rule known as Moore’s Law (a back-of-the-envelope illustration follows this list). As computing power continues to grow, including through the recent introduction of nanometre-scale transistors, it will open up pathways for more sophisticated AI and robotic systems. While there are indications Moore’s Law may be waning, engineers believe considerable computing power gains are still to be made by improving chip design and by creating chips especially for machine and deep learning algorithms.
  • Data capture and storage — Data is the raw material that fuels the engines of AI and robotic systems. Thanks to the advent of the internet, the digitalisation of records and files, and the boom in social media communication, the global pool of available data that machines can train on is colossal. Some 2.5 exabytes of data are produced every day, the equivalent of 530,000,000 million songs or 250,000 US Libraries of Congress. The world’s stock of data is doubling in size every year, partly due to the spread of internet-connected devices. One estimate suggests the number of IP-enabled sensors worldwide will reach 50bn by 2020.
  • Common infrastructure — It was once the case that every research lab and tech company would develop its own proprietary hardware and software. The picture is very different today, with common infrastructure emerging that means robotic and AI technology need not be created from scratch. For example, open source robotic operating systems such as ROS and BrainOS allow developers to experiment with robotics at low cost, bringing down the barriers to entry. Google’s TensorFlow, meanwhile, is an open source library of machine learning code that enables users to easily incorporate AI features like speech recognition and natural language processing into their software programmes.
  • Research investment — A fourth driving factor is the large amount of investment flowing into research and development. In 2015, the U.S. Government’s investment in unclassified R&D in AI-related technologies was approximately $1.1 billion. The EU has set up a public-private partnership to strengthen Europe’s robotics industry, with $700m of public funding. The number of higher education institutions with AI and robotics departments is also expanding. There are now 100 departments in Chinese universities that specialise in automation, while approximately 34 UK universities offer courses in AI. Investment is also very active in the private sector, with as many as 85 AI venture capital funds in operation.
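
As a back-of-the-envelope illustration of the Moore’s Law point above (the start year and the strict two-year doubling are simplifying assumptions):

    # Doubling every two years compounds quickly: from 1971 to 2017 that is
    # 23 doublings, or roughly an eight-million-fold increase in transistor counts.
    start_year, end_year = 1971, 2017
    doublings = (end_year - start_year) // 2
    print(f"{doublings} doublings -> about {2 ** doublings:,}x more transistors per chip")
    # -> 23 doublings -> about 8,388,608x more transistors per chip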

This article has summarised the key developments in artificial intelligence and robotics, and highlighted how these tools are being put to use in fields as diverse as healthcare, finance, hospitality and utility repair. But what impact will these technologies have on workers? Will the likes of self-driving cars and picking and packing robots lead to huge job losses, or are these fears unfounded?

We investigate the ramifications of these technologies in more detail in another article: How will automation change the nature of work?

To find out more about our research, please contact Benedict Dellot

For full references and bibliography please visit the RSA website to download the full report

