Physical Work Disruption

Nathan Leigh
37 min read · Aug 15, 2015


Robots are approaching a technological inflection point that will let them operate more reliably in dynamic, unscripted environments by taking on tasks that once relied on humans’ manual dexterity, eyesight, and thinking. Service robots are opening new contexts for productivity gains beyond what industrial robots have achieved. The capabilities of robots will increase and their prices will drop over the next 20 years, resulting in disruption for many occupations.

A PwC report on service robots found that advancements in three technical domains are addressing the fundamental challenges of robotics:

  • Cognition: The robot’s ability to perceive, understand, plan, and navigate in the real world. Improved cognitive ability means robots can work autonomously in diverse, dynamic, and complex environments.
  • Manipulation: Precise control and dexterity for manipulating objects in the environment. Significant improvement in manipulation means robots can take on a greater diversity of tasks and use cases.
  • Interaction: The robot’s ability to learn from and collaborate with humans. Improved interaction, including support for verbal and nonverbal communication, observing and copying human behavior, and learning from experience, means robots will increasingly be able to work safely alongside humans.

“This opportunity with robots will be like combining the impact of electricity, communications, and the Internet,” predicts Eugene Izhikevich, CEO of Brain Corporation.

A combination of factors — inexpensive computing hardware available in small sizes; the ability to sense, capture, and store large amounts of data; and sophisticated algorithms to process and understand the data in real time — make learning-based approaches a good fit for robots in real-world environments.

Service robots: The next big productivity platform — “Rather than programmers explicitly instructing the robot what to do, the robot will continuously learn during training and working, and it will adapt its behavior in new situations based on past experiences. Autonomous learning implies that the robot will learn what works and what does not, without constant supervision, although the robot occasionally might receive feedback from humans. This ability to learn autonomously from experience, just as humans do, will be a game changer in the dynamic environments associated with robot-provided services.”

In many ways, service robots are the first technology that can apply cognition to physical tasks such as manipulation and movement in varied and diverse domains. This capability opens up a new spectrum of potential productivity advances. Currently, they offer enhanced productivity when used in cages in factories. In the foreseeable future, they will be more prevalent in warehouses, hospitals, hotels, and many new contexts, assuming certain challenges are met.

Service robots are in the early stages of a long development cycle. Incremental, evolutionary gains can be expected in the next three to five years as the field gets past several technological challenges. After that, rapid gains can be expected, especially for service robots in real-world environments, as new robot models take advantage of increasingly powerful and standard modular platforms combined with increasingly powerful autonomous learning capabilities.

Japan’s revised revitalization strategy, the Robot Revolution Initiative, aims to double the use of robotics in manufacturing and increase twenty-fold the use of robotics in other sectors, including service industries. In part this is an effort to deal with the country’s declining birthrate and aging population by providing robotic helpmates in industries such as healthcare and agriculture and in the inspection and repair of the country’s infrastructure.

Cognition

Just as machines replaced much of the muscle work, brain work is starting to face a similar fate from “intelligent” software bots. Cognitive computing like IBM’s Watson is giving computers the ability to “think”, which has the potential to disrupt a wide range of occupations.

Mark Zuckerberg — “I think 10 years from now computers will be better than humans at reading, listening, talking, and other things. So we are developing this.”

Amelia can digest an oil-well centrifugal-pump manual in 30 seconds — and give instructions for repairs, do the job of a call-centre operator, a mortgage or insurance agent, even a medical assistant, with virtually no human help. She speaks more than 20 languages, and her core knowledge of a process needs only to be learned once for her to be able to communicate with customers in their language. During trials, IPsoft found Amelia was able to progress from answering very few queries independently to 64% of queries within two months.

Analyst house Gartner predicts that by 2017, autonomic and cognitive platforms such as Amelia will drive down the cost of services by 60%. Amelia has been trialed within a number of Fortune 1000 companies, working in areas such as IT help desks and financial trading operations support, and it’s in the early stages of being rolled out at various organisations.

Kenneth Brant, research director at Gartner — “Many new combinations of technology — from intelligent software agents, expert systems and virtual reality assistants to software systems embedded in smart products and revolutionary new forms of robotics — will emerge and have great impacts in this decade.

We won’t need to develop a fully functioning artificial brain by 2020 for smart machines to have radically changed our business models, workforce, cost structure and competitiveness. Most business and thought leaders underestimate the potential of smart machines to take over millions of middle-class jobs in the coming decades. Job destruction will happen at a faster pace, with machine-driven job elimination overwhelming the market’s ability to create valuable new ones.”

Frank Lansink, EU CEO, IPsoft — “Enterprises have always looked for ways to reduce costs while maintaining quality of service. In the past, companies turned to offshoring and outsourcing, with varying degrees of success. In many cases quality was sacrificed.

Today, those cost-cutting measures are giving way to the future of the enterprise: robotics. Decades ago, robotics transformed the manufacturing industry, and Artificial Intelligence is poised to make a similar impact on business process outsourcing. The potential is immense. Everything that requires human knowledge work to make a decision is a candidate for autonomics.”

As software gains the capability to perform low-level knowledge work, the same machine learning algorithms can be applied to robots so they can perform more cognitively demanding tasks. UC Berkeley researchers have developed algorithms that enable robots to learn motor tasks through trial and error, using a process that more closely approximates the way humans learn, marking a major milestone in the field of artificial intelligence.

They demonstrated their technique, a type of reinforcement learning, by having a robot complete various tasks — putting a clothes hanger on a rack, assembling a toy plane, screwing a cap on a water bottle, and more — without pre-programmed details about its surroundings.
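
To make the trial-and-error idea concrete, here is a minimal sketch of reinforcement learning on a toy problem. It is an illustration of the principle only, not the Berkeley system: their work trained deep neural-network policies on real robot sensor data, while this sketch uses a tabular Q-learner that must discover, purely from reward feedback, which way to move to reach a goal.

```python
import random

# Minimal tabular Q-learning on a toy 1-D task: the "robot" starts at
# position 0 and must discover, by trial and error alone, that moving
# right reaches the goal. Purely an illustration of the principle; the
# Berkeley work trained deep neural-network policies on real sensor data.

N_STATES = 5              # positions 0..4; state 4 is the goal
ACTIONS = (-1, +1)        # step left or right
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

for episode in range(200):
    s = 0
    while s != N_STATES - 1:
        if random.random() < EPSILON:                      # explore
            a = random.choice(ACTIONS)
        else:                                              # exploit
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        s_next = min(max(s + a, 0), N_STATES - 1)
        reward = 1.0 if s_next == N_STATES - 1 else -0.01  # reward only at goal
        best_next = max(Q[(s_next, act)] for act in ACTIONS)
        Q[(s, a)] += ALPHA * (reward + GAMMA * best_next - Q[(s, a)])
        s = s_next

# Learned policy: move right (+1) from every non-goal state.
print([max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(N_STATES - 1)])
```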

Professor Pieter Abbeel of UC Berkeley’s Department of Electrical Engineering and Computer Sciences says “We still have a long way to go before our robots can learn to clean a house or sort laundry, but our initial results indicate that these kinds of deep learning techniques can have a transformative effect in terms of enabling robots to learn complex tasks entirely from scratch. In the next five to 10 years, we may see significant advances in robot learning capabilities through this line of work.”

“The technology is still providing more accuracy and power in every area of AI where it has been applied. New ideas are needed about how to apply it to language processing, but the still-small field is expanding fast as companies and universities dedicate more people to it; that will accelerate progress,” says Yann LeCun. LeCun guesses that virtual helpers with a mastery of language unprecedented for software will be available in just two to five years. He expects that anyone who doubts deep learning’s ability to master language will be proved wrong even sooner. “There is the same phenomenon that we were observing just before 2012,” he says. “Things are starting to work, but the people doing more classical techniques are not convinced. Within a year or two it will be the end.”

A recent paper from researchers at a Chinese university and Microsoft’s Beijing lab used a version of the vector technique to make software that, for the first time, outperforms the average human at answering the verbal-reasoning questions on IQ tests (which require an understanding of synonyms, antonyms, and analogies).
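
The vector technique represents each word as a point in a high-dimensional space, so analogy questions reduce to vector arithmetic. Below is a toy sketch of that mechanism using hand-made 2-D vectors; real systems learn embeddings with hundreds of dimensions from huge text corpora.

```python
import numpy as np

# Toy illustration of the word-vector technique: analogies become vector
# arithmetic. These hand-made 2-D vectors are stand-ins just to show the
# mechanics; real embeddings are learned from text.
vecs = {
    "king":  np.array([0.9, 0.8]),
    "queen": np.array([0.9, 0.2]),
    "man":   np.array([0.1, 0.8]),
    "woman": np.array([0.1, 0.2]),
}

def analogy(a, b, c):
    """Solve 'a is to b as c is to ?' by vector offset, nearest by cosine."""
    target = vecs[b] - vecs[a] + vecs[c]
    def cos(u, v):
        return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))
    return max((w for w in vecs if w not in (a, b, c)),
               key=lambda w: cos(vecs[w], target))

print(analogy("man", "king", "woman"))   # -> 'queen'
```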

Computers may even begin to gain common sense one day.

Yann LeCun: I think a form of common sense could be acquired through the use of predictive unsupervised learning. For example, I might get the machine to watch lots of videos where objects are being thrown or dropped. The way I would train it would be to show it a piece of video, and then ask it, “What will happen next? What will the scene look like a second from now?” By training the system to predict what the world is going to be like a second, a minute, an hour, or a day from now, you can train it to acquire good representations of the world.

This will allow the machine to know about the constraints of the physical world, such as “Objects thrown in the air tend to fall down after a while,” or “A single object cannot be in two places at the same time,” or “An object is still present while it is occluded by another one.” Knowing the constraints of the world would enable a machine to “fill in the blanks” and predict the state of the world when being told a story containing a series of events.
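
As a heavily simplified sketch of that training objective, the snippet below learns to predict the next state of a falling object from its current state, driven only by prediction error, with no labels. It illustrates the predictive-learning idea rather than LeCun’s actual proposal, which would predict video pixels with a deep network.

```python
import numpy as np

# Heavily simplified predictive unsupervised learning: given the state of
# a "scene" now, predict the state a moment later, and learn from the
# prediction error alone (no labels). The scene here is just a falling
# object's (height, velocity) pair.

rng = np.random.default_rng(0)
G, DT = 9.8, 0.1

def falling_sequence(steps=12):
    h, v, seq = rng.uniform(5.0, 10.0), 0.0, []
    for _ in range(steps):
        seq.append((h, v))
        v -= G * DT
        h = max(h + v * DT, 0.0)   # the ground stops the fall
    return seq

W = rng.normal(scale=0.1, size=(2, 2))   # linear next-state predictor
b = np.zeros(2)
lr = 0.001
for _ in range(3000):
    seq = falling_sequence()
    for x, y in zip(seq[:-1], seq[1:]):
        x, y = np.asarray(x), np.asarray(y)
        err = (W @ x + b) - y            # prediction error drives learning
        W -= lr * np.outer(err, x)
        b -= lr * err

# W approaches [[1, 0.1], [0, 1]] and b approaches [-0.098, -0.98]:
# the model has absorbed a constraint of the world, that dropped objects fall.
print(W.round(2), b.round(2))
```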

Larry Page — And we started just looking at things like YouTube. Can we understand YouTube? But we actually ran machine learning on YouTube and it discovered cats, just by itself. Now, that’s an important concept. And we realized there’s really something here. If we can learn what cats are, that must be really important. So I think DeepMind, what’s really amazing about DeepMind is that it can actually — they’re learning things in this unsupervised way. They started with video games, and really just, maybe I can show the video, just playing video games, and learning how to do that automatically.

The amazing thing about this is this is, I mean, obviously, these are old games, but the system just sees what you see, the pixels, and it has the controls and it has the score, and it’s learned to play all of these games, same program. It’s learned to play all of these games with superhuman performance. We’ve not been able to do things like this with computers before. And maybe I’ll just narrate this one quickly. This is boxing, and it figures out it can sort of pin the opponent down. The computer’s on the left, and it’s just racking up points. So imagine if this kind of intelligence were thrown at your schedule, or your information needs, or things like that. We’re really just at the beginning of that, and that’s what I’m really excited about.

Jeremy Howard in 2014 — “I think that on the software side, in 3 to 5 years we could be at a point where robots could operate in fairly unstructured environments, in a fairly autonomous way — based on the recent progress in machine vision and reinforcement learning. Job sectors relying primarily on perception will be the first and hardest hit, since perception is what computers are most rapidly improving at thanks to deep learning.”

Software is even showing the ability to be creative: IBM’s Watson has written a cookbook full of entirely new food recipes. Researchers at Cambridge have observed the process of evolution by natural selection at work in robots, by constructing a ‘mother’ robot that can design, build and test its own ‘children’, and then use the results to improve the performance of the next generation, without relying on computer simulation or human intervention. “We think of robots as performing repetitive tasks, and they’re typically designed for mass production instead of mass customisation, but we want to see robots that are capable of innovation and creativity.”

A new deep-learning algorithm enables a robot to operate a machine it has never seen before, by consulting the instruction manual and drawing on its experience with other machines that have similar controls. From the database the robot also learns to identify various kinds of controls by their shape rather than their location, and to relate them to the various labels that might be used in the instructions. The researchers have trained and tested their robot with 116 different appliances, including juice makers, lamps, a soda machine and even bathroom sinks.

Actions such as “turning a knob” can be transferred from one kind of knob to another. Robots may also learn to use trial and error with unfamiliar controls and have routines to recover from failure. The action model, Sung said, eventually will be available in the online RoboBrain database Saxena has created for robots everywhere to consult.

Service robots: The next big productivity platform — “Cloud-connected robots are becoming common, which means robots can take advantage of deep learning resources in the cloud, a phenomenon also called cloud robotics. Via the cloud, robots will increasingly share knowledge and experience just as people do. Keeping capabilities resident in the cloud means new robots can be put into service with little effort, thanks to the latest learning from robot training that preceded them.”

What robots learn, experience and master can be added to the cloud, where it can be used by other robots, creating a huge collective brain that continually improves and accumulates robotics knowledge and shares it instantly with all other robots.

Dr. S.K. Gupta — “Perhaps most exciting for the future of cloud computing in robotics is when one robot can impart something it perceives or learns instantaneously to other robots. This sharing could have a catalytic effect on the capabilities of robots, particularly in unstructured environments.”
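
A minimal sketch of that sharing pattern: one robot publishes what it learns to a shared store, and a second robot that never trained on the task fetches and reuses it. The class and method names (SkillCloud, publish, fetch) are hypothetical illustrations, not a real robotics API.

```python
# Minimal sketch of the cloud-robotics idea: each robot uploads what it
# learns to a shared store and can reuse what any other robot has learned.
# All names here are hypothetical, not a real robotics API.

class SkillCloud:
    """Shared skill repository; in practice a cloud database, here a dict."""
    def __init__(self):
        self._skills = {}

    def publish(self, name, params):
        self._skills[name] = params        # latest learning wins, for brevity

    def fetch(self, name):
        return self._skills.get(name)

class Robot:
    def __init__(self, robot_id, cloud):
        self.id, self.cloud = robot_id, cloud
        self.local = {}

    def learn(self, skill, params):
        self.local[skill] = params
        self.cloud.publish(skill, params)  # share the experience instantly

    def perform(self, skill):
        params = self.local.get(skill) or self.cloud.fetch(skill)
        if params is None:
            raise LookupError(f"{self.id}: no robot has learned {skill!r} yet")
        return f"{self.id} performs {skill} with {params}"

cloud = SkillCloud()
a, b = Robot("baxter-01", cloud), Robot("baxter-02", cloud)
a.learn("grasp_mug", {"grip_force": 3.2, "approach_angle": 15})
print(b.perform("grasp_mug"))  # b never trained, but reuses a's learning
```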

On its own, each technology has the capacity to change business activity. Taken together, they have the potential to radically reshape society, businesses, the workforce and the economy. These technologies are likely to significantly boost efficiency while eliminating many historic jobs. Big Data is allowing non-routine tasks to become programmable: when sufficient data and computing power are available, machine learning can be applied, aiding the computerisation of more tasks.

Hod Lipson — “It all boils down to machine learning. Most of the automation will be driven by software that learns from its own experience; as it learns, it gets better. Not just that specific instance of the software gets better, but all instances learn from each other’s experiences. This compounding effect means that there is tremendous leverage.

In a relatively short while, the driverless car’s AI will have accumulated a billion hours of driving experience — more than a thousand human lifetimes. That’s difficult to beat. And it’s the same situation for medical diagnostics, strategic investment, farming, pharmacy. The AI doctor that sees patients will have quickly seen millions of patients and encountered almost all possible types of problems — more than even the most experienced doctor will see in her lifetime.”

Improvements in cheap sensor technology will make the Internet of Things (IoT) abundant in the near future, and “the cloud” will become faster, more reliable and accessible from anywhere by any device. 5G is considered key to the IoT and is predicted to arrive in the US in 2020. Download speeds should increase from today’s 4G peak of 150 Mbps to at least 10 Gbps, and the response time should drop from 15–25 milliseconds to 1 millisecond. The Internet will be ubiquitous.
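
A quick back-of-envelope check on what those peak figures mean in practice, for example for a 1 GB transfer (peak rates only; real-world throughput is lower):

```python
# Back-of-envelope check on the 4G-to-5G figures above: how long does a
# 1 GB transfer take at each peak rate?

def transfer_seconds(size_gigabytes, rate_megabits_per_s):
    bits = size_gigabytes * 8e9            # 1 GB = 8 billion bits
    return bits / (rate_megabits_per_s * 1e6)

print(f"4G (150 Mbps): {transfer_seconds(1, 150):6.1f} s")     # ~53.3 s
print(f"5G (10 Gbps) : {transfer_seconds(1, 10_000):6.1f} s")  # ~0.8 s
```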

The IoT is likely to have an application in, or be used by, every vertical segment in the economy. Companies using IoT technologies increased revenue by 16% in 2014, and 40% of enterprises are growing their services businesses with IoT initiatives. IoT spend is projected to increase by 20% to $103m in 2018.

Analyst firm Gartner predicts the number of networked devices will skyrocket from about 5 billion in 2015 to 25 billion by 2020. All those sensors will be producing mountains of data. This will be a major source of information driving Big Data which can then be analyzed by AI to make data driven decisions to plan, manage, research, model, simulate, predict, optimize and execute business processes automatically.

Cory Doctorow — “General-purpose computers have replaced every other device in our world. There are no airplanes, only computers that fly. There are no cars, only computers we sit in. There are no hearing aids, only computers we put in our ears. There are no 3D printers, only computers that drive peripherals. There are no radios, only computers with fast ADCs and DACs and phased-array antennas.”

Coupled with declining costs and expanding capabilities, billions of sensors will create entirely new opportunities to computerise and optimise routine work, and will help robots understand their environment better and faster. IoT technology will connect and embed intelligence in billions of objects and devices all around the world. This will lead to a digitisation of the physical world, at which point it becomes an exponential technology.

Mark Weiser — “The most profound technologies are those that disappear. They weave themselves into the fabric of everyday life until they are indistinguishable from it.” Computers have already achieved that kind of ubiquity. In the future, robots will, too.

Vision

Visual robotic systems are quite different from old-style auto-making robots. These older systems required the part to be worked upon to be in a precise location at a specific time. The robot was blind and programmed to pick and process. Each step of the pick and process was hand programmed and quite detailed. Newer systems that use cameras and software to identify and locate parts are more flexible and enable product movement from step to step to be less rigid and precise; as a consequence, the movement system is less costly.

Robots have the vision capabilities to one day outperform humans: they will be able to see further and faster and magnify images better than we can. They can use more senses than humans to understand their surroundings in greater detail, and they can even see through certain objects.

Google’s DeepStereo algorithm is able to take two images of a scene and synthesize a third image from a different point of view. Computers will be able to see properties of objects that are undetectable to us. In one TED talk, a camera phone was able to understand an object’s underlying structure from its interaction with air currents in the room, or from sound waves reflecting off it, by analysing the changes in each pixel in slow motion.

Some robots are already using vision to improve their capabilities. If Baxter drops a screw while assembling something, it won’t blindly continue with the task: it will realize it has dropped the screw and pick up another before carrying on. If YuMi picks up a screw but is holding it in the wrong orientation, the 3D sensing and 3D cameras in its hands will notice and reorient the screw.

In image recognition, Microsoft’s vision algorithms excel at “fine-grained recognition,” which relies on category expertise beyond the average person but is trivial given massive computer archives of data. “While humans can easily recognize these objects as a bird, a dog, and a flower, it is nontrivial for most humans to tell their species,” the researchers wrote. Microsoft suggested the deep learning tool could one day be used to give you nutritional information about a meal you’re about to eat, help diagnose skin disease, or distinguish between poisonous and edible plant life on a camping trip.

Fei-Fei Li — So far, we have just taught the computer to see objects. This is like a small child learning to utter a few nouns. It’s an incredible accomplishment, but it’s only the first step. Soon, another developmental milestone will be hit, and children begin to communicate in sentences. So instead of saying this is a cat in the picture, you already heard the little girl telling us this is a cat lying on a bed.

So to teach a computer to see a picture and generate sentences, the marriage between big data and machine learning algorithm has to take another step. Now, the computer has to learn from both pictures as well as natural language sentences generated by humans. Just like the brain integrates vision and language, we developed a model that connects parts of visual things like visual snippets with words and phrases in sentences.

About four months ago, we finally tied all this together and produced one of the first computer vision models that is capable of generating a human-like sentence when it sees a picture for the first time. Little by little, we’re giving sight to the machines. First, we teach them to see. Then, they help us to see better. For the first time, human eyes won’t be the only ones pondering and exploring our world.

Researchers have come up with a new way to teach robots how to use tools to cook simply by watching videos on YouTube. To train their model, researchers selected data from 88 YouTube videos of people cooking. From there, the researchers generated commands that a robot could then execute. “We believe this preliminary integrated system raises hope towards a fully intelligent robot for manipulation tasks that can automatically enrich its own knowledge resource by ‘watching’ recordings from the World Wide Web,” the researchers concluded.

Ian Goodfellow — “I expect within five years, we will have neural networks that can summarize what happens in a video clip, and will be able to generate short videos. Neural networks are already the standard solution to vision tasks. I expect they will become the standard solution to NLP and robotics tasks as well.”

A startup called Clarifai is offering a service that uses deep learning to understand video. The company says its software can rapidly analyze video clips to recognize 10,000 different objects or types of scene. The software can analyze video faster than a human could watch it; in the demonstration, a 3.5-minute clip was processed in just 10 seconds, roughly 20 times faster than real time.

Jeremy Howard — “We are seeing order-of-magnitude improvements (in computer vision) every few months, and similar leaps are starting to appear in computers’ ability to understand written text. In five years’ time, a single computer could be hundreds or thousands of times better at that task than humans; combine it with other computers on a network, and the advantage becomes even more pronounced. Probably in your lifetime, certainly in your kids’ lifetime, computers will be better than humans at all these things, and within five years after that, they will be 1,000 times better.”

Many jobs and tasks that people think can’t be automated or outsourced because they must be performed locally, such as a plumber’s, could soon be disrupted by new technology. New technologies could improve telepresence and let remote people get together in the physical world, with 5G internet, 4K displays and almost instant response times. I discuss remote working technologies further in my article on STEM Disruption.

The new technologies mentioned include augmented reality, which can overlay images on the real world. The first Microsoft demo of the HoloLens actually had someone helping another person fix a sink. An expert can assist you, but perhaps you are missing a tool; in the future you will be able to 3D print it at home, or have it delivered to you by drone or driverless car within the hour.

One day some tasks may not even need a human expert at all: AI may be able to diagnose faults, as in the plumbing example or in the work an electrician does. Could IBM Watson’s ability to outperform even doctors mean that, within 20 years, advancing self-service technologies built on Watson could disrupt many other experts?

An example is NELL, the Never-Ending Language Learning project from Carnegie Mellon University, a computer system that not only reads facts by crawling through hundreds of millions of web pages, but attempts to improve its reading and understanding competence in the process in order to perform better in the future.

“The end goal is literally to get computers to read and write like you and I do. If computers could read, you would tell your personal reading assistant that you want to understand this; it would read the 200,000 documents that are out there on this topic and write for you a summary of the key issues and the arguments on both sides, along with citations back to the source material. It would replace search engines with questions and answers — a digital game changer.

NELL’s buddy is NEIL, the Never Ending Image Learner. That system is like NELL except it looks for objects and the relations among them. A year and a half ago, the teams had the idea to have NEIL and NELL talk. They’re sharing information and trying to teach each other.”

As explained in the cognition section above, Amelia can read handbooks (digesting an oil-well centrifugal-pump manual in 30 seconds and giving repair instructions with virtually no human help). An AI could likewise watch millions of hours of videos of people on the job, or simulate all the possible repairs on a car, electricity box, boiler, etc. AI will soon have ingested all available knowledge of trade domains and have vision capabilities better than humans.

Within 20 years, could more advanced versions of these AIs (think of an advanced Siri combined with an advanced Watson) mean that, instead of calling a plumber or electrician, you just point your phone camera at the problem and describe it out loud? The AI could diagnose the problem and then walk you through the instructions, transferring its millions of hours of accumulated knowledge, to solve the issue without the need for any human.

An advanced assistant could be combined with AR such as Google Glass, showing you in 3D what to turn and what not to touch, highlighting objects, animating the assembly action, showing where to place items, and so on. It can break a complex problem down into simple physical ‘DIY’ actions which anyone can perform.

In 20 years I believe we will be able to say ‘OK Google, why doesn’t my car start?’ (if we still own cars) and get led through a diagnostic process with our Google glasses looking at the car, as it trawls billions of videos of repairs, every handbook ever written, plus every forum post and detail on the web about our car model, all instantly, to find the problem and guide us through the fix in real time, avoiding the need for a mechanic.

These models are obviously not possible for everything, especially for dangerous tasks, and sometimes an item simply can’t be fixed without a local professional or expensive tools. But the inspection, consultation and diagnosis of the fault and fix may drop in cost by outsourcing them to a remote person or an AI, which may disrupt labor opportunities for the local expert.

An analysis from the Committee for Economic Development of Australia warns that more than five million jobs, almost 40% of the workforce, could disappear in the next 10 to 15 years because of technological advancements; it found that technology is set to wipe out 60 per cent of rural jobs.

The Future of Employment study by the Oxford researchers found these trade jobs have an 80% or higher probability of computerisation within the next 20 years:
- Welders, Cutters, Solderers, Brazers.
- Landscaping, Painting, Coating, and Decorating Workers.
- Electrical and Electronics Installers and Repairers.
- Construction Laborers, Roofers, Plasterers, Floor Sanders and Finishers, Carpet Installers.
- Cement Masons, Concrete Finishers, Stonemasons, Brickmasons and Blockmasons.
- Structural Iron and Steel Workers, Jewelers and Precious Stone and Metal Workers.
- Tailors, Dressmakers, Custom Sewers, Barbers, Butchers and Meat Cutters.

Carpenters, Glaziers and Home Appliance Repairers were also listed above 70%.

Scott Phoenix — Since solving captchas, we’ve been working on integrating colour, lighting, texture, motion, motor actions and concept formation. The better we can make AI at perception, the better we will be able to make it at higher-level reasoning, which has applications in a lot of different fields.

We believe that perception is the gateway to higher reasoning. What seems like abstract thought is often stimulated by perceptual ability. Suppose I tell you “John has hammered a nail into the wall,” then ask you “is the nail horizontal or vertical?” It might seem at first glance like a logic problem, but actually your ability to answer it comes from the mental image you’ve imagined of John hammering in the nail.

Manipulation and Interaction

Robots are getting better at manipulating objects in dynamic environments and are getting easier to program. This new generation of “collaborative” robots ushers in an era in which robots come out of their cages and work literally hand-in-hand with human workers, who train them through physical demonstration.

The easy trainability of these compliant robots will be especially useful for small and medium-sized enterprises, which need to toggle between different products and produce in small batches. It saves the time and cost of complex set-ups, which with traditional industrial robots can take weeks or even months, depending on the task.

Baxter lets factory workers guide its arm through a task and program the activity by physical example rather than through code. It can learn — through relatively fast physical demonstration — simple, repetitive tasks like picking parts off a conveyor or packing and unpacking boxes. Other compliant robots that are safe to work alongside humans, such as Sawyer, KUKA, YuMi, UR3 and Justin, have just entered the market too.

Eventually robots will be able to understand and perform verbal instructions as they get better natural language and voice recognition capabilities.

Today robots can explore and build a model of their world, make plans for achieving targeted goals based on that model, and deal with changes and exceptions so as to respond appropriately. Baxter can now be given wheels. “Giving Baxter automated mobility opens up a world of new possibilities for a wide range of applications where mobility and manipulation are required, including service robotics, tele-operated robotics, and human robot interaction.”

Perhaps the most advanced mobile robot is the autonomous car, which I discuss in the article Self Driving Disruption. I also discuss how warehouses and delivery logistics can be automated in How Self Driving Cars Will Disrupt Retail. Flying robots such as Amazon’s Prime Air drones are being developed to deliver items autonomously.

Amazon plans to roll out 10,000 Kiva robots across its network of warehouses by the end of 2015, a move that could realize fulfilment cost savings of up to $900 million — up to 40% savings on cost per order (picking, packing and shipping).

Savings like these are why Amazon is investing in robot technology to perform the picking of items from the shelves.

One can imagine that not long after picking, placing items on shelves will be added to service robots’ repertoire, which could mean disruption for shelf stackers.

The technology is slow today, but expect it to speed up drastically over the next 20 years as computing power increases and computer vision improves exponentially.

Ask someone how long a million seconds is — the answer is 12 days. Ask them how long a billion seconds is and they may well still answer in days; the answer is 32 years. People struggle with large linear numbers, so it’s no surprise that exponential sequences are hard for some to grasp. If I took 30 linear steps, I’d end up 30 paces, or about 30 meters, away. If I took 30 exponential steps (one, two, four, eight, sixteen, thirty-two…) I would end up a billion meters away, which is twenty-six times around the planet.
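
Those figures check out with a few lines of arithmetic:

```python
# Quick check of the linear-vs-exponential arithmetic above.

SECONDS_PER_DAY = 86_400
print(1e6 / SECONDS_PER_DAY)          # ~11.6 days in a million seconds
print(1e9 / SECONDS_PER_DAY / 365)    # ~31.7 years in a billion seconds

EARTH_CIRCUMFERENCE_M = 40_075_000
linear = sum(1 for _ in range(30))            # 30 steps of 1 m -> 30 m
exponential = sum(2**k for k in range(30))    # 1+2+4+... -> 2**30 - 1 m
print(linear, exponential, exponential / EARTH_CIRCUMFERENCE_M)  # ~26.8 laps
```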

If you took Intel’s first generation microchip, the 1971 4004, and the latest chip Intel has on the market today, the fifth-generation Core i5 processor, you can see the power of Moore’s Law at work. Intel’s latest chip offers 3,500 times more performance, is 90,000 times more energy efficient and about 60,000 times lower cost.

Vivek Wadhwa — “Kurzweil, a renowned futurist and the director of engineering at Google, now says that the hardware needed to emulate the human brain may be ready even sooner than he predicted — in around 2020 — using technologies such as graphics processing units (GPUs), which are ideal for brain-software algorithms. He predicts that the complete brain software will take a little longer: until about 2029.

The implications of all this are mind-boggling. Within seven years — about when the iPhone 11 is likely to be released — the smartphones in our pockets will be as computationally intelligent as we are. It doesn't stop there, though. These devices will continue to advance, exponentially, until they exceed the combined intelligence of the human race.”

Marshall Brain — “Scientists and engineers seem to be able to get around the limitations that threaten Moore’s law by developing new technologies. There may come a wall, when 2-D silicon chips as we know them today can become no faster in terms of their clock speed, and transistors can get no smaller. At that point it is likely that 3-dimensional chip stacking will take off, or chips will change their substrate from silicon to graphene, or quantum computing will mature, or completely new silicon architectures will emerge.”

Computer power will become exponentially greater, and AI algorithms fed Big Data from the Internet of Things will evolve from smart assistants like Siri into smart workers and smart bosses, and even smart teachers and doctors, within the next 20 years. I discuss this thoroughly in my article Disruption Of Mental Work.

Scott Phoenix — A human brain, for example, has about a thousand times as many neurons as a frog brain. Whereas it took evolution about 250 million years to achieve a thousand-fold increase in processing power, our computers improve a thousand-fold every 10 years or so.

Below is an example of Baxter’s speed improvement over a single year. Baxter’s performance keeps improving through regular software releases; in 2014 it completed the same task almost 3 times faster than with the 2013 version of the software.

Direct human labor (picking, packing, sorting) remains one of the most expensive cost centers for e-commerce, so the implications of easily automating such processes could be a game-changer for fulfillment centers and third-party logistics companies. Clearpath Robotics wants robots in every warehouse. “The long-term vision,” Drexler says, “is for a manufacturing or fulfillment facility where you can literally shut the lights off because everything is automated.”

John Lizzi — “The new generation of robotics is riding a lot of trends, such as Moore’s Law, so we can put more intelligence on the robot itself. The costs of computation and sensors are coming down. There’s this whole movement around collaborative robotics with robots that are very easily taught, very cheap, and very able to work closely with humans, while employing new technology, such as SLAM (Simultaneous Localization And Mapping) for autonomous vehicles and similar applications. These are coming together and allow us to let the robots out of those cages and work in more dynamic, more unconstrained environments.”

In the future, retail shops will deploy self-service checkouts, fast robot shelf stackers and robot helpers. They will have to if they want to compete with automated factories and delivery, which cut costs so items can be cheaper and conveniently delivered to people on demand. Retail salesperson and cashier are the two most popular occupations in the US, employing almost 8 million people.

Japan will open the world’s first hotel entirely operated by robot staff this summer. Robots will carry luggage, clean rooms and manage reservations. “We will make the most efficient hotel in the world,” Hideo Sawada, company president, said during a press conference. “In the future, we’d like to have more than 90 percent of hotel services operated by robots.”

Team KAIST recently navigated DARPA’s robotics obstacle course in under 45 minutes, successfully completing eight natural-disaster-related tasks including walking over rubble, driving a car, tripping circuit breakers, and turning valves, demonstrating roughly the competence of a two-year-old child. How long until that technological breakthrough and innovation is applied to other tasks? The driverless car completed the DARPA Grand Challenge 10 years ago, an indicator of how fast this technology can improve.

Vivek Wadhwa — “After watching the DARPA Challenge and observing the rapid advances of computing, artificial intelligence, and sensor technologies, I see Rosie being very close to reality. These technologies are all advancing at exponential rates. And exponential technologies can be deceptive. Things move very slowly at first, but then disappointment turns into amazement. That is what I believe will happen with robotics over the next five to 10 years.”

As well as getting faster, robots will get more dexterous. Below is a video of a surgery robot performing autonomously.

This is only the current state; remembering exponential growth, it is not science fiction to believe intricate tasks like surgery, dentistry or hair cutting could one day be performed safely and autonomously.

Jeremy Howard — “In the Industrial Revolution, we saw a step change in capability thanks to engines. The thing is, though, that after a while, things flattened out. There was social disruption, but once engines were used to generate power in all the situations, things really settled down.

The Machine Learning Revolution is going to be very different: it never settles down. The better computers get at intellectual activities, the more they can build better computers to be better at intellectual capabilities, so this is going to be a kind of change that the world has never experienced before, and your previous understanding of what’s possible is different.”

Robot hand technology that can interact with human tools is advancing. NASA has been making great strides in the development of robotic limbs and is about 15 years from creating robotic hands that perform as well as human ones; many other companies and researchers are also working to improve dexterity and allow better manipulation of objects.

Baxter and other compliant robots can be given better hand technology.

Compliant robots can reach places humans and old industry robots can’t.

When robots collaborate with other robots a greater range of tasks can be performed autonomously. Baxter and Sawyer together can address many of the estimated 90 percent of manufacturing tasks that cannot be feasibly automated with traditional solutions today.

The video below shows robots cooperating with one another to deliver beer, using advanced algorithms to handle uncertainty. The key insight was to program the robots to view tasks much as humans do: we don’t have to think about every single footstep we take, because through experience such actions become second nature. With this in mind, the team programmed the robots to perform a series of “macro-actions” that each include multiple steps.

The researchers were ultimately able to develop the first planning approach to demonstrate optimized solutions for all three types of uncertainty: location, status, and behavior.
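
To illustrate the macro-action idea, here is a toy sketch in which the planner reasons over a handful of high-level actions while each one internally expands into many primitive steps. The action names and steps are hypothetical, not the researchers’ code, and the real system plans these macro-actions under uncertainty rather than executing a fixed list.

```python
# Toy sketch of the macro-action idea: the planner chooses among a few
# high-level actions, each of which internally runs many primitive steps.
# Names are illustrative only; the actual research planned macro-actions
# under uncertainty about location, status, and behavior.

PRIMITIVES = {
    "goto_bar":     ["localize", "plan_path", "drive", "drive", "dock"],
    "pick_up_beer": ["detect_bottle", "reach", "grasp", "lift"],
    "deliver":      ["localize", "plan_path", "drive", "handoff"],
}

def run_macro_action(name):
    """Execute one macro-action by running its primitive steps in order."""
    for step in PRIMITIVES[name]:
        print(f"  [{name}] {step}")

plan = ["goto_bar", "pick_up_beer", "deliver"]   # planner works at this level
for macro in plan:
    run_macro_action(macro)
```

Bundling primitives this way shrinks the space the planner must search, which is what makes reasoning under uncertainty tractable.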

When you give two robots the ability to communicate in real-time, the possibilities for their teamwork are vast. Researchers at Carnegie Mellon University have done just that: enabled two types of robots with very different capabilities to collaborate in order to fulfill people’s requests.

Baxter is a stationary robot, equipped with two arms that can delicately manipulate objects, while CoBot has no arms but is adept at navigating indoor spaces and can reliably deliver objects using its front-end basket. The researchers wanted each robot’s strengths to make up for the other’s shortcomings, so they could work together to relieve humans of menial duties such as fetching and delivering objects throughout a building. “Designing every robot in a team to be good at everything is like the saying, ‘Jack of all trades, master of none.’ It ends up being very expensive, and redundant.”

People will grow more comfortable interacting with robots over time as the robots develop emotional capabilities, such as the new Pepper robot.

Computerworld: Businesses can rent a robot for customer care — “Dubbed Pepper for Biz, the business-focused robot is designed to come loaded with applications that allow it to handle office reception tasks and even approach potential customers. One application, for instance, is geared at helping the robot be a salesperson; it starts work as a sales clerk next year at Yamada Denki, a major electronics retailer.

That initial robot received a lot of headlines and attention around the world because SoftBank reported that Pepper can not only read human emotions but will generate its own emotions also. The robot, for example, reportedly can read people’s facial cues and will be happy to be with people it is familiar with and afraid when the lights go out. The robot might sigh when sad or bored, and it will display different colors on a display screen depending on its mood.

Robots in the not-so-distant future could be used in companies to do bigger jobs, like clean, deliver mail and refill printer ink and paper in office copy machines. That kind of robotics work could just be five years away, according to Moorhead.”

Manufacturing and Construction could be disrupted by advanced 3D printers and robotics. 3D printers allow you to print many materials like plastic, metal, wood, concrete, ceramic, carbon fiber, graphene, clothing, food, glass and optics for use in lenses. 3D printers are now being used to build a bridge in Amsterdam.

Chinese companies are using 3D printers to print homes; they waste less material, and the components can be printed in a factory and quickly assembled on site. The approach may have growing pains and bugs initially, but the technology will be refined to build in electronics and plumbing, and to offer better energy efficiency, faster build times, and greater strength and durability than traditional building methods.

Other companies are working on automating parts of construction. There is a robot bricklayer called Hadrian that can build a whole house in two days; by comparison, human house builders have to work for four to six weeks to put a house together.

Imagine designing your actual house as you do in The Sims (the doors, windows, floors, paint colours, etc.), then having a 3D printer print it overnight, driverless trucks transport it to you, and a Hadrian-type robot assemble it the next day.

Service robots: The next big productivity platform — “Innovative new capabilities in robot cognition, vision and the physical manipulation of objects, and interaction with humans — delivered in loosely coupled, modular packages — define the dawn of an emerging market in service robots.”

Price and Growth

The price of robots is dropping, which is why spending on robots worldwide is expected to jump from just over $15 billion in 2010 to about $67 billion by 2025. Robots presently perform only 10% of manufacturing tasks, but that share is expected to rise to 25% by 2025 as robots become more affordable and able to perform more tasks. Within five to 10 years, the business case for robots in most industries will be compelling, even for many small and midsized manufacturers.

McKinsey Institute — Advanced robotics could generate a potential economic impact of between $1.9tn and $6.4tn per year by 2025.

The Financial Times reported analysts’ forecasts that the payback period for industrial robots with a life cycle of 10 years, which stood at 11.8 years in 2008, will likely shorten to 1.3 years in 2016.

The total cost of owning and operating an advanced robotic spot welder has plunged 27% from an average of $182,000 in 2005 to $133,000 in 2014, the price is forecast to drop by a further 22% by 2025. At some factories, robots are even building other robots, producing about 50 robots per 24-hour shift and operating unsupervised for as long as 30 days at a time.
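
The implied rates behind those spot-welder figures are easy to work out:

```python
# Implied rates behind the spot-welder figures above: $182,000 (2005) to
# $133,000 (2014), with a further 22% drop forecast by 2025.

start, end, years = 182_000, 133_000, 9
annual_decline = 1 - (end / start) ** (1 / years)
print(f"total drop 2005-2014: {1 - end / start:.0%}")     # ~27%
print(f"implied annual drop:  {annual_decline:.1%}")      # ~3.4%
print(f"forecast 2025 cost:   ${end * (1 - 0.22):,.0f}")  # ~$103,740
```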

Robots are getting so cheap they are replacing Chinese workers. Chairman Terry Gou has said Foxconn Technology Group will be able to replace 30% of its production-line workers with robots within five years. Guangdong authorities said in March that they will invest $152 billion to replace humans with robots, aiming for “zero-labor” factories within three years. The local government will push for the application of robots in 1,950 companies across the province and plans to build two advanced industrial bases for robot production by the end of 2017.

The Changying Precision Technology Company in Dongguan has replaced some 600 human assembly line workers with 60 robots, resulting in a fivefold reduction in manufacturing errors and an increase in production of over 250 percent. The city of Dongguan plans to finish 1,000 to 1,500 “Robot Replace Human” programs by 2016, which would vastly increase production and improve quality while putting nearly a million people out of work.

According to a recent Wintergreen Research report, the worldwide industrial robot market will grow at 11.5% annually and reach $48.9B by 2021. But another area of robotics is growing at a much faster rate: service robots, i.e., non-manufacturing robots for professional use, for defense, in the field, for logistics, medical, personal, entertainment and household use, are becoming a booming segment of the global robotics industry with a 20% or higher year-over-year growth rate.

According to a new study published by ABI Research, Collaborative Robotics: State of the Market / State of the Art, the collaborative robotics sector is expected to grow roughly tenfold between 2015 and 2020, from approximately $95M to over $1 billion. At present, collaborative robots represent 5% of the overall robot market, but with strong growth expectations for the future. Insiders suggest more rapid growth: that collaborative lightweight robots will become the industry’s top seller in about 2 years, selling hundreds of thousands of units, with prices falling to the $10,000 price point.

Vivek Wadhwa — “Amazing progress is being made in the underlying hardware and software. In part, that’s because costs have plunged. The single-axis controller, a core part of most robots’ inner workings, has plunged in price from $1,000 to $10. And according to Rob Nail, the chief executive of Singularity University, the price of critical sensors for navigation and obstacle avoidance has fallen from $5,000 to less than $100.”

The BLS predicts services will make up 80.9% of the labour force in 2022. Humans now have less to offer in making goods compared to machines; whenever you manufacture something, you ask whether a machine can do it better or cheaper. Could the same happen to providing services? There is no law of economics that says a product or service must require human labor.

Jeremy Howard — It seems to me that humans can provide three basic inputs into processes: energy, perception, and judgement. The Industrial Revolution removed the need for humans to provide energy inputs into processes, and therefore in the developed world today nearly all jobs are in the service sector. Computers are now approaching or passing human capability in perception, and in the areas where the bulk of the employment exists they also surpass human capabilities in judgement (because computers can process more data in a less biased way). So if economists are going to claim that there will be new industries which require human input, I think they need to explain exactly what the human will be doing, and why the computer would not be able to do it at least as well.

It is in the interest of every capitalist company to employ the minimum number of workers, pay them the minimum salary, and achieve the highest productivity when manufacturing a product. As robots gained better capabilities and became cheaper than humans, they entered the factory floor; the same fate awaits many service jobs as service robots and software fall in price and gain capability.

Jeff Burnstein, President, Robotic Industries Association — “You have to use technology in any way you can to stay competitive, to reduce overall cost, to increase quality and productivity. There are companies that will tell you that the only reason they came back to the US is because of the automation. And I think this is a trend we’ll see persist in the future.”

We are looking at the possibility of vastly different models of employment and a reduced need for roles in popular occupations. Baxter has a base price of $25,000. When a company has to choose between hiring new staff (upfront cost plus wages) and buying a smart machine (upfront cost plus electricity), which will make more financial sense 20 years from now?
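
A rough break-even sketch for that question is below. All figures except Baxter’s $25,000 base price are assumptions chosen for illustration, not data from the article:

```python
# Rough break-even sketch. All figures except Baxter's $25,000 base price
# are assumptions for illustration.

ROBOT_PRICE = 25_000          # Baxter base price (from the article)
ROBOT_POWER_KW = 0.4          # assumed average power draw
ELECTRICITY_PER_KWH = 0.12    # assumed electricity rate, $/kWh
ROBOT_HOURS_PER_YEAR = 24 * 365          # no breaks, holidays, or shifts

WORKER_WAGE = 12.0            # assumed $/hour; fully loaded cost is higher
WORKER_HOURS_PER_YEAR = 2_000

robot_annual = ROBOT_POWER_KW * ELECTRICITY_PER_KWH * ROBOT_HOURS_PER_YEAR
worker_annual = WORKER_WAGE * WORKER_HOURS_PER_YEAR

# Years until the robot's upfront price is recovered from labor savings:
payback_years = ROBOT_PRICE / (worker_annual - robot_annual)
print(f"robot energy cost/yr: ${robot_annual:,.0f}")   # ~$420
print(f"worker wages/yr:      ${worker_annual:,.0f}")  # $24,000
print(f"payback: {payback_years:.1f} years")           # ~1.1 years
```

Under these assumed numbers the robot pays for itself in about a year, broadly in line with the payback periods quoted above.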

In 20 years, robots will have ubiquitous 5G access to a cloud of robotic knowledge, with billions, maybe trillions, of sensors providing data about the world that exponentially increased computing power will let machines process. This will enable a much more hyperconnected and powerful AI spawn of Siri and Watson, with expert knowledge of all domains and of how to interact with the environment and objects in the world. That cognitive power, combined with a cheaper, more advanced version of a mobile Baxter with better-than-human dexterity, speed and natural language understanding, will accomplish complex tasks simply by being given spoken instructions.

This will impact many people as they find they cannot compete with machines that work 24/7, every day of the year, with no breaks, holidays, sick days, lateness, distractions, tiredness, injuries, hangovers, complaints, strikes, wage rises, overtime pay, training or retraining, compensation, insurance, severance pay, contract negotiations, bonuses, maternity leave or lawsuits. They require no pensions, lighting, heat, air conditioning or supervision, and they demand only a ‘salary’ equal to the cost of energy.

Instead of outsourcing, companies will look to ‘botsourcing’, with machines that always follow procedure, are reliable and consistent, and won’t procrastinate, talk back or unionise. Machines behave rationally without emotion, eliminating human error and bias; they won’t commit occupational fraud or disclose proprietary information; they can be improved easily with new software updates; they require less floor space (no staff rooms, toilets or kitchen facilities); and they keep getting cheaper, smaller, smarter and more energy efficient, doubling in computing power every two years. Humans need not apply.

Conclusion

In the very distant future, you can imagine robots and technology being so advanced and so productive that they create abundance and allow all our needs to be taken care of. Automated mining could let machines extract all the resources we need, and asteroid mining may eventually provide an abundance of raw materials.

One mining company is already working to automate its drilling and crushing, as well as the dozens of mile-long trains that ship nearly a million tonnes of iron ore to the coast each day. The materials and minerals collected can be shipped by autonomous trucks to autonomous ports, to reach manufacturers who use 3D printers or robots to produce the goods we need.

Those machines could make other 3D printers and robots, or create solar panels, storage batteries, wind turbines, nuclear fusion reactors and so on, making the process of extraction and manufacturing self-sustaining and creating an abundance of energy and products.

With abundant cheap energy, fresh water can be desalinated from the sea, and agriculture can be done indoors with robots planting and harvesting and autonomous cars delivering the food. This could become so efficient that it could even be performed in homes. The world’s largest indoor farm — 25,000 square feet of futuristic garden beds nurtured by 17,500 LED lights in a bacteria-free, pesticide-free environment — produces about 10,000 heads of fresh lettuce each day. The unique “plant factory” is so efficient that it cuts food waste from the 30 to 40 percent typically seen for lettuce grown outdoors to less than 3 percent for its coreless lettuce.

In 2 years the cost of a lab-grown burger has dropped from $325,000 to $11.36; if the trend continues, it could eventually be cheaper, tastier and healthier (no saturated fat, no heme iron, no growth hormones). Cultured meat has many potential benefits, including an end to the slaughtering of animals, and even milk could one day be created in labs.

AI will become the best doctor and teacher; robots will become the best surgeons and cooks, and possibly nurses, one day, creating an abundance of services. The machines will begin to race ahead of what our, for now, limited skills and capabilities have to offer in a labor-driven economy. Baxter version 11.0 might be so cheap yet powerful that it can work in your home environment, cooking your food, cleaning your house, making your bed and so on.

We can foresee a huge disruption of human roles in the energy, manufacturing, agriculture, mining and service industries, and a real possibility of technological unemployment.

How do we transition to this era of abundance smoothly and distribute the wealth this productivity creates?

The Government needs to incentivize worker self-directed enterprises (worker co-ops) and encourage a growth in unions (especially for self-employed workers) to bring democracy to the workplace. This will reduce income and wealth inequality and raise working people’s wages. It will also keep jobs from being outsourced, reduce climate change, and force companies that decide to automate roles to invest in their workers and allow them time to transition to new jobs or skills.

A lower work week and increased vacation time (achievable once people’s incomes have been raised by worker co-ops and unions) would spread a decreasing number of jobs more equally, increasing economic productivity and preventing social division between people who work and those who don’t or can’t. The Government can also top up low wages by raising taxes on the super rich and preventing tax evasion. This will give us more leisure time to enjoy the wealth and prosperity this technology should rightfully be bringing us.

One solution discussed in The Disruption of Mental Work is improving education so that people can gain higher skills at any age, quickly and cheaply. The last is a Guaranteed Basic Income, needed as a safety net for people disrupted and unemployed, so they can retrain for more complex jobs without the fear of homelessness, child poverty or going hungry. I discuss the solutions in more detail on my website.

Erik Brynjolfsson — “It’s not a matter of slowing down the technology, it’s a matter of speeding up our response to it.”
