Artificial Intelligence Intersects with Robotics
In his twelfth post in the series, Marshall Kirkpatrick focuses on the intersection between artificial intelligence and robotics. By way of reminder, Marshall launched a 30-day series that explores the intersection between AI and the various innovation components on my emerging futures visual.
As he has in each post, Marshall identifies the key subject matter experts who sit at the intersection of AI and the visual component in question. In the case of robotics, the key influencers are: Miles Brundage, Will Knight, Josh Bongard, Sarah Chan, Beth Singler, Youmi Sa, Sabine Hauert, and Camilo F. Here are the foresight and related future scenarios identified at the intersection of artificial intelligence and robotics (taken straight from Marshall’s post):
Robots teach each other: Sometimes referred to as “cloud robotics,” networks of robots are already teaching one another about what they learn as they interact with the world. This co-evolution could occur rapidly and enable robots to quickly become even more physically and mentally capable of engaging with the world than any single human being.
Robots using their imagination: Robots are using deep learning, a leading AI sub-discipline, to learn how to do new things without being taught. Like human imagination, this paradigm combines known data and random input into experiments that the system learns from. This video shows a robot trying to move its torso under the control of multiple neural networks, something it practiced visualizing first — evidence that robots may learn how to do things humans have never thought of.
Robots that kill: Should nation states deploy lethal autonomous weapon systems? That’s an active debate in the United Nations specifically and the international community in general. Berkeley computer science professor Stuart Russell said in Nature last year:
“One can expect platforms deployed in the millions, the agility and lethality of which will leave humans utterly defenseless. The capabilities of autonomous weapons will be limited more by the laws of physics — for example, by constraints on range, speed and payload — than by any deficiencies in the AI systems that control them. For instance, as flying robots become smaller, their maneuverability increases and their ability to be targeted decreases. They have a shorter range, yet they must be large enough to carry a lethal payload — perhaps a one-gram shaped charge to puncture the human cranium.”
He calls on academics to take action: “This is not a desirable future.”
That last point on a desirable future speaks to the growing dialogue around ethics. I focused on the topic in this recent post, which explored Ray Kurzweil’s prediction of non-biological thinking on the horizon. I closed that post with a poll, and the results (see visual below) surprised me a great deal. Please add your voice and take the poll.
The intersection analysis that Marshall pursues via his posts is a great example of deriving the foresight required to navigate this emerging future. Future thinking — the rehearsal of our emerging future — is an increasingly critical but complex piece of the equation going forward. The other posts in the series on AI and its intersections can be found via the links below:
Originally published at frankdiana.net on August 22, 2016.