AI and Robotics — the Perfect Complement

Humans For AI
5 min read · Jun 12, 2017


Written by Karin Hollerbach

Robotics and Artificial Intelligence (AI) have been closely linked almost since the beginning of robotics, even though the word “robot” is said to have been coined as early as 1920 by the Czech playwright Karel Čapek, and the term “robotics” was likely first used by the science fiction writer Isaac Asimov, both clearly pre-dating automation and computation as we know them today.

I tend to think of robotics as being in some sense a physical manifestation of AI, i.e., where AI meets the real world and “does” something (physical). In more familiar terms, AI represents the brain (together with the rest of the nervous system) and the robot represents the body. Both complement each other’s functions.

Human brain — body analogy

In a very simplistic model of the human system, our brains take input data from our various senses (what we see, hear, feel, taste, etc.) and decide what to do, whether that is a lower-level function such as breathing or a higher-level activity such as driving a car (at least until AI gives us self-driving cars, at which point we can consider the car itself to be a robot!). Similarly, you might think of a robot’s brain (the AI) as taking in data, perhaps from position sensors on a moving robot limb, or from environmental sensors such as temperature or humidity sensors, or from any of a myriad other types, and calculating how the robot (the body) should then behave, i.e., what it does in physical terms.

Early robots — artificial, but not very intelligent

Early versions of robots had very simple control systems. For example, a simplistic feedback controller might hold a measured variable near a target value. If the variable drifts too far (say, a position value moves out of its target range), the robot’s controller (brain) tells it to do something that reduces the error again (e.g., move back toward the target position). This can hardly be called “intelligence,” although it is a primitive form of the same concept: data input causes some calculation to occur, which then determines the action of the robot, which in turn creates some interaction with the environment and generates new data to be considered.
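That kind of early feedback loop can be sketched in a few lines. This is a minimal, illustrative proportional controller; the gain, positions, and function names are all invented for the example, not taken from any real robot:

```python
# Minimal proportional (P) feedback controller sketch.
# All names and values are illustrative, not from any real robot API.

def p_controller(target: float, measured: float, gain: float = 0.5) -> float:
    """Return a corrective command proportional to the current error."""
    error = target - measured
    return gain * error

# Simulate a robot limb that has drifted away from its target position.
position = 2.0   # current (off-target) position
target = 0.0
for _ in range(20):
    command = p_controller(target, position)
    position += command   # the actuator nudges the limb
print(round(position, 3))  # the limb has converged back toward the target
```

Each pass through the loop is exactly the cycle described above: sense the variable, compute a correction, act, and generate a new measurement for the next cycle.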

Increasing sophistication of a robot’s AI

Today’s robots have vastly more intelligent control mechanisms: many take in and process enormous amounts of data. This is driven by revolutions on several fronts. In data generation: think of all the IoT sensors already out in the environment, collectively producing massive data streams. In data analysis: thanks both to increased computational power and to increasingly sophisticated analytical tools and algorithms. And in processing speed: for a given amount of data processing and “thinking” to be done, faster is not only, well, faster (and who doesn’t want that?) but typically also more stable from a robotic control point of view.

With all this increased intelligence, today’s robots can crawl, walk, run, swim, and fly. They can navigate in changing and complex environments. They can go places that are far too remote (like Mars) or too dangerous (like areas of high radioactivity) for humans and can help us explore them. They can perform acts requiring a great deal of strength — and even interact directly with humans to augment our own power — or a great deal of finesse to handle fragile things without breaking them. All of this requires fairly complex intelligence.

Multi-level control and cooperating robots

To extend the human brain / body analogy further: the human nervous system does not consist only of a single centralized intelligence (the brain) controlling all of the actuators (muscles) in its robot (the body). Nervous system activity is also distributed throughout the body, contributing at a subconscious level to the calculations that control it. In other words, we have a degree of decentralized, even multi-level (hierarchical) control: the brain acts as the centralized computer governing the whole system, while decentralized micro-computers distributed through different parts of the body control their own localized sub-portions, yet remain governed by the brain as well.

This is very similar to hybrid control mechanisms relying on AI, which combine a centralized AI controller with localized computational control. A single robot, for example, may have one central controller that “sees” the whole robot, plus localized controllers that each act on, say, a single limb or a single motor driving one moving mechanism in the robot.
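A toy sketch of that two-level arrangement, assuming a hypothetical central planner that only assigns per-joint targets while each joint runs its own local feedback loop (the class names, gains, and joint model are all invented for illustration):

```python
# Hypothetical two-level control: a central "brain" sets high-level targets,
# while each local joint controller runs its own feedback loop.

class JointController:
    """Localized controller that only knows about its own joint."""
    def __init__(self, position: float = 0.0, gain: float = 0.5):
        self.position = position
        self.gain = gain
        self.target = position

    def step(self):
        # Local feedback: close a fraction of the remaining error each tick.
        self.position += self.gain * (self.target - self.position)

class CentralController:
    """Centralized controller that "sees" the whole robot."""
    def __init__(self, joints):
        self.joints = joints

    def plan(self, pose):
        # The brain only assigns targets; it never micromanages motion.
        for joint, target in zip(self.joints, pose):
            joint.target = target

joints = [JointController() for _ in range(3)]
brain = CentralController(joints)
brain.plan([1.0, -0.5, 0.25])   # desired arm pose
for _ in range(30):             # local loops run many ticks per plan
    for j in joints:
        j.step()
print([round(j.position, 2) for j in joints])  # joints reach their targets
```

The design point mirrors the body analogy: the central layer decides *what* to do, the local layers decide *how*, and each can run at its own rate.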

Even more interesting from an AI perspective is when this decentralized model is applied to many robots that collaborate: there may be a swarm of robots that together move through a hostile or changing environment, or that must actively cooperate to accomplish a task. We might think of the analogy as several humans, each of whom knows something but not the entire picture, having to accomplish something together.
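One classic, minimal form of this idea is averaging consensus: each robot repeatedly blends its own estimate with its neighbors’, and the whole swarm converges on a shared answer with no central coordinator. The ring topology and measurement values below are illustrative assumptions, not a real swarm protocol:

```python
# Decentralized consensus sketch: each robot averages its estimate with its
# two ring neighbors only -- no robot ever sees the whole swarm.

def consensus_step(values):
    n = len(values)
    return [
        (values[i - 1] + values[i] + values[(i + 1) % n]) / 3.0
        for i in range(n)
    ]

estimates = [0.0, 4.0, 8.0, 12.0]   # each robot's local measurement
for _ in range(50):
    estimates = consensus_step(estimates)
print([round(e, 2) for e in estimates])  # all converge to the shared average
```

Like the human team in the analogy, no individual knows the full picture at the start, yet purely local exchanges produce a collective result.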

In humans, we might call this “teamwork” and not think of it as being terribly remarkable, although it may be somewhat more impressive when extended into areas that require learning new insights and mastering new skills. In AI, this represents substantial advances in sensing, communications, learning, and centralized and/or decentralized computing of desired outcomes as well as algorithms for achieving them.

Where is all this headed?

Why do we care that robots can now collaborate in complicated real-world environments, when this is a common, everyday task that people perform, often without thinking much about it? Consider what happens when we bring together individual AIs (such as IBM’s Watson) and robotic “bodies” for them to move around in the world; have them learn from and collaborate with each other; and finally have them collaborate interactively with humans in the loop, all while learning about their changing environments, controlled variably by centralized AI, by decentralized/localized AI, and by human intelligence. Combine that with sophisticated advances in materials (to create new sensors or new types of robot bodies) and in control algorithms (that can control, for example, soft robot bodies, not just the more familiar rigid structures), and you have limitless possibilities for how AI-driven robots can enhance our world in very real ways.

Even more general than “teamwork” or “collaboration” is the idea of being in relationship with one another. Again, this is something we humans are used to doing on a daily basis. Thanks to advances in AI, we have seen a proliferation of robots beginning to behave as if they were relating to humans, or at least, humans are starting to form attachments to robots (AIs on smartphones, such as Siri, or actual brain-plus-body robots, such as Cozmo). Are robots actually forming relationships with us? Maybe not on their end… even if it can feel that way to us.

I have always thought that the most interesting things occur in diverse, interdisciplinary environments. AI and robotics are now at a point where they can truly benefit from advances on both halves of that body-brain / physical-virtual system. In the near future, I believe we are going to see enormous transformations in our interactive, robotic-AI-human-physical-virtual world.

References:

https://mars.nasa.gov/mer/home/

https://groups.csail.mit.edu/drl/wiki/index.php?title=Soft_Robotics

http://www.cscjournals.org/manuscript/Journals/IJE/Volume7/Issue2/IJE-444.pdf

https://www.technologyreview.com/s/527336/do-we-need-asimovs-laws/

https://www.theverge.com/2016/10/14/13276752/anki-cozmo-review-ai-robot-toy

About the Author:

Karin volunteers with Humans For AI, a non-profit focused on leveraging AI technologies to build a more diverse workforce for the future. Learn more about us and join us as we embark on this journey to make a difference!
