Autonomy vs Intelligence in Robotics
Re-thinking robot design: don’t restrict your robot worldview to robot (a) does task (b).
In this post, I address two questions:
- Is autonomy intelligence? Is one a prerequisite for the other?
- How do we train agents to act in ways that align with human goals? What limits does the standard model place on automation?
I am intentionally avoiding a couple of hot topics in reinforcement learning that pertain to generalization and task transfer (such as meta-learning and other generalizable methods). I think most generalization to date is proportional to data-distribution coverage across multiple tasks, not the ability of the same hyperparameters to solve numerically distinct tasks.
The good news: I think expanding our robot worldview can lead to robots that can generalize and help humans all the same.
Hardware-Task Relationship:
To start, consider this video. In it, you’ll see an agent accomplishing a task: swimming upstream.
Now, I know some of you didn’t click the link, and that’s okay, but the catch is that it is a dead fish swimming upstream. Nature has figured out some things about design that humans creating products may never get to tap into: passive design.
Robots are designed with a task in mind, and this task can limit them. I start with hardware because