Let robots learn like a child
Let’s talk e-commerce first. We are accustomed to thinking of e-commerce as something that appeared at the dawn of the Internet, experienced rapid growth with the advent of Web 2.0, and became firmly established in our lives. The big players like Amazon, Alibaba, and JD seem to have eaten the e-commerce market, and innovations look rare or redundant. What surprised me is that e-commerce is still only a small fraction of retail spending: about 10% worldwide, with a projected 2% year-over-year growth over the next decade. We will see many untouched areas of retail drastically transformed by new technologies.
Sure enough, there is always a catch. As Benedict Evans emphasized in his talk at the Andreessen Horowitz Summit last November, the era of simple tools for low-touch goods has passed. Clear examples include airline ticket aggregators and restaurant review sites. Now we face the need for full-stack solutions, where information is used as a system rather than for arbitrage. Such solutions require far more investment than the services developed over the last 20 years. Another good example from Benedict’s talk is Yelp vs. DoorDash: we used to read restaurant reviews, and now you can get a hot meal delivered to your door in less than 30 minutes. The same goes for apartment listings: now, with Opendoor, you can buy a home.
The rapid growth of e-commerce caused a surge in demand for delivery, particularly its most expensive part: the point at which the package arrives at the buyer’s door, so-called last-mile delivery. The Internet has made us accustomed to instant services. We are keen on two-day delivery, same-day delivery, and in some cases instant delivery. However, according to a McKinsey study, we are not ready to pay a premium for FMCG and restaurant delivery.
Many startups and incumbents are aware of this opportunity and are trying to respond to these market needs with autonomous ground robots. A tsunami of great promise and huge investment in automotive self-driving was followed by a wave of interest in smaller unmanned ground vehicles.
If you follow the AGV/UGV startup life cycle, you can see a pattern: they focus mainly on hardware, mechanics, and sensors. But delivery is not a luxury industry where you can charge a premium for expensive sensors, and attempts to cut sensor costs only make the autonomous navigation task much harder with a classical approach.
For the driving policy, AGVs invariably use a classical computer-vision approach inherited from automotive self-driving. In fact, most of them make poor use of machine learning, and only for the perception part. The classical approach to autonomous driving has produced numerous limited pilots in geo-fenced areas, but after 15 years of development, with $5bn spent annually, there are still no fully autonomous commercial vehicles.
It is enough to look at the complexity of rule-based ADAS (advanced driver-assistance systems) to understand that intelligent behavior cannot be programmed explicitly.
It gets even worse in pedestrian zones devoid of structure. As the founders themselves admit, unlike a road, a sidewalk is full of people and obstacles. It’s chaos.
But wait! Why do we need more sensors and rules instead of better brains? A person copes with the task of driving a car with only stereo vision and a vestibular apparatus.
Try to remember how you first acquired your driving or riding skills. Do you remember how your father or mother taught you to ride a bike? They certainly did not hand you a notebook of analytical formulas for balance and trip planning. You certainly didn’t draw bounding boxes around objects to find your way through the crowd.
You simply looked around and learned by trial and error. I think everyone remembers scraped knees and bruises. At least I remember a little.
We created our system in a similar way, by allowing the robot to learn like a child. We placed a complex deep neural network at its core and used only low-res cameras as cheap sensors. This approach requires a huge amount of data, and cameras provide it well.
In the first stage, sequences of images and control signals are fed as input to the neural network. With the help of a couple of modern machine learning techniques, this approach alone can achieve good results, as reflected in the work of Nvidia and Intel Labs.
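This first stage is, in essence, supervised imitation learning: predict the recorded control signals from the camera frames. A minimal sketch of the idea, using a toy NumPy network on random stand-in data (all shapes, sizes, and hyperparameters here are illustrative assumptions, not our actual model):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for logged demonstrations: flattened low-res frames -> controls.
X = rng.random((64, 48))   # 64 frames, 48 "pixel" features each
Y = rng.random((64, 2))    # recorded [steering, throttle] for each frame

# One-hidden-layer network (a toy stand-in for the deep network).
W1 = rng.normal(0, 0.1, (48, 32)); b1 = np.zeros(32)
W2 = rng.normal(0, 0.1, (32, 2));  b2 = np.zeros(2)

def forward(x):
    h = np.tanh(x @ W1 + b1)      # hidden activations
    return h, h @ W2 + b2         # predicted controls

losses, lr = [], 0.05
for _ in range(200):              # gradient descent on the MSE loss
    h, pred = forward(X)
    err = pred - Y
    losses.append(float((err ** 2).mean()))
    gW2 = h.T @ err / len(X); gb2 = err.mean(axis=0)
    gh = err @ W2.T * (1 - h ** 2)        # backprop through tanh
    gW1 = X.T @ gh / len(X); gb1 = gh.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

print(losses[0], "->", losses[-1])        # loss falls as imitation improves
```

The real system trains a much deeper network on sequences of real frames, but the training signal is the same: make the network’s controls match the recorded ones.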
In the second stage, we use reinforcement learning: we train the robot without direct supervision, and it learns by trial and error. For training, we used both the real environment and the CARLA simulator.
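To make the trial-and-error idea concrete, here is the simplest possible reinforcement-learning sketch: tabular Q-learning on a toy one-dimensional “corridor.” This is only an illustration of the learning loop, not our actual CARLA setup or algorithm:

```python
import random

random.seed(0)

# Toy stand-in for a simulator: a corridor of 6 cells; the agent starts
# at cell 0 and is rewarded only for reaching the goal cell.
N_STATES, GOAL = 6, 5
ACTIONS = (-1, +1)              # step left / step right

def step(state, action):
    nxt = min(max(state + action, 0), N_STATES - 1)
    return nxt, (1.0 if nxt == GOAL else 0.0), nxt == GOAL

def greedy(q_row):
    # Break ties randomly so the untrained agent still explores.
    if q_row[0] == q_row[1]:
        return random.randrange(2)
    return 0 if q_row[0] > q_row[1] else 1

Q = [[0.0, 0.0] for _ in range(N_STATES)]
alpha, gamma, eps = 0.5, 0.9, 0.2

for _ in range(500):            # episodes of trial and error
    s = 0
    for _ in range(50):         # step budget per episode
        a = random.randrange(2) if random.random() < eps else greedy(Q[s])
        s2, r, done = step(s, ACTIONS[a])
        # Q-learning update: move the estimate toward reward + bootstrapped value.
        Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
        s = s2
        if done:
            break

# After training, the greedy policy walks straight toward the goal
# (1 = "move right" in every non-goal cell).
policy = [1 if Q[s][1] > Q[s][0] else 0 for s in range(GOAL)]
print(policy)
```

Nothing tells the agent how to reach the goal; it discovers the behavior purely from sparse reward, which is exactly the property we rely on when the robot trains in the simulator and on real streets.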
Combining the two approaches allowed us to achieve impressive results with low-res cameras alone.
This proof of concept shows the applicability of our approach to autonomous vehicle navigation in unmapped areas.
One of the amusing findings was that the robot not only learns like a human but also fails in the same manner. In the GIF, you can see the robot bump into a glass door during a test. In our office, there is a poor fellow mathematician who meets this glass with his forehead every week.
What’s next for nopilot?
We designed a new chassis with additional sensors and selected the just-released Nvidia Jetson Nano as the GPU. The total cost of the setup is less than $1k. We called our new platform MULE (Multipurpose Unmanned Learning-based Electric vehicle). I would also like to emphasize that we do not concentrate on hardware: our business model is based on software IP licensing. In our case, the hardware is only a tool for running the technology and validating the software.
We strongly believe that the only way to make robots smart enough for real-life tasks is to let them learn. In our desire to teach robots rather than program them, we are not alone. Wayve is the only company in self-driving applying similarly sophisticated models to autonomous cars. Also worth mentioning are OpenAI’s recent success in training a robot hand and ETH Zurich’s “Recovery Quadrupedal Robot using Deep Reinforcement Learning”.
We will continue our experiments with various models, both in simulation and in the real environment. If you want to watch our robot “grow up” and become more sophisticated, do not forget to join our journey!