Starsky’s Approach To Solving Long-Haul Trucking — And Why It Doesn’t Include LIDAR
We’re hyper-focused on automating a highly constrained use case: the operation of rigs on highways. For the tricky parts we rely on tele-operation by human drivers.
The original article was published in IEEE Spectrum
Last year a partner at a well-known Silicon Valley venture firm wouldn’t take a meeting with us because our autonomous system didn’t employ a LIDAR. According to him, we should use all the sensors available to us to ensure safety.
However, the issue keeps coming up. Why don't you use LIDAR, people ask. In this blog I'm going to sketch out Starsky's approach, and then come back specifically to LIDAR to explain why it doesn't fit our strategy.
Like a lot of players in the industry, Starsky Robotics is working to automate the act of driving. However, we’re only focused on solving a very specific problem using automated driving features. We are not trying to automate every bit of driving. For example, we’re not trying to invent a vehicle that can navigate crowded and chaotic urban environments.
Rather, we sidestep that issue entirely. We have focused on a single problem in the logistics and transportation industry — and then we're employing elegant engineering to solve it.
The problem: Not enough truck drivers exist. Truck driving as a job has some major drawbacks. It requires people to stay on the road for long hours, keeping them away from their family and friends for weeks and sometimes months at a time. Plus it’s tedious. The shortage of truck drivers makes it more difficult to move goods about the country — delaying deliveries and increasing the price of the things Americans want to buy.
To solve the driver shortage, we’re automating the easiest part of that task — highway driving. To get the tractor-trailer onto and off the highway, we employ trained and experienced truck drivers, who operate the rigs remotely (known as tele-operation or teleop). The only part of the driving that is automated is the highway driving. We also have humans in the loop to supervise during tricky highway situations, which amounts to less than one percent of the actual driving.
Take one typical journey: A run that goes from a distribution center in Hayward, California, to another distribution center in Georgia. The warehouse in Hayward is three miles from the highway. Then there are 2,613 miles of driving along interstate-class highways. And then, in Georgia, the destination distribution center is less than a mile from the highway. That's about four miles of non-highway driving in total.
That's what we're automating — the 99 percent of the journey on highways.
(Actually, in the case of the California-to-Georgia journey, that’s 99.85 percent, but you get the idea.)
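As a quick sanity check on those numbers, here's the arithmetic as a trivial sketch (the mileage figures come straight from the example above):

```python
# Highway fraction for the Hayward-to-Georgia run described above.
non_highway_miles = 3 + 1   # Hayward leg + Georgia leg
highway_miles = 2613        # interstate-class driving

total_miles = highway_miles + non_highway_miles
highway_fraction = highway_miles / total_miles

print(f"{highway_fraction:.2%}")  # → 99.85%
```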
To solve highway driving, we took a hard look at the sensors that humans use. Highways are designed with long straightaways and gentle curves so that humans can drive them easily at legal speeds, using human eyes and human reaction times. So as our primary sensor we used the tool that is closest to the human eye — the camera. The specific cameras we use are automotive-grade, which means they've been engineered to work in the variety of conditions that confront an automobile over its life cycle. They're also relatively cheap and easily available off the shelf.
Another great thing about cameras is that they are highly customizable based on the use-case. Our prototype truck employs seven different cameras. Each one is configured and oriented to monitor a specific field of view, for a full 360 degrees of coverage.
Cameras alone, however, are not sufficient. For more reliability in our measurements we need parallel detections based on a different set of physics. The argument for radar is simple and strong. Radar has existed since the Second World War. It is an automotive-grade sensor that is on the market at a reasonable cost and has well-documented, well-understood limitations. The output that radar provides our software is reliable. Radar is really good at sensing the existence of a potential obstacle, along with its velocity. The limitation is that it doesn't do a great job of identifying the precise location of the obstacle. It is also prone to creating a lot of false positives: a manhole cover, a plastic bag, or an overpass can all create returns. To filter out the false positives, we fuse the radar data with the cameras so we have more confidence identifying potential obstacles.
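To make the sensor-fusion idea concrete, here is a minimal, hypothetical sketch of using camera detections to confirm radar returns. The structures, thresholds, and function names are invented for illustration; a real perception pipeline is far more involved.

```python
from dataclasses import dataclass

@dataclass
class RadarReturn:
    range_m: float       # distance to the potential obstacle (radar is good at this)
    velocity_mps: float  # relative velocity (radar is good at this too)
    azimuth_deg: float   # bearing (radar localizes this poorly)

@dataclass
class CameraDetection:
    azimuth_deg: float   # bearing of a visually detected object
    confidence: float    # detector confidence, 0..1

def confirm_obstacle(radar: RadarReturn,
                     camera_detections: list[CameraDetection],
                     max_bearing_error_deg: float = 2.0,
                     min_confidence: float = 0.5) -> bool:
    """Keep a radar return only if some camera detection agrees on bearing.

    A manhole cover or an overpass produces a radar return but no matching
    camera detection at road level, so it gets filtered out here.
    """
    return any(
        abs(det.azimuth_deg - radar.azimuth_deg) < max_bearing_error_deg
        and det.confidence >= min_confidence
        for det in camera_detections
    )

# A radar return with no camera agreement is treated as a false positive.
ghost = RadarReturn(range_m=80.0, velocity_mps=0.0, azimuth_deg=0.0)
print(confirm_obstacle(ghost, []))  # → False
```

The design point is simply that each sensor vetoes the other's weakness: radar supplies range and velocity, cameras supply bearing and appearance, and an obstacle is only acted on when both agree.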
Going down the list of sensors with different physics, we also considered LIDAR.
There are things that LIDAR does well. LIDAR is a really good low-level obstacle detector. And it also provides good visibility in low-light conditions. But after a lot of consideration we concluded that LIDAR’s weaknesses outweigh those advantages in the context of our use-case.
First, let's consider the range of LIDAR. For LIDAR to provide a significant safety benefit it has to provide low-level obstacle detection at the distances we require. For fully loaded trucks, that's 150 to 200 meters at highway speeds. We require that range so that we have enough time to take appropriate action when obstacles are identified. At such distances, the long-range LIDARs on the market today have return points spread too far apart to provide the information we require, rendering them useless for our purpose.
Next, LIDAR doesn’t have the reliability we require. I mentioned before that we use components that are automotive grade, that we’re certain will work over the lifetime of our vehicles, in all the various conditions we require. The current state of LIDAR isn’t there yet. Some of them tend to spin themselves apart. Others are stuck together with glue. Their faulty construction means that many of the sensors will fall apart after three to six months of use.
It is possible that LIDAR would help us classify obstacles a hundred meters ahead: whether that object on the road is a mattress, a couch, a person, or an alligator. For us, however, it is easier to flag a "strange object" and ask a human tele-operator for help than to rely on software to confidently classify the object and make a decision. Remember, we're using tele-operators to supervise our system when conditions become tricky. We don't have to know what that strange object up ahead is. All we have to know is that there is a strange obstacle. The percentage-point safety improvements that we could get from LIDAR, we can also get by incorporating humans into the decision-making process, without LIDAR's drawbacks.
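That "flag it and ask a human" policy can be sketched in a few lines. The labels, threshold, and action names here are invented for illustration, not our actual interfaces:

```python
def handle_detection(label: str, confidence: float,
                     min_confidence: float = 0.9) -> str:
    """Decide what to do with a fused camera/radar detection.

    The system never needs to know *what* a strange object is,
    only that something is there and a human should take a look.
    """
    if label == "clear":
        return "continue"               # nothing detected, keep driving
    if confidence >= min_confidence:
        return "automated_response"     # e.g. slow down or change lanes
    return "request_teleoperator"       # escalate anything ambiguous

print(handle_detection("unknown_object", 0.3))  # → request_teleoperator
```

The asymmetry is the point: the classifier only has to be confident to act on its own, and everything else defaults to a human, so low classification confidence degrades into a supervision request rather than a wrong decision.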
We are solving a highly constrained problem. We transport rigs to and from distribution centers. Tele-operators drive the few miles on and off the highway. And then, for the long, boring stretches of highway driving, we use autonomy to maintain the rig at a constant speed on straight-line or gently curved roads. Cameras and radar are all the tools we require to do that.
Why don’t we use LIDAR? Because we don’t need to.
Getting back to that VC I mentioned at the beginning of this blog: Rejecting a design approach just because it doesn't incorporate a particular tool (LIDAR in this case) is not good engineering. It's like refusing to hire carpenters to build a deck because they don't have a resistance-welding torch.
We feel the same way about LIDAR in our highly constrained use case. It’s a bad engineering choice to use a certain tool just because it’s available to you. We don’t do science for the sake of science. We want to use existing and well-established technology to solve an urgent problem today.
We believe that engineering is the application of science with real-world constraints. LIDAR technology, applied to autonomous driving, is a good science project that gets in the way of engineering. When you’re only driving on highways, with human tele-operators available to supervise the system in tricky situations, LIDAR isn’t necessary. And at Starsky Robotics, we’re all about engineering solutions to problems in the most efficient way possible.