Published in The Startup

Is “Sensor Fusion” The Magic Answer?

In my last article, I discussed the debate between cameras and LiDAR sensors as the “eyes” of the autonomous vehicle. This article deals with the complicated process of “sensor fusion.” It is the second part of the chapter on sensors for autonomous vehicles in my book The Future is Autonomous: The U.S. and China Race to Develop the Driverless Car.

“Sensor fusion” is used to detect when a camera or other sensor is producing inaccurate, “noisy” data, for example when rain or fog blurs camera visibility. The system then uses filter algorithms so another sensor can compensate for the noisy camera data during a rainstorm. This approach requires more sensors, which increases the cost of the vehicle itself. It also requires more computer engineers to program these filters into the car’s ADAS “brain,” increasing the R&D and maintenance costs of running an autonomous vehicle company unless the companies create new business or financial frameworks.

Sensor Data Fusion Offers a Potential Solution, But at a Price

The most important criticism of LiDAR sensors is their cost. The cost matters because, according to the 2019 Strategy& Digital Auto Report by PricewaterhouseCoopers, autonomous driving systems will add twelve to twenty-two percent to the price of a vehicle, and most of that increase comes from LiDAR sensor systems. Velodyne, a common LiDAR supplier and the inventor of LiDAR sensors for vehicles, priced its sensors at as much as $8,000 each before recently cutting that price in half.

Different autonomous vehicle companies have different standards for how many LiDAR sensor “pucks” each vehicle needs. Waymo, for example, uses three custom-built LiDAR sensors in every one of its new fifth-generation Jaguar I-Pace autonomous electric SUVs. One sensor covers longer ranges; Waymo claims it can see clearly three hundred meters away, compared with Velodyne’s maximum range of two hundred meters. Each vehicle also includes sensors for short- and medium-range visibility.

PwC’s 2019 Digital Auto Report states that suppliers and OEMs, or original equipment manufacturers (organizations that build devices from components bought from other organizations), would have to cut technology costs by sixty-five to seventy-five percent by 2030 for autonomous vehicles to be profitable.

I spoke to a business consultant and friend who specializes in advising new technology companies, including autonomous vehicle companies, and who asked that his name and company remain anonymous. I asked him about the drastic reduction in the technology cost of autonomous vehicles called for in the PwC report. He began by talking about the automotive industry from the perspective of the OEMs. He said, “For the OEMs, it’s like a ticket to the future (for potentially significant profits) …long term they remain very confident about when they check this ticket, but their primary focus is still on the conventional vehicles because that’s how they are making money.” Autonomous vehicles need both mature technology and a mature business strategy before they can be produced on a large scale and become profitable.

Why does the added cost of LiDAR sensors make Musk and Professor X hesitant to join the rest of the industry in adding LiDAR sensors to their autonomous driving “safety stack”? Elon Musk, whose company Tesla recently sold its one millionth electric vehicle, would like to begin producing large numbers of autonomous Tesla vehicles as soon as possible. Even if Tesla made its own LiDAR sensors as Waymo does, it would be hard to mass-produce five hundred thousand Tesla vehicles with such a technically complex sensor system. It would also be difficult, if not impossible at this time, to price the vehicles so that individual consumers could afford them.

Meanwhile, Professor X’s hesitation to include LiDAR sensors at AutoX is based, at least in part, on his personal background. Because he grew up in a poor family that never owned a car, he says “autonomous driving should not be a luxury” and wants to make riding in an autonomous vehicle an option for everyone.

It’s up for debate whether these statements are made for publicity or out of genuine altruism. Professor X has a PhD from MIT and specialized training in computer vision techniques that allow cameras at different angles to create 3D images of the world around the vehicle. His statement could therefore be a combination of altruism and speaking from a position of technical expertise.

At Tesla’s Autonomy Day, Andrej Karpathy, senior director of AI at Tesla, described how cameras provide the data necessary for the vehicle to drive autonomously. Karpathy claims the world is built for visual recognition. According to Karpathy, “In that sense, LiDAR is really a shortcut.” He continued, “It sidesteps the fundamental problems, the important problem of visual recognition, that is necessary for autonomy. It gives a false sense of progress and is ultimately a crutch.” Tesla believes the higher-resolution images captured by cameras provide better data for object recognition, so the automated driving system “brain” can more easily recognize and avoid obstacles.

As a test of the legitimacy of this approach, Professor X bought fifty-dollar cameras at Best Buy and added them to an AutoX vehicle equipped with his “computer learning” and autonomous driving systems. “It could not be cheaper than that,” Professor X said. In the video of the test, the car handled a number of different driving tasks with ease. While this was only a short demonstration, it showed that a vehicle can drive autonomously even when it relies on cheap cameras for its visual data.

Professor X and Elon Musk do not use only cameras in their vehicles. Although they are outliers in rejecting LiDAR, their vehicles use ultrasonic sensors, short-range sensors that detect stationary obstacles for tasks like parking. Their vehicles also have GPS so the vehicle knows where it is, as well as odometers and wheel sensors that measure speed from the displacement of the wheels. The vehicles also carry radar, which can detect where objects are in relation to the vehicle and determine their velocity, although radar’s limited image resolution means it cannot properly identify what an object is.
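The wheel-sensor speed estimate mentioned above boils down to simple arithmetic: distance traveled is rotations times circumference, and speed is distance over time. This sketch is my own illustration (the function name and unit conversion are mine), not code from Tesla or AutoX:

```python
import math

def wheel_speed_mph(wheel_diameter_m: float, rotations: float, interval_s: float) -> float:
    """Estimate vehicle speed from wheel rotations over a time interval.

    distance = rotations * wheel circumference; speed = distance / time.
    """
    circumference_m = math.pi * wheel_diameter_m
    distance_m = rotations * circumference_m
    speed_m_per_s = distance_m / interval_s
    return speed_m_per_s * 2.23694  # convert meters/second to miles/hour

# Example: a 0.65 m wheel turning 10 times in one second
# covers about 20.4 m, roughly 45.7 mph.
```

In practice this estimate drifts with tire wear and wheel slip, which is one reason it is fused with GPS and other sensors rather than trusted on its own.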

Recently there have been two significant developments related to LiDAR sensors. The first is that Bosch, the popular Tier-1 parts supplier for vehicles, announced at the Consumer Electronics Show in Las Vegas, Nevada in January 2020 that it will be producing LiDAR sensors for autonomous vehicles.

This is significant because Bosch aims to lower the cost of the technology by exploiting economies of scale. “By filling the sensor gap, Bosch is making automated driving a viable possibility in the first place,” said Harald Kroeger, a member of Bosch’s management board. It is still too early to know whether this will pan out, but because Bosch is a trusted Tier-1 supplier of other vehicle components, like radar and cameras, it could ease worries about the price of LiDAR sensors and allow production of the technology to scale up.

Another new development of note is the Cheetah LiDAR sensor system designed by Innovusion, Inc. This LiDAR system uses a unique rotating-polygon approach combined with proprietary detector electronics, advanced optics, and sophisticated software algorithms. While the details may be confusing to non-technophiles, the result is that the Cheetah system has a standard detection range of two hundred meters and can detect objects as far as two hundred eighty meters away. The rotating-polygon design allows for image-quality resolution of three hundred vertical pixels.

The innovative design also allows the Cheetah system to run at under forty watts, which Innovusion claims makes it the most energy-efficient high-quality LiDAR system available. The price of thirty-five thousand dollars in low quantities could scare people off, but Innovusion claims that with this system as the primary sensor in the “safety stack,” only one LiDAR sensor would be needed per vehicle.

Assuming the vehicle is a new fifth-generation Waymo autonomous vehicle equipped with all of these different sensors, how exactly does “sensor fusion” work to give the vehicle the best possible perception of its surroundings? Each sensor has different advantages and disadvantages. Cameras are the best tools for detecting roads, reading signs, and recognizing other vehicles or pedestrians. LiDAR sensors are better at accurately estimating the vehicle’s position relative to other objects. Radar is used to accurately estimate the speed of the objects it detects.

To merge the data in “sensor fusion,” engineers use an algorithm called the Kalman filter. It is one of the most popular algorithms for data fusion and is used by cell phones and satellites for navigation and tracking. It was most famously used during the Apollo 11 mission to send the crew to the moon and bring them back.

How the Kalman filter works can get very technical and involves complex mathematics. In short, a Kalman filter can fuse data from two different sensors at once to estimate the state of a system. This state can be dynamic (evolving with time) and estimated in the present (filtering), the past (smoothing), or the future (prediction).
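To make the predict/update cycle concrete, here is a minimal one-dimensional Kalman filter sketch in Python. The class name, default process-noise value, and units are illustrative assumptions of mine, not code from any autonomous vehicle:

```python
class Kalman1D:
    """Minimal one-dimensional Kalman filter: track a single quantity
    (e.g. a pedestrian's speed) along with how uncertain we are about it."""

    def __init__(self, estimate: float, variance: float):
        self.x = estimate   # current state estimate
        self.p = variance   # uncertainty (variance) of that estimate

    def predict(self, motion: float = 0.0, process_var: float = 0.1):
        # Predict step: project the state forward; uncertainty grows
        # because the world may have changed since the last measurement.
        self.x += motion
        self.p += process_var

    def update(self, measurement: float, sensor_var: float):
        # Update step: blend the prediction with a new measurement,
        # weighted by their variances via the Kalman gain k.
        k = self.p / (self.p + sensor_var)
        self.x += k * (measurement - self.x)
        self.p *= (1.0 - k)
```

Each `predict` call grows the uncertainty `p`, and each `update` shrinks it, pulling the estimate toward whichever source (prediction or measurement) carries the lower variance. Feeding measurements from two sensors into the same filter, one after the other, is a simple form of data fusion.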

For example, an autonomous vehicle’s cameras could indicate that a jaywalking pedestrian in the middle of the road is crossing the street at eight miles per hour. The radar, however, could say the pedestrian is crossing at only five miles per hour. By fusing the data from the different sensors with a Kalman filter, the “noise” generated by each sensor is reduced.

This allows the vehicle to judge more accurately the speed at which the pedestrian is crossing the road. The same approach can measure an object’s distance from the vehicle more accurately when LiDAR and radar data are merged. Together, the different sensors produce a more accurate perception of the world around the vehicle and allow it to drive more safely.
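For a single quantity like the pedestrian’s speed, this fusion reduces to an inverse-variance weighted average: the sensor you trust more (lower variance) gets more weight, and the fused estimate is less noisy than either input. The variance numbers below are made up for illustration:

```python
def fuse(est_a: float, var_a: float, est_b: float, var_b: float):
    """Fuse two noisy estimates of the same quantity.

    Each estimate is weighted by the inverse of its variance; the fused
    variance is smaller than either input, so the 'noise' shrinks.
    """
    fused_var = 1.0 / (1.0 / var_a + 1.0 / var_b)
    fused_est = fused_var * (est_a / var_a + est_b / var_b)
    return fused_est, fused_var

# Camera says 8 mph (variance 4.0); radar says 5 mph (variance 1.0).
# Radar is better at speed, so the fused estimate lands near 5: 5.6 mph.
speed, var = fuse(8.0, 4.0, 5.0, 1.0)
```

This matches the intuition in the pedestrian example: because radar measures speed more reliably than cameras, the fused speed sits much closer to the radar reading.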

The specific mechanism for merging these sensors can be incredibly complex. The raw data differ vastly between sensors: cameras produce image data, while LiDAR sensors generate a 3D “point cloud.” The camera data would need to be scaled down to match the resolution of the LiDAR data, and then a scaling algorithm would need to restore the image resolution of the detected objects. All of this must also happen in real time, or as close to it as possible. Autonomous vehicle companies are working hard to perfect the vehicle’s perception of the world around it, and developing these “sensor fusion” algorithms is the best way to make that happen.
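One common way to reconcile the two data formats is to project each 3D LiDAR point into camera pixel coordinates, so that camera detections (a pedestrian, a sign) can be matched to specific points in the cloud. Here is a minimal sketch assuming an ideal pinhole camera; the focal-length and image-center values are invented parameters for illustration, not from any real sensor:

```python
def project_point(x: float, y: float, z: float,
                  fx: float = 800.0, fy: float = 800.0,
                  cx: float = 640.0, cy: float = 360.0):
    """Project a 3D LiDAR point (camera coordinates, z pointing forward,
    in meters) onto the image plane of an ideal pinhole camera.

    fx/fy are focal lengths in pixels; (cx, cy) is the image center.
    Returns pixel coordinates (u, v), or None if the point is behind
    the camera and therefore not visible.
    """
    if z <= 0:
        return None
    u = fx * (x / z) + cx   # horizontal pixel coordinate
    v = fy * (y / z) + cy   # vertical pixel coordinate
    return u, v
```

A real pipeline adds lens-distortion correction and a calibrated transform between the LiDAR and camera mounting positions, but the core idea is this perspective division by depth.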

Looking at the Road Ahead

Ultimately, no sensor can work alone for an autonomous vehicle to drive safely. I asked the business consultant quoted earlier whether cameras could replace LiDAR sensors on autonomous vehicles. He responded immediately, saying, “I am very skeptical about the Tesla technological focus on cameras and I know a lot of the people I’ve talked to in the industry are very skeptical about that technology as well.” He went on to say, “The cameras are constrained by a lot of weather conditions…For one of my studies I was looking into the camera route and the cameras just can’t take you to level five (fully autonomous) because you need a backup and you need reserve systems for full security…It’s irresponsible tech to focus only on the camera side.”

The criticism that cameras do not perform well in many weather conditions was mentioned earlier by Soroush Salehian of Aeva, and it is a common one among vehicle manufacturers. The key takeaway is that, while Professor X can run a short demo using cheap cameras, a vehicle relying primarily on cameras will not be able to drive autonomously at Level 4 or 5 unless it also includes LiDAR sensors and the full “safety stack” of sensors.

The rate of new technology advancements for autonomous vehicles is impressive. After all, LiDAR sensors have only been used on vehicles for fifteen years. Researchers will continue to develop new and cheaper technologies to improve the vehicle’s perception of the world around it. However, removing sensors from the “safety stack” to cut the price is not the answer for autonomous vehicle companies. The answer to reducing costs while maintaining safety lies in creating new business and financial models.

There appears to be no easy solution to this problem. Companies must focus on “sensor fusion,” combining sensor data to take advantage of each sensor’s strengths and minimize its weaknesses. Joint financing, having third-party companies test the vehicle’s autonomous driving “brain” to reduce R&D costs, and deploying autonomous vehicles in shared “robotaxi” fleets are all ways to reduce the costs of these vehicles. My book explores several of these new business models, and how they develop will be critical to the success of autonomous vehicles in both the US and China.

In my next article I will discuss the basics of automation and how the vehicle uses this sensor data to perceive the world through a deep learning neural network. The automated driving system is best described by comparing it to the human brain. After all, the goal of machine learning is to replicate how the human brain functions: by repeating a series of driving tasks many times, the system learns to perform them more smoothly and quickly so the vehicle can drive safely.

Please join me Friday, February 12 from 5:30–7pm EST for the launch party for my book The Future is Autonomous: The U.S. and China Race to Develop the Driverless Car. There will be a Q&A about the book, stories from the entire writing journey, prize giveaways, and a generally festive party atmosphere! It is also Lunar New Year, so Happy New Year everyone!


Phillip Wilcox
