Deep Dive on Apollo 2.5

Apollo Auto · Apr 27, 2018

How the new release will put us on the highway at last — and drive development faster, more efficiently than ever.

The road to L5 is still a long one but the goal is closer than it was last week. The reason is Apollo 2.5, released publicly April 19 in Beijing.

Apollo 2.5 Release Event, April 19th, Beijing

Apollo 2.5 enables the next big step, geo-fenced autonomous highway driving. Plus, it has a host of new or upgraded features that will speed the pace of development while reducing barriers, hassles, and costs.

CiDi, a Changsha-based intelligent driving research institute, has integrated Apollo 2.5 into a truck that can run autonomously on geo-fenced highways.

Bottom line: new performance capability in the vehicle, with dramatic efficiency gains for Apollo developers and partners.

That is not a boast — it’s the new reality — and what’s rather amazing is how quickly all of this was put in place. Apollo 2.5 has arrived just four months after January’s 2.0, which itself was a major advance.

Participation from the Apollo community has been the driver. We listened to developer input, then added the features that were most requested and most needed.

Let’s walk through these features with some guidance from Jinghao (Calvin) Miao, senior software architect for the Apollo Platform. Jinghao likes to start by deep-diving into three key areas. They are areas where Apollo 2.5 adds new highway-driving capabilities AND chops down barriers to fast, efficient development.

The three are: camera-based perception, real-time relative mapping, and high-speed planning and control.

Camera-based perception is now streamlined and enhanced in several ways.

Apollo 2.0 used a complex sensor configuration, with a Velodyne 64 LiDAR, two wide-angle cameras, and a radar. Apollo 2.5 pares that down to one monocular wide-angle camera and a radar, cutting sensor costs by 90%. That huge saving, plus a high-efficiency perception algorithm, will help greatly in testing limited-area highway driving.

Furthermore, reducing the sensor load won’t reduce performance. Apollo 2.5 uses a YOLO-based, multitask, deep-learning neural network for obstacle and lane detection and classification. The new perception module also performs a wide range of post-processing tasks, including deriving an obstacle’s 3D properties and heading and extracting lane lines. Here’s how.

  • The perception input is video captured by the camera, which can be treated as a sequence of static 2D images, so the problem is how to calculate the 3D properties of obstacles from those. In our approach, we first use the deep-learning neural network to recognize obstacles in the 2D images, which gives us each obstacle’s bounding box and observation angle. Then we reconstruct the 3D obstacle using a line-segment algorithm, the camera ray, and the camera’s calibration. The whole process can be done in less than 0.1 millisecond. (A simplified sketch follows this list.)
  • Now let’s look at the logic for lane post-processing. First, the deep-learning neural network scans each pixel of the 2D image to determine whether it belongs to a traffic lane, producing a pixel-level lane mask. Then, using connectivity analysis, we connect neighboring lane-line pixels into connected lane-line segments. Polynomial fitting guarantees the smoothness of the lane lines. Based on the current vehicle’s location, we can derive the semantic meaning of each lane line: whether it is the left, right, or a neighboring lane’s line. The lane lines are then transformed into the vehicle coordinate system — for a use to be explained shortly — and sent to the other modules that need them. (This step is also sketched below.)
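To make these steps concrete, here is a minimal, hedged sketch of one common way to recover an obstacle’s 3D position from a 2D detection: the pinhole camera model plus a physical-height prior. The struct and function names, and the height-prior shortcut itself, are illustrative assumptions; they are not Apollo’s actual solver, which combines the bounding box, the observation angle, a line-segment algorithm, and the camera calibration as described above.

```cpp
#include <cmath>

// Pinhole intrinsics of a calibrated camera (illustrative).
struct CameraIntrinsics {
  double fx, fy;  // focal lengths in pixels
  double cx, cy;  // principal point in pixels
};

// Axis-aligned 2D bounding box from the detection network.
struct BoundingBox2D {
  double u_min, v_min, u_max, v_max;  // pixel coordinates
};

// Simplified 3D position estimate in the camera frame.
struct Position3D {
  double x, y, z;  // meters; z points forward along the optical axis
};

// Estimate the obstacle's 3D position by assuming a known physical height
// (e.g. ~1.5 m for a car). Depth follows from similar triangles:
//   z = fy * real_height / pixel_height,
// and x, y come from back-projecting the box center along the camera ray.
Position3D EstimatePosition(const BoundingBox2D& box,
                            const CameraIntrinsics& cam,
                            double real_height_m) {
  const double pixel_height = box.v_max - box.v_min;
  const double z = cam.fy * real_height_m / pixel_height;
  const double u_center = 0.5 * (box.u_min + box.u_max);
  const double v_center = 0.5 * (box.v_min + box.v_max);
  return {(u_center - cam.cx) * z / cam.fx,
          (v_center - cam.cy) * z / cam.fy,
          z};
}
```

And here is a similarly hedged sketch of the tail end of lane post-processing: evaluating a fitted lane-line polynomial in the vehicle coordinate system and deriving its semantic meaning from its lateral offset at the vehicle’s position. The LaneLine struct, the label names, and the default lane width are hypothetical; the real module also performs the pixel-level segmentation, connectivity analysis, and polynomial fitting described above.

```cpp
#include <vector>

// A lane line expressed in the vehicle frame as a polynomial
//   y(x) = c0 + c1*x + c2*x^2 + c3*x^3,
// where x is forward distance and y is lateral offset (left positive).
struct LaneLine {
  std::vector<double> coeffs;  // c0..c3 from polynomial fitting
};

enum class LanePosition { kEgoLeft, kEgoRight, kAdjacentLeft, kAdjacentRight };

// Evaluate the lateral offset of a lane line at forward distance x.
double LateralOffset(const LaneLine& line, double x) {
  double y = 0.0, xp = 1.0;
  for (double c : line.coeffs) {
    y += c * xp;
    xp *= x;
  }
  return y;
}

// Derive the semantic meaning of a lane line from its offset at x = 0:
// small positive offsets are the ego lane's left boundary, small negative
// offsets its right boundary, larger offsets belong to neighboring lanes.
LanePosition Classify(const LaneLine& line, double lane_width = 3.5) {
  const double y0 = LateralOffset(line, 0.0);
  if (y0 >= 0.0) {
    return y0 < lane_width ? LanePosition::kEgoLeft
                           : LanePosition::kAdjacentLeft;
  }
  return -y0 < lane_width ? LanePosition::kEgoRight
                          : LanePosition::kAdjacentRight;
}
```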

Generating a real-time relative map is the next major advance.

In Apollo 1.5 and 2.0, the planning and control modules relied on global localization and HD maps. HD maps have a rich set of map elements, to help plan autonomous driving in complex road conditions — but high-speed traffic scenarios are simpler than those on urban roads. So for Apollo 2.5 we decided to go with a dynamic real-time relative map.

This map is based on the vehicle coordinate system, with the origin point on the vehicle itself. As we all know, the traffic lane is a critical map component: it is what allows a self-driving car to make reasonable driving decisions and plan safe trajectories. In relative maps, traffic-lane data is generated by camera-based perception of the lanes and incorporates a cloud-based guidance line.

By using this method, we can create a relative map that matches the data format of HD maps and updates based on pre-recorded human driving trajectories and lane detection from camera-based perception.
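To make that concrete, here is a minimal sketch, under assumed struct names and a simple pose convention, of how a pre-recorded trajectory in the world frame can be turned into a guidance line in the vehicle frame and combined with the perceived lane lines. The actual relative map is richer and follows Apollo’s HD-map data format.

```cpp
#include <cmath>
#include <vector>

struct Point2D { double x, y; };

// Vehicle pose in the world frame: position plus heading (radians).
struct VehiclePose {
  double x, y, heading;
};

// Transform one world-frame point into the vehicle frame, whose origin is
// the vehicle itself and whose x axis points along the vehicle's heading.
Point2D WorldToVehicle(const Point2D& p, const VehiclePose& pose) {
  const double dx = p.x - pose.x;
  const double dy = p.y - pose.y;
  const double c = std::cos(pose.heading);
  const double s = std::sin(pose.heading);
  return {c * dx + s * dy,    // forward
          -s * dx + c * dy};  // left
}

// A relative map, in this simplified view, is just the perceived lane lines
// plus the cloud-provided guidance line, all expressed in the vehicle frame.
struct RelativeMap {
  std::vector<std::vector<Point2D>> lane_lines;  // from camera perception
  std::vector<Point2D> guidance_line;            // from the recorded trajectory
};

RelativeMap BuildRelativeMap(const std::vector<std::vector<Point2D>>& lanes,
                             const std::vector<Point2D>& recorded_trajectory,
                             const VehiclePose& pose) {
  RelativeMap map;
  map.lane_lines = lanes;  // already in the vehicle frame from perception
  for (const Point2D& p : recorded_trajectory) {
    map.guidance_line.push_back(WorldToVehicle(p, pose));
  }
  return map;
}
```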

Everyone knows the value of HD maps in autonomous driving. They’re highly accurate and suitable for any traffic scenario. But their production costs are high and the turnaround time is relatively long. Many developers have told us they were hampered by the lack of HD maps for testing in their geographic areas.

A relative map has lower accuracy and is limited to certain uses. But it has benefits that can’t be ignored — lower cost and faster turnaround — and it’s much easier to make and supports real-time updating. For anyone working on high-speed scenarios, Apollo 2.5’s relative maps will really help speed up development and testing.

High/low speed unified planning and control is the third of Jinghao’s “big three” areas.

This is another key piece of the puzzle for highway driving. And because we’ve adopted a consistent map-data format in Apollo 2.5, the map engine API can use both types of maps, providing adequate info for the prediction, perception, and planning modules.

Therefore — without too much upgrading and modification — the downstream modules can support high-speed autonomous driving! Because of the unified architecture, you will be able to switch testing scenarios more easily.
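One way to picture that unified access is an abstract map interface with two interchangeable backends, so prediction and planning code is written once against a single API. The class and method names below are hypothetical and chosen only to illustrate the idea; Apollo’s real map engine API is different and far more complete.

```cpp
#include <memory>
#include <string>
#include <vector>

struct Point2D { double x, y; };

struct LaneInfo {
  std::string id;
  std::vector<Point2D> center_line;
};

// Common map interface consumed by the prediction, perception, and planning
// modules, regardless of which map backend is active.
class MapEngine {
 public:
  virtual ~MapEngine() = default;
  virtual bool GetNearestLane(const Point2D& position, LaneInfo* lane) const = 0;
};

// Backed by a pre-built, globally localized HD map (complex urban scenarios).
class HdMapEngine : public MapEngine {
 public:
  bool GetNearestLane(const Point2D& position, LaneInfo* lane) const override {
    // Look up the pre-built HD map using the global position ... (stub)
    return false;
  }
};

// Backed by the real-time relative map (high-speed scenarios).
class RelativeMapEngine : public MapEngine {
 public:
  bool GetNearestLane(const Point2D& position, LaneInfo* lane) const override {
    // Query lanes generated on the fly from camera perception and the
    // guidance line, in the vehicle frame ... (stub)
    return false;
  }
};

// Downstream modules depend only on MapEngine, so switching test scenarios
// is a matter of swapping the backend.
std::unique_ptr<MapEngine> MakeMapEngine(bool high_speed_mode) {
  if (high_speed_mode) return std::make_unique<RelativeMapEngine>();
  return std::make_unique<HdMapEngine>();
}
```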

Besides high-speed autonomous driving, we also added more traffic-sign labels to the planning module, such as stop signs and “Keep Clear” zones.

… And there’s more! Five Cool Tools, in Brief.

This deep dive can’t turn into an all-day dive, so here are quick summaries of five tools that are either upgraded or newly added in Apollo 2.5.

We opened up the Dockerfile, upgraded DreamView (our HMI tool), launched the Apollo Drive Event data collector and the HD Map data collector, and made a big upgrade to the Apollo Simulation Platform. All five will give Apollo developers new capabilities and accelerate their work.

  • Docker provides a unified development environment. Now that we’ve opened the Dockerfile, Apollo’s Docker image is no longer a black box. Developers who are experienced with Docker and the system can modify the Dockerfile to configure the environment. We also provide many installation scripts for dependencies. Developers can add or delete the dependency libraries they need, use their own required versions of dependencies, and even optimize the entire development environment on their own.
  • For the DreamView upgrade in Apollo 2.5, we’ve implemented more visualization tools. DreamView now supports Baidu Maps’ and Google Maps’ high-speed autonomous driving modes. It displays point-cloud and reflection maps, with more diverse map elements, and includes a camera monitoring view that allows you to replay the traffic scenarios if needed. The upgrade also provides a planning and control visualization tool. It should now be much easier to spot issues quickly and extract data from the logs.
  • Drive Event is a new data-collection function in 2.5. It lets humans annotate data, which helps manage data processing and storage more efficiently. The Drive Event data collector supports Apollo’s unified data interface and pipeline.
  • The other new collector, for HD Map data, should be great news to many of you. We’ve had a lot of feedback from developers who lack the capability to create HD maps on their own. With the new collector, which also supports Apollo’s unified data access and interfaces, you don’t have to do it all on your own. Collect your data, upload it to our website, and we’ll help produce the HD map. You then just download the map when it’s ready.
  • We also upgraded our simulation platform on Azure. We included logsim scenarios from real road tests and provided more professional scoring tools. As part of ApolloScape, the simulation platform lets you verify your algorithms.

New Reference Vehicle? No Problem.

Finally, we’ve heard from many people wanting to support a new or additional vehicle with Apollo. Here is a story from Jinghao Miao, illustrating how quick and easy it is:

“Our partner AutonomouStuff asked us to help them port Apollo 2.5 to their autonomous driving platform, GEM. GEM is a small vehicle for transporting people and goods. We gladly accepted, and two of our engineers did the work in three days.

How can a new vehicle platform be supported in such a short time? This is what we did: we used an automatic tool to convert the DBC file that defines GEM’s CAN messages into Apollo’s protocol C++ files, then derived a GEM controller from the base controller to manage the various driving modes and fault handling. Then we registered the GEM controller with the controller factory and activated it by modifying the configuration files.”

“That was all we had to do. It is a very easy process. These things are made possible by Apollo’s flexible software architecture.”
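For a sense of the shape of that work, here is a hedged sketch of the pattern: a vehicle-specific controller derived from a base class and registered with a factory under a name taken from configuration. The class names and the factory API are illustrative assumptions, not Apollo’s actual interfaces.

```cpp
#include <functional>
#include <iostream>
#include <map>
#include <memory>
#include <string>

// Base controller interface: each vehicle type implements mode handling
// and fault handling on top of its own CAN protocol messages.
class Controller {
 public:
  virtual ~Controller() = default;
  virtual void EnableAutoMode() = 0;
  virtual void HandleFault() = 0;
};

// Controller for the GEM vehicle, built on the protocol classes generated
// from the GEM DBC file.
class GemController : public Controller {
 public:
  void EnableAutoMode() override {
    std::cout << "GEM: switching to autonomous driving mode\n";
  }
  void HandleFault() override {
    std::cout << "GEM: fault detected, returning control to the driver\n";
  }
};

// Minimal factory: controllers register a creation function under a name,
// and the active vehicle is selected from the configuration files.
class ControllerFactory {
 public:
  using Creator = std::function<std::unique_ptr<Controller>()>;
  void Register(const std::string& name, Creator creator) {
    creators_[name] = std::move(creator);
  }
  std::unique_ptr<Controller> Create(const std::string& name) const {
    auto it = creators_.find(name);
    if (it == creators_.end()) return nullptr;
    return it->second();
  }

 private:
  std::map<std::string, Creator> creators_;
};

int main() {
  ControllerFactory factory;
  factory.Register("gem", [] { return std::make_unique<GemController>(); });
  auto controller = factory.Create("gem");  // name taken from configuration
  if (controller) controller->EnableAutoMode();
}
```

Under this pattern, supporting another vehicle means adding one controller class, its generated protocol files, and one registration line.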

Driving Forward …

With Apollo 2.5, much more will be possible in the months ahead.

What do you need to do? Want to do? Now is the time for more requests and feedback. Now is the time for new partners, too: Join us, if you haven’t already! Email our Community Manager Zhenni at wuzhenni01@baidu.com to get more information.

Access the new Apollo 2.5 code on our GitHub page now!

Let’s keep growing the Apollo community and put the pedal to the floor. The autonomous driving movement now has the most powerful open-development platform anyone has seen. Wherever you are in the world, we’re moving at China Speed.

And stay posted: Follow us on Twitter @ApolloPlatform and sign up for our newsletter below!

A special thanks to Jinghao Miao, Dong Li, Qi Luo and Liangliang Zhang from the Apollo Team who helped with technical content of this blog post!

Apollo Platform is Baidu’s open source autonomous driving platform. Build your autonomous driving projects with Apollo: github.com/apolloauto.