The heart of a drone

OSCAR Team
OSCAR
6 min read · Dec 13, 2018

The driver is sleeping, but the car is on the go. Scared? Not if the vehicle does not need to be controlled by a driver.

The automotive industry is entering the self-driving revolution. An unmanned vehicle can drive much more safely than a person, being free of the human factor. The wide distribution of autonomous cars will reduce the number of road accidents and save lives. It will also save the time that people spend on driving.

The StarLine open-source car research project is a technological platform that brings the best engineering minds of Russia together to develop autonomous driving technologies. We call the platform OSCAR, which stands for Open-Source CAR, and we intend to make every single line of code related to the vehicle open to the community. This post starts a series of articles that will disclose the secrets of self-driving cars.

StarLine open-source car

OSCAR platform anatomy

At StarLine, we always start with users. As an autonomous vehicle user, one wants to get from A to B safely, comfortably, and on time. User stories vary, ranging from going to work or shopping to spending time with family and friends without the need to watch the road.

StarLine OSCAR platform

So, the upper level of the platform consists of user applications and environments, as there are going to be three groups of users: those who actually ride in the car, commercial users, and platform developers. The next level covers server-side work, including HD maps and the simulation module. The APIs supporting the vehicle and serving web and mobile applications are also at this level. The software level is about creating the programs that will be embedded in the car and ride in its trunk. The two lower levels of the platform are vehicle-related work: investigating the car's digital interface and installing its equipment.

Vehicle level

There is a communication network inside the car called the Controller Area Network, or CAN. It is the vehicle's internal digital bus, and all of the car's devices are connected to it in parallel to send and listen for data. Commands to devices can also be transferred via the CAN bus. With access to it, one can start the engine, unlock the doors, or fold the car's mirrors using digital commands.
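As a concrete illustration, a classic CAN frame can be packed and unpacked in a few lines of Python using the frame layout of Linux's SocketCAN interface. The command ID 0x2F1 and the one-byte "unlock" payload below are made up for illustration; real IDs and payloads are model-specific, and discovering them is exactly the reverse-engineering work described here.

```python
import struct

# Linux SocketCAN can_frame layout: 4-byte ID, 1-byte length (DLC),
# 3 padding bytes, then 8 data bytes -- 16 bytes in total.
CAN_FRAME_FMT = "<IB3x8s"

def pack_frame(can_id: int, data: bytes) -> bytes:
    """Pack a classic CAN frame (the ID and payload here are hypothetical)."""
    if len(data) > 8:
        raise ValueError("a classic CAN payload is at most 8 bytes")
    return struct.pack(CAN_FRAME_FMT, can_id, len(data), data.ljust(8, b"\x00"))

def unpack_frame(raw: bytes):
    """Recover the ID and the meaningful payload bytes from a raw frame."""
    can_id, dlc, data = struct.unpack(CAN_FRAME_FMT, raw)
    return can_id, data[:dlc]

# Example: a made-up "unlock doors" command on ID 0x2F1.
frame = pack_frame(0x2F1, bytes([0x01]))
```

Every device on the bus sees this same 16-byte frame; only the ones that recognize the ID act on it.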

CAN bus and devices in the car

To control the vehicle's acceleration and steering, we need to access this interface and figure out the relevant packet types. Our company is a manufacturer of electronic devices, so we developed our own CAN bus adapter, which we are now using to investigate the protocol. Currently we are able to accelerate, decelerate, and steer using a laptop.

Controlling the vehicle via its digital interface from laptop

Hardware level

The second working area is fitting the car out with sensors. There are a number of approaches to equipping an unmanned vehicle. For example, some companies use lidars, while others reject them, limiting themselves to the readings of other sensors.

Equipping the first prototype

We are going to install several mono and binocular cameras, radars, lidars, and satellite navigation. Every device, from the cameras to the RTK GNSS unit, needs tuning before it can be installed. By the time this article was written, we had finished the satellite navigation and the computing unit, which are necessary for our first prototype.

Software level

At the software level we will follow the classical approach, which comprises five modules: perception, fusion, planning, localization, and control.

Data from the cameras is used for object detection. Radars use radio waves to obtain information about obstacles and their radial velocities. Lidars provide multiple individual distance measurements to objects in the environment, supplying the car with a so-called point cloud. The GNSS RTK module uses satellite data to localize the vehicle with centimeter accuracy.
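To make the point-cloud idea concrete, here is a minimal sketch of how one 2D lidar sweep of (range, bearing) measurements becomes Cartesian points. Real lidars also report a vertical angle and a return intensity, which we omit for brevity; the specific ranges below are illustrative.

```python
import math

def scan_to_points(ranges, angle_min, angle_step):
    """Convert a flat list of range readings into (x, y) points.

    Each reading i is taken at bearing angle_min + i * angle_step (radians).
    """
    points = []
    for i, r in enumerate(ranges):
        a = angle_min + i * angle_step
        points.append((r * math.cos(a), r * math.sin(a)))
    return points

# Three readings a quarter-turn apart: ahead, to the left, behind.
cloud = scan_to_points([1.0, 2.0, 1.5], angle_min=0.0, angle_step=math.pi / 2)
```

Stacking millions of such points per second is what produces the dense clouds seen in lidar visualizations.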

Lidar data example from https://velodynelidar.com

The data from the first three sources is then fused to derive information about the dynamic obstacles moving around the car.

At the same time, both satellite and lidar readings are used to solve the SLAM problem, which stands for simultaneous localization and mapping. This is an approach for building a map of an unknown environment while keeping track of the machine's location within it. This information is used to calculate the 6-degrees-of-freedom pose, which includes three spatial coordinates and three orientation angles.
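The simplest ingredient of localization can be shown with 2D dead reckoning: integrating speed and yaw rate into a pose over time. Real SLAM corrects this drifting estimate with lidar scans and GNSS fixes, which this sketch does not attempt; the numbers are illustrative.

```python
import math

def step(pose, v, omega, dt):
    """Advance a 2D pose (x, y, heading) by one time step.

    v is forward speed (m/s), omega is yaw rate (rad/s), dt is the step (s).
    """
    x, y, theta = pose
    return (x + v * math.cos(theta) * dt,
            y + v * math.sin(theta) * dt,
            theta + omega * dt)

# Drive straight ahead at 1 m/s for one second in ten 0.1 s steps.
pose = (0.0, 0.0, 0.0)
for _ in range(10):
    pose = step(pose, v=1.0, omega=0.0, dt=0.1)
# pose is now approximately (1.0, 0.0, 0.0)
```

Small errors in v and omega accumulate without bound, which is exactly why the absolute corrections from SLAM and GNSS are needed.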

Planning the vehicle's local trajectory comes next. The final step is the control module, which actually executes the trajectory built during path planning.
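The five modules above can be sketched as one tick of a sense-fuse-localize-plan-act loop. Everything below is stubbed out: the function names, data shapes, and placeholder outputs are our assumptions for illustration, not the actual OSCAR code.

```python
def perceive(camera_img, radar_hits, lidar_cloud):
    """Detect obstacles in each sensor's raw data (stubbed)."""
    return {"camera": [], "radar": radar_hits, "lidar": lidar_cloud}

def fuse(detections):
    """Merge per-sensor detections into one list of dynamic obstacles."""
    return detections["camera"] + detections["radar"] + detections["lidar"]

def localize(gnss_fix, lidar_cloud):
    """Estimate the vehicle pose (here just the raw GNSS position)."""
    return gnss_fix

def plan(pose, obstacles, goal):
    """Produce a short local trajectory toward the goal (straight-line stub)."""
    return [pose, goal]

def control(trajectory):
    """Turn the trajectory into actuator commands (placeholder values)."""
    return {"throttle": 0.1, "steering": 0.0}

# One tick of the loop, with empty sensor inputs:
obstacles = fuse(perceive(None, [], []))
pose = localize((0.0, 0.0), [])
commands = control(plan(pose, obstacles, goal=(10.0, 0.0)))
```

In a real car this loop runs many times per second, and each stub becomes a substantial subsystem of its own.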

For now, we are busy tuning and installing the equipment, which is the first step in the pipeline, and we are also working on the car's controls.

Cloud

Server interiors will consist of four parts:

  • HD maps
  • telemetry API
  • simulation module
  • commands API

We will first need storage for maps in order to localize the car properly, and a telemetry service to analyze data. This is our early server architecture layout, which includes the two. Later on, we will extend it with a commands API so that we are able to send control signals to the vehicle, and we will also add the simulation module.
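As a sketch of what the telemetry service could receive, here is a hypothetical telemetry record serialized to JSON. The field names and the vehicle identifier are purely illustrative assumptions, not the actual OSCAR API.

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class Telemetry:
    """One hypothetical telemetry sample sent from the car to the server."""
    vehicle_id: str
    timestamp: float   # Unix time, seconds
    lat: float         # degrees
    lon: float         # degrees
    speed_mps: float   # meters per second

def to_json(t: Telemetry) -> str:
    """Serialize a sample for an HTTP POST to the telemetry API."""
    return json.dumps(asdict(t))

sample = Telemetry("oscar-proto-a", 1544659200.0, 59.93, 30.31, 4.2)
payload = to_json(sample)
```

Accumulating such records server-side is what makes later analysis of drives possible.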

First server architecture layout

HD maps offer a complete, centimeter-accurate depiction of the real world, including everything relevant to road navigation, such as lanes, signs, positions of traffic lights, and road boundaries. They can also contain lidar data to provide insight into the surrounding world. High-definition maps are essential for self-driving.
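A toy fragment can illustrate the idea: lanes stored as centerline polylines, plus a nearest-lane query of the kind localization relies on. Real HD-map formats are far richer (lane boundaries, topology, traffic rules), and the coordinates below are invented for illustration.

```python
import math

# Two parallel lanes, each a polyline of centerline points (meters).
HD_MAP = {
    "lane_1": [(0.0, 0.0), (10.0, 0.0)],
    "lane_2": [(0.0, 3.5), (10.0, 3.5)],
}

def nearest_lane(x, y):
    """Return the lane whose centerline points lie closest to (x, y)."""
    def dist(points):
        return min(math.hypot(x - px, y - py) for px, py in points)
    return min(HD_MAP, key=lambda name: dist(HD_MAP[name]))

lane = nearest_lane(5.0, 0.4)  # a car slightly off lane_1's centerline
```

Matching the vehicle's estimated pose against such geometry is one way an HD map turns centimeter accuracy into a usable lane assignment.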

We are now trying to contact commercial HD map providers to see if any of them have coverage in Russia.

Visualization of an HD map from 360.here.com

Special simulators are needed to generate road situations for troubleshooting and testing. With virtual simulation, we will be able to increase the robustness of our algorithms by testing them on billions of miles of custom scenarios and rare cases, in a fraction of the time and at a fraction of the cost of doing so on physical roads.
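Scenario generation for such testing can be as simple as sampling parameterized events with a seeded random generator, so that every failing case can be replayed exactly. The cut-in scenario and its parameter ranges below are our own illustrative assumptions, not a real simulator's API.

```python
import random

def make_scenarios(n, seed=0):
    """Sample n random highway cut-in scenarios, reproducibly.

    A fixed seed makes every batch replayable, which matters when a
    planner failure needs to be reproduced and debugged.
    """
    rng = random.Random(seed)
    scenarios = []
    for _ in range(n):
        scenarios.append({
            "ego_speed_mps": rng.uniform(5.0, 20.0),
            "cutin_distance_m": rng.uniform(10.0, 50.0),
            "cutin_speed_mps": rng.uniform(3.0, 15.0),
        })
    return scenarios

batch = make_scenarios(1000, seed=42)
```

Feeding batches like this into a full simulator such as CARLA is what scales testing to the rare cases that almost never occur on real roads.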

Carla simulator from https://carla.readthedocs.io

Roadmap

We started in July by forming a concept, created a team, and pushed ourselves toward prototype A. Prototype A is a car that is capable of driving by waypoints but requires the intervention of an operator to avoid obstacles. The second prototype, which we plan to build by the spring of 2019, is a car that is able to bypass obstacles without human control.

OSCAR project roadmap

So what is in the heart of a drone?

At the heart of the self-driving car are complex algorithms, data streams, high-performance interfaces, and sophisticated equipment. And it is not only the hardware and software that ride along with the car, but also the server and client applications the vehicle needs. This heart is open, and we rely on the initiative of the StarLine team as well as on the aid of the open-source community.

But the main component of our vehicle is our desire to make people's lives safer and more comfortable. Technology is crucially important to us, but we believe that technological and scientific knowledge exist to serve people, and that is where the true heart of our aspiration rests. Our ambition is to bring closer the day when road safety is provided by machines and people are free to devote more time to what matters to them. If you share our attitude, we are delighted to invite you to join us in creating the technology of tomorrow.

Open-source self-driving car. From Russia with love. Read more at smartcar.starline.ru