Why Did Germany and Mercedes-Benz Give the Green Light for L3 Autonomous Driving Together? (Part 2)
Today, autonomous-driving levels have become a widely accepted framework in these discussions. Germany and China have both largely adopted the driving-automation classification standard of SAE (the Society of Automotive Engineers), which grades systems from L0 to L5.
The reason the license granted to Mercedes-Benz is regarded as a historic moment is that the L3 function achieves automated driving in a strict sense. To be rigorous, however, countries worldwide, including China, still classify L3 as conditional automated driving rather than full autonomy.
A vivid and easy-to-understand description of the L3 level is that, at this stage, you can let the car drive itself without keeping your hands on the steering wheel at all times.
It also means that if an accident occurs while the automated driving function is engaged, Mercedes-Benz bears the legal responsibility.
From this we can see that the German regulators were bold enough to write these rules of conduct into law, and that Mercedes-Benz dares to accept liability for accidents under autonomous driving.
But both also set up an exemption clause for themselves: "conditional autonomous driving."
At present, there are three main conditions:
1. Speed limit: the current maximum is 60 km/h. If this speed is exceeded, the driver remains liable in the case of an accident.
2. Scenario limitation: 13,191 kilometers of highways or congested roads with high traffic density throughout Germany.
3. Driver readiness: no sleeping, no continuously looking backward, no leaving the driver's seat; the driver must be prepared to take over the vehicle at any time.
Do these three conditions feel a bit restrictive?
Germany's current conditions for L2 partial-autopilot functions are very relaxed: drivers are allowed to "run" on the road at speeds up to 210 km/h.
If so, it means that as long as the driver keeps his hands on the steering wheel, he can adjust the speed of the vehicle at will; yet once the speed exceeds 60 km/h, Mercedes-Benz is exempt from liability. From this point of view, there is a certain ambiguity.
It reminds us of a topic that has been controversial in the industry for many years — whether the L3 level has practical significance.
Retaining this technological transition phase in the passenger-car sector may actually cause more traffic accidents.
The biggest risk is whether the human driver can refocus quickly enough to take over the vehicle in an emergency, because it takes time for a human driver to return to a highly focused driving state.
This also explains why the automated driving speed must not exceed 60 km/h: it gives humans the reaction time needed to take over the vehicle.
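A back-of-the-envelope calculation makes the speed limit intuitive. The numbers below are illustrative assumptions, not figures from the regulation: we assume a takeover window of roughly 10 seconds (a commonly cited figure for L3 systems) and simply compute how far the car travels in that window at the L3 limit versus Germany's L2 top speed.

```python
# Illustrative sketch: distance a car covers while a human driver refocuses
# and takes over. The 10-second window is an assumption, not a quoted spec.

def takeover_distance(speed_kmh: float, takeover_seconds: float) -> float:
    """Distance in meters traveled during the takeover window."""
    return speed_kmh / 3.6 * takeover_seconds  # km/h -> m/s, then * seconds

# Compare the L3 limit (60 km/h) with Germany's relaxed L2 ceiling (210 km/h).
for speed in (60, 210):
    d = takeover_distance(speed, 10)
    print(f"At {speed} km/h the car covers about {d:.0f} m before takeover")
```

At 60 km/h the car travels well under 200 meters during the assumed window; at 210 km/h it would travel several hundred meters more, which illustrates why the liability boundary sits at the lower speed.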
At present, the most intuitive consequence of L3 autonomous driving on the road is this: using or watching a mobile phone while driving is no longer a dangerous driving behavior, and there will be no penalties or fines for being photographed doing so. This green light from Germany is undoubtedly a boon for car owners who can't stop looking at their phones.
Data Is Meaningful Only If It Is Well Labeled
The mainstream algorithm models for autonomous driving are mainly based on supervised deep learning: the model learns the functional relationship between input variables and dependent (output) variables from known examples. A large amount of structured, labeled data is required to train and tune such a model.
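The supervised setup described above can be sketched with a toy example. This is a minimal illustration, not a production model: we fit a simple linear mapping y = w*x + b to labeled (input, output) pairs by gradient descent, standing in for the far larger labeled road datasets the article describes.

```python
# Minimal sketch of supervised learning: learn the mapping from inputs to
# outputs using labeled (x, y) pairs and plain gradient descent.

def fit_linear(data, lr=0.01, epochs=2000):
    """Fit y = w*x + b to a list of labeled (x, y) examples."""
    w, b = 0.0, 0.0
    n = len(data)
    for _ in range(epochs):
        # Gradients of mean squared error with respect to w and b
        grad_w = sum(2 * (w * x + b - y) * x for x, y in data) / n
        grad_b = sum(2 * (w * x + b - y) for x, y in data) / n
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

# Labeled examples generated from the true relationship y = 2x + 1
data = [(x, 2 * x + 1) for x in range(10)]
w, b = fit_linear(data)
print(round(w, 2), round(b, 2))  # recovers values close to 2 and 1
```

The same principle scales up: a deep network replaces the linear function, and the labeled pairs become annotated camera frames or point clouds, which is why labeling quality directly bounds model quality.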
On this basis, to make self-driving cars more "intelligent" and to form a closed business loop in which self-driving applications can be replicated across different vertical scenarios, the model must be supported by massive amounts of high-quality real road data.
In the field of autonomous driving, data annotation scenes usually include changing lanes and overtaking, passing intersections, unprotected left and right turns without traffic-light control, and complex long-tail scenes such as vehicles running red lights, pedestrians crossing the road, and vehicles illegally parked on the roadside.
Today's artificial intelligence is sometimes called data intelligence: at this stage of development, the more layers a neural network has, the more labeled data it needs.
A New Solution For the Self-Driving Data Annotation Project
ByteBridge, a human-powered and ML-powered data training platform, provides high-quality services to collect and annotate different types of data, such as text, image, audio, and video, to accelerate the development of the machine learning industry.
- ML-assisted capability helps reduce human error through automatic pre-labeling
- Real-time QA and QC are integrated into the labeling workflow, and a consensus mechanism is introduced to ensure accuracy
- Consensus — the same task is assigned to several workers, and the answer returned by the majority is taken as correct
- All work results are fully screened and inspected by both machines and the human workforce
In this way, ByteBridge can ensure a data acceptance and accuracy rate of over 98%.
Communication Cost Saving
On ByteBridge's SaaS dashboard, developers can start labeling projects using the labeling-instruction template and get the results back instantly.
From setting up the labeling briefing online to expert support along the way, communicating instructions is no longer difficult.
3D Point Cloud Annotation Service
ByteBridge's self-developed 3D point cloud labeling tool, quality-inspection tool, and pre-labeling functions deliver high-quality, high-precision 3D point cloud annotation for 2D-3D fusion or 3D images provided by different manufacturers and devices, with one-stop management of labeling, QA, and QC.
3D Point Cloud Annotation Types:
- Sensor Fusion Cuboids: 49 categories, including car, truck, heavy vehicle, two-wheeled vehicle, pedestrian, etc.
- Sensor Fusion Segmentation: obstacle classification and differentiation of lane types
- Sensor Fusion Cuboids Tracking
① The same object is tracked with the same ID, and its leaving state is labeled;
② Time-aligned 2D images can be provided; outputs are point clouds only.
Advantages of Our 3D Point Cloud Annotation Service:
· Support for 2D/3D sensor fusion and multiple cameras
· Support for scalable data annotation
· AI-powered sensor fusion tool: labeling at 2X-5X speed
· Easy-to-use QC tool: real-time revision and synchronous feedback
The collaboration of the human workforce and AI algorithms ensures a price 50% lower than the conventional market rate.
If you need data labeling and collection services, please have a look at bytebridge.io, where clear pricing is available.
If you would like to have a look at the 3D point cloud live demo, please feel free to contact us: firstname.lastname@example.org