Published in Nerd For Tech

2D Object Detection Labeling Case Study in the Self-Driving Industry

Would you rather see an unmanned delivery vehicle bring packages to your door, or try an unmanned car that takes you home?

It will take a long time before unmanned vehicles can operate in high-speed, complex environments. Large unmanned trucks on ordinary roads face a slightly simpler environment, but they still raise many safety concerns.

Generally speaking, in specific, relatively self-contained environments such as warehouses, cargo yards, parks, and ports, the drivable area is controllable, so low-speed unmanned vehicles have become one of the first commercially viable applications, a niche that many autonomous driving companies are vying for.

Now let’s look at a data annotation project for unmanned logistics vehicles.

Obstacle Labeling Instructions

Applicable to unmanned logistics vehicles in any open environment.

1. Task Introduction

Unmanned logistics vehicles operating in parks or other open environments rely on camera sensors to detect and identify obstacles in the scene. To train the obstacle detection model during R&D, the objects in the relevant scenarios must first be labeled. The target categories typically include vehicles, pedestrians, traffic signs (cones, no-stopping signs, etc.), and others.

2. The Basic Labeling Principles

In a 2D object detection annotation task, every object in the image that belongs to one of the listed categories must be marked with a 2D bounding box that determines its location. For obstacles that are occluded or truncated, labelers judge by the visible proportion and extrapolate the invisible part. For pedestrians, labelers also tag the posture (standing, sitting, or squatting/bending).
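The principles above can be sketched as a simple annotation record. This is a minimal, hypothetical schema for illustration only; the field names (`category`, `x_min`, `posture`, etc.) are assumptions, not the project's actual format.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class Box2D:
    """One 2D bounding-box annotation (hypothetical schema)."""
    category: str                  # e.g. "pedestrian", "truck", "cone"
    x_min: float                   # left edge, in pixels
    y_min: float                   # top edge, in pixels
    x_max: float                   # right edge, in pixels
    y_max: float                   # bottom edge, in pixels
    posture: Optional[str] = None  # pedestrians only: "standing" | "sitting" | "squatting_or_bending"

    def area(self) -> float:
        """Box area in square pixels; clamps degenerate boxes to zero."""
        return max(0.0, self.x_max - self.x_min) * max(0.0, self.y_max - self.y_min)


# A standing pedestrian marked with a tight box:
box = Box2D("pedestrian", 120.0, 80.0, 180.0, 260.0, posture="standing")
print(box.area())  # 10800.0
```

For occluded or truncated obstacles, the box would cover the labeler's estimate of the full object extent, not just the visible pixels.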

3. Specification of Obstacle Categories

For 2D obstacle labeling tasks, the main obstacle categories to be labeled are: pedestrians, cyclists, motorcycles, bicycles, tricycles, small buses, large and medium-sized buses, trucks, intelligent logistics vehicles, engineering vehicles, trolleys, cones, relatively static obstacles, dynamic obstacles, and lanes.

Note 1: The definition of a lane: the driving area of motor vehicles. Crosswalks and bicycle lanes that overlap the motor-vehicle driving area, as well as vehicle turning belts, all count as lanes. Although open roads are theoretically accessible to vehicles, greenbelts and sidewalks are not lanes.

Note 2: Obstacles seen through glass, reflected obstacles, and obstacles depicted on billboards must be labeled as omission. Examples: a person seen through a car window (whether or not the window is rolled down), a person in a mirror, and an image of a person on a billboard.

Note 3: Some vehicles tow loads at the rear. In this case, labelers must draw two boxes: one for the vehicle and one for the towed object (setting the towed object's attribute category to "unknown immovable"), and the two boxes must share the same ID.
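The shared-ID rule in Note 3 might look like this in an exported annotation list. The record layout is an assumption for illustration; only the rule itself (two boxes, one ID) comes from the instruction above.

```python
# Hypothetical export records: a truck and its towed load share ID 7,
# so downstream code can treat them as one physical unit.
annotations = [
    {"id": 7, "category": "truck", "bbox": [100, 120, 300, 240]},
    {"id": 7, "category": "unknown_immovable", "bbox": [300, 140, 380, 230]},  # towed object
]

# Grouping by ID reunites the vehicle with its towed load:
groups = {}
for ann in annotations:
    groups.setdefault(ann["id"], []).append(ann["category"])

print(groups)  # {7: ['truck', 'unknown_immovable']}
```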

4. Specification of the Labeling Attributes

2D box labeling rules over the attributes of occlusion and truncation:

The definition of occlusion: the object is inside the image but partially covered by other objects.

The definition of truncation: part of the object extends beyond the image boundary and is therefore not visible.

The definition of imagination: the labeler estimates the full extent of the obstacle and labels the complete box, including the invisible part.

An obstacle's visible proportion determines its occlusion and truncation attributes, according to the thresholds defined in the project specification.
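Since the actual threshold table is not reproduced in this post, here is a sketch of how a visible-proportion rule might be encoded. The cutoff values (0.8, 0.5, 0.2) and attribute names are hypothetical placeholders, not ByteBridge's real specification.

```python
def occlusion_attribute(visible_ratio: float) -> str:
    """Map a visible proportion (0.0-1.0) to an occlusion attribute.

    Thresholds are illustrative assumptions; a real project would take
    them from its labeling specification.
    """
    if visible_ratio >= 0.8:
        return "fully_visible"
    if visible_ratio >= 0.5:
        return "partially_occluded"
    if visible_ratio >= 0.2:
        return "heavily_occluded"
    return "omission"          # too little visible to label reliably


print(occlusion_attribute(0.9))  # fully_visible
print(occlusion_attribute(0.3))  # heavily_occluded
```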

The Attributes of Pedestrian Postures:

For pedestrians, the posture attribute has three values: standing, sitting, and squatting or bending.

Note 1: If the bike is occluded, the cyclist is labeled as a pedestrian.

Note 2: Pedestrians with only the head visible due to occlusion or truncation are labeled as omission; those visible from the chest up are labeled as pedestrians.

You Configure, and ByteBridge Annotates Manually

Only Three Steps to go

  • Log in with your email
  • Upload the sample
  • Tell us what to label: specify the minimum labeling size and the precision you need.

You can send the requirement to us, and we will handle the configuration job.

Then it’s our turn.

A demo and quote will be ready within 24 hours on weekdays.


ByteBridge 2D Object Detection Annotation

JSON Output

ByteBridge 2D Object Detection Annotation JSON
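The post shows the JSON output only as an image, so here is a sketch of what such an export might look like and how a client could consume it. The field names (`image`, `objects`, `bbox`, etc.) are illustrative assumptions, not ByteBridge's actual schema.

```python
import json

# Hypothetical annotation export for one image frame.
sample = """
{
  "image": "frame_000123.jpg",
  "objects": [
    {"id": 1, "category": "pedestrian", "bbox": [120, 80, 180, 260],
     "posture": "standing", "occlusion": "fully_visible"},
    {"id": 2, "category": "truck", "bbox": [300, 150, 560, 330],
     "occlusion": "partially_occluded"}
  ]
}
"""

data = json.loads(sample)
for obj in data["objects"]:
    # bbox as [x_min, y_min, x_max, y_max], in pixels
    x1, y1, x2, y2 = obj["bbox"]
    print(obj["category"], (x2 - x1) * (y2 - y1))
```

A consumer would typically convert such records into whatever format its training framework expects (e.g. COCO-style `[x, y, width, height]` boxes).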


Outsource your data labeling tasks to ByteBridge, and you can get high-quality ML training datasets cheaper and faster!

  • Free Trial Without Credit Card: you can get your sample result in a fast turnaround, check the output, and give feedback directly to our project manager.
  • 100% Human Validated
  • Transparent & Standard Pricing: clear pricing is available (labor cost included)

Why not give it a try?


