Annotation Case Study: Lane Detection in Autonomous Driving

From 2D cameras to 3D Lidar modeling, and from passenger cars to trucks, autonomous-driving AI scenarios are being deployed at high speed.

High-quality labeled data that machine learning models can learn from helps assisted-driving technology better perceive the actual road, the vehicle's position, and obstacle information, detect driver fatigue, and warn of road risks in real time, enabling goals such as autonomous driving and automatic parking.

Now let’s have a look at a lane detection labeling case.

Data Annotation for Lanes

1 Data Instructions

For the 2D images captured by the ego vehicle:

For single-frame images (non-consecutive frames), a tracking ID is not required.

For consecutive frames, each lane line should carry a unique tracking ID.
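
As a rough illustration (the class and field names below are our own assumptions, not ByteBridge's actual schema), a per-frame lane annotation with tracking IDs might be structured like this:

```python
# A minimal sketch of per-frame lane annotations with tracking IDs.
# Class and field names are illustrative assumptions, not ByteBridge's schema.
from dataclasses import dataclass, field
from typing import List, Tuple


@dataclass
class LaneLine:
    track_id: int                      # unique per lane; reused across consecutive frames
    line_type: str                     # e.g. "dashed", "solid", "curb"
    points: List[Tuple[float, float]]  # polyline vertices in image coordinates (x, y)


@dataclass
class FrameAnnotation:
    frame_index: int
    lanes: List[LaneLine] = field(default_factory=list)


# For a single, non-consecutive image, track_id could simply be set to -1;
# in a clip of consecutive frames, the same physical lane keeps the same track_id.
frame_0 = FrameAnnotation(frame_index=0, lanes=[
    LaneLine(track_id=1, line_type="solid", points=[(640.0, 720.0), (655.0, 400.0)]),
    LaneLine(track_id=2, line_type="dashed", points=[(900.0, 720.0), (700.0, 400.0)]),
])
```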

2 Annotation Content

General attributes:

(1) Light

Daytime: street lights and most car lights are off, and the sky is pale blue or grey-white.

Night: street lights and most car lights are on, and the sky is dark blue or purple-black.

Unknown: the time of day cannot be determined from the image.

(2) Roads

Closed roads: highways, urban viaducts/loops, and other closed roads, including their ramps; roads where pedestrians and non-motor vehicles are not allowed.

Intercity roads: non-closed roads between cities, with intersections and areas on both sides of the road for non-motor vehicles.

Urban roads: roads within a city, generally clearly divided into motor lanes, non-motor lanes, sidewalks, etc.; more urban architecture is visible along the road.

Roads without lane lines: there are no clear lane lines on the road, but there are areas for vehicles to pass.

Tunnel: the image is taken inside a tunnel, or the vehicle is about to enter a tunnel (the tunnel entrance occupies more than 1/3 of the image height).

Interior areas: such as closed parks, stations, gas stations, and garages.

Unknown: the road scenario cannot be determined from the image.

(3) Weather

Sunny days: blue sky and white clouds can be seen; vehicles, trees, and buildings cast clear shadows on the road.

Overcast days: no rain/snow/fog and no direct sunlight; the shadows under vehicles are blurred or barely visible on the road.

Rainy days: it is raining, or there is visible water on the road after rain.

Snowy days: it is snowing, or there is visible snow on the road after a snowfall.

Foggy days: fog can be seen, or targets in the midground are blurred.

Unknown: the weather cannot be determined from the image.

(4) Annotation Categories:

Dashed lines: including double dashed lines, each of which is labeled separately.

Solid lines: including double solid lines, each of which is labeled separately.

Special dashed lines: dashed lines in special shapes, such as variable guide lane lines and deceleration lines.

Special solid lines: solid lines in special shapes, such as variable guide lane lines and deceleration lines.

Curb: the outermost edge of the drivable area.
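
Taken together, the general attributes and line categories above form a small classification schema. A minimal sketch of how it could be encoded (the enum and field names are our own assumptions, not ByteBridge's format):

```python
# Illustrative encoding of the general attributes and line categories above.
# Enum names and values are assumptions for this sketch, not ByteBridge's format.
from dataclasses import dataclass
from enum import Enum


class Light(Enum):
    DAYTIME = "daytime"
    NIGHT = "night"
    UNKNOWN = "unknown"


class RoadType(Enum):
    CLOSED = "closed_road"
    INTERCITY = "intercity_road"
    URBAN = "urban_road"
    NO_LANE_LINES = "road_without_lane_lines"
    TUNNEL = "tunnel"
    INTERIOR = "interior_area"
    UNKNOWN = "unknown"


class Weather(Enum):
    SUNNY = "sunny"
    OVERCAST = "overcast"
    RAINY = "rainy"
    SNOWY = "snowy"
    FOGGY = "foggy"
    UNKNOWN = "unknown"


class LineCategory(Enum):
    DASHED = "dashed"
    SOLID = "solid"
    SPECIAL_DASHED = "special_dashed"
    SPECIAL_SOLID = "special_solid"
    CURB = "curb"


@dataclass
class ImageAttributes:
    light: Light
    road: RoadType
    weather: Weather
```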

3 Labeling Requirements

  1. Lane positioning

(1) Single line: label along the center of the lane line;

(2) Double lines: label along the center of the line closer to the ego vehicle's lane;

(3) Lines in special shapes: label along the central line or the main line whose shape is similar to a single line;

(4) Curb: label along the boundary of the drivable area;

(5) Polyline: for straight lines, only 2 points are needed; for curves, place points more densely where the curvature is large and more sparsely where it is small.
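
One way to realize this adaptive point density is to keep only the points where the line actually bends. The helper below is an illustrative sketch under our own assumptions (the 2-degree angle threshold and the coordinates are not from the labeling spec):

```python
# Sketch: represent a lane line as a polyline and drop intermediate points
# where the line barely bends (small curvature). Threshold is an assumption.
import math
from typing import List, Tuple

Point = Tuple[float, float]


def simplify_polyline(points: List[Point], angle_thresh_deg: float = 2.0) -> List[Point]:
    """Keep endpoints plus every vertex where the direction changes noticeably."""
    if len(points) <= 2:
        return list(points)
    kept = [points[0]]
    for prev, cur, nxt in zip(points, points[1:], points[2:]):
        a1 = math.atan2(cur[1] - prev[1], cur[0] - prev[0])
        a2 = math.atan2(nxt[1] - cur[1], nxt[0] - cur[0])
        turn = abs(math.degrees(a2 - a1))
        turn = min(turn, 360 - turn)         # wrap the turn angle into [0, 180]
        if turn >= angle_thresh_deg:         # keep points where the line actually bends
            kept.append(cur)
    kept.append(points[-1])
    return kept


# A straight line collapses to its two endpoints; a curve keeps its bend points.
straight = [(100.0, 720.0), (100.0, 600.0), (100.0, 480.0), (100.0, 360.0)]
print(simplify_polyline(straight))  # [(100.0, 720.0), (100.0, 360.0)]
```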

2. Endpoint

(1) The endpoint is the point at which the lane line converges in the distance.

(2) The labeled far end of a converging lane line should fall within a circle of 8-pixel diameter around the convergence point (see the check after this list).

(3) For a curve, the endpoint is the convergence point of the lane at the far end, not a point obtained by extending the nearby part of the lane straight ahead.

(4) On an uphill slope, the lane line is only visible up to the crest, so the annotation only needs to extend to the visible part.
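
The 8-pixel rule above can be read as a simple distance check. The function below is a sketch of that check under our own naming, not part of any annotation tool's API:

```python
# Sketch: is a labeled endpoint within an 8-pixel-diameter circle around the
# estimated convergence (vanishing) point? Names and inputs are illustrative.
import math
from typing import Tuple

Point = Tuple[float, float]


def endpoint_within_tolerance(labeled_end: Point,
                              convergence_point: Point,
                              diameter_px: float = 8.0) -> bool:
    dx = labeled_end[0] - convergence_point[0]
    dy = labeled_end[1] - convergence_point[1]
    return math.hypot(dx, dy) <= diameter_px / 2.0


print(endpoint_within_tolerance((643.0, 312.0), (640.0, 310.0)))  # True: ~3.6 px away
```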

3. Lane line specification and integrity

(1) From the endpoint, the line needs to extend to the lower edge or side edge of the image;

(2) When the endpoint is occluded by a target object, estimate the invisible point so that the lower edge or side edge is not left vacant;

(3) If lines are disconnected by intersections, make a reasonable prediction and connect them into one line where possible;

(4) A T-shaped junction can be formed by different lane lines, but a crossover is not allowed (the same lane line cannot intersect itself to form a loop);

(5) When the middle of a lane line is occluded by other targets (mainly vehicles), estimate and label the invisible part (see the sketch after this list);

(6) Damaged or blurred lane lines should be labeled with a reasonable prediction;

(7) Road disconnection: if the road is disconnected in spatial structure, it should be divided into multiple lines.
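
For the occluded-middle case in item (5), a simple way to sketch the invisible part is to interpolate between the last visible point before the occluder and the first visible point after it. The helper below is purely illustrative; in practice, human annotators estimate the hidden points:

```python
# Sketch: fill an occluded gap in a lane polyline by linear interpolation
# between the visible points on either side of the occluder. Illustrative only.
from typing import List, Tuple

Point = Tuple[float, float]


def fill_gap(before: Point, after: Point, n_points: int = 3) -> List[Point]:
    """Return n_points evenly spaced points strictly between `before` and `after`."""
    step_x = (after[0] - before[0]) / (n_points + 1)
    step_y = (after[1] - before[1]) / (n_points + 1)
    return [(before[0] + step_x * i, before[1] + step_y * i) for i in range(1, n_points + 1)]


# Example: a vehicle hides the lane line between y=600 and y=400.
print(fill_gap((620.0, 600.0), (660.0, 400.0)))
# [(630.0, 550.0), (640.0, 500.0), (650.0, 450.0)]
```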

4 Judgment of Lane Lines

  1. Only label the lane lines of the drivable area;
  2. If there are old and new lane lines and they can be clearly distinguished, only label the new ones; if not, label all of them.
  3. The curb can be considered the boundary of the drivable area; that is, all drivable areas should be enclosed by lane lines. If there is enough space for one vehicle between the outermost lane line and the edge of the drivable area, the annotation should extend to the curb.
  4. If the outermost lane line is close to the edge of the drivable area, only label the outermost lane line.
  5. Hybrid lines: if there are multiple types, the type depends on the closest line.
  6. Slope & fork road description:

At the top of a slope, the lane line only extends to the furthest visible point and does not need to converge to one point;

For a fork road, the labeling stops at the vanishing point of the lane line;

On an uphill slope, the lane line can continue to the crest and then stop.

You Configure and ByteBridge Annotates MANUALLY

Only Three Steps to go

  • Log in with your email
  • Upload the sample
  • Tell us what to label: specify the minimum labeling size and the precision you need (see the example below).

You can send the requirement to us, and we will handle the configuration job.
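
For example, a labeling requirement could be summarized along these lines (the field names and values are purely illustrative, not ByteBridge's configuration format):

```python
# Illustrative summary of a lane-labeling requirement; every field name and
# value here is an assumption, not ByteBridge's actual configuration format.
requirement = {
    "task": "lane_detection_2d",
    "annotation_type": "polyline",
    "categories": ["dashed", "solid", "special_dashed", "special_solid", "curb"],
    "general_attributes": ["light", "road", "weather"],
    "min_labeling_size_px": 10,      # smallest lane segment worth labeling
    "endpoint_tolerance_px": 8,      # diameter of the convergence circle
    "tracking_ids": "consecutive_frames_only",
    "output_format": "json",
}
```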

Then it’s our turn.

A demo and quote will be ready in less than 24 hours on weekdays.

Output

ByteBridge Lane Annotation

JSON Output

ByteBridge Lane Annotation JSON
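
As a rough, hypothetical sketch of what such a JSON export could contain (the schema below is our assumption; the real ByteBridge output may use different field names and structure):

```python
# Hypothetical example of parsing a lane-annotation JSON export.
# The schema is assumed for illustration and may differ from the real output.
import json

export = """
{
  "image": "frame_000123.jpg",
  "attributes": {"light": "daytime", "road": "closed_road", "weather": "sunny"},
  "lanes": [
    {"track_id": 1, "type": "solid",
     "points": [[640.0, 720.0], [652.0, 500.0], [655.0, 400.0]]},
    {"track_id": 2, "type": "dashed",
     "points": [[900.0, 720.0], [780.0, 500.0], [700.0, 400.0]]}
  ]
}
"""

data = json.loads(export)
for lane in data["lanes"]:
    print(lane["track_id"], lane["type"], len(lane["points"]), "points")
```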

End

Outsource your data labeling tasks to ByteBridge and get high-quality ML training datasets cheaper and faster!

  • Free Trial Without Credit Card: you can get your sample result in a fast turnaround, check the output, and give feedback directly to our project manager.
  • 100% Human Validated
  • Transparent & Standard Pricing: clear pricing is available (labor cost included)

Why not have a try?
