Self-Driving Cars

For Safety Reasons, Self-Driving Cars Must Not Miss Detecting the Signs

The cutting edge of false-negative detection research

Makoto TAKAMATSU
Towards AI
Published in
6 min read · Dec 13, 2020


IEEE/RSJ IROS 2019

What level of autonomous vehicle (AV) would you be comfortable riding in? How afraid are you that a self-driving car will have an unexpected accident? Currently, safety concerns are the bottleneck for the widespread deployment of self-driving cars with Level 2 autonomy.

There is no guarantee that an object detection system will never make a mistake. No matter how much performance improves, recognition accuracy will inevitably degrade under various environmental factors. The authors argue that there must be a mechanism to detect that the detector has made a mistake while the vehicle is driving autonomously.

In this story, Did You Miss the Sign? A False Negative Alarm System for Traffic Sign Detectors, by the Australian Centre for Robotic Vision at the Queensland University of Technology, is presented. It was published as a technical paper at IEEE/RSJ IROS 2019. The paper proposes an approach to identify traffic signs missed by an object detector: the system raises an alarm when the detector misses a sign. By training a false negative detector (FND) alongside a single-shot multi-box traffic sign detector, the system can determine whether the sign detector has missed a sign. The paper's major contribution is that it is the first to propose a detector that focuses on false negatives in the object recognition task.

Let’s see how they achieved that. I will explain only the essence of FND, so if you are interested in the full details beyond this summary, please read the FND paper.

What does this paper say?

For safety reasons, object detectors need to work reliably in a variety of conditions. However, unknown environments, poor image quality due to bad weather, uneven lighting, and weak textures can all degrade the performance of object detection systems in AVs.

Given that the authors cannot guarantee that the object detection system will never make a mistake, they argue that there needs to be a mechanism to detect mistakes during deployment, i.e., a fault detection system that alerts the user when it determines that the performance of the deployed sign detector may be degrading. In other words, the authors’ research does not aim to improve the performance of sign detectors. Instead, the focus is on identifying when the system fails to detect a traffic sign at a specific location.

When should the user be notified by an alarm? When should control be handed over to the human? These questions can now be considered because we have an object detector and, at the same time, a system that checks the detector’s performance. This system can alert the user that the detector has most likely missed an object in a particular region of the input image. If the frequency of alarms keeps increasing, the autonomous system can ask the human user to intervene and take control.
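
As a concrete illustration of this handover logic, here is a minimal sketch (my own, not from the paper) of an alarm-rate monitor that requests human intervention when alarms become too frequent; the window size and rate threshold are arbitrary assumptions:

```python
from collections import deque

class AlarmMonitor:
    """Track FND alarms over a sliding window of recent frames."""

    def __init__(self, window_size: int = 100, max_alarm_rate: float = 0.2):
        # Illustrative defaults, not values from the paper.
        self.history = deque(maxlen=window_size)  # 1 = alarm raised, 0 = clear
        self.max_alarm_rate = max_alarm_rate

    def update(self, alarm_raised: bool) -> bool:
        """Record one frame's outcome; return True if a human should take over."""
        self.history.append(1 if alarm_raised else 0)
        alarm_rate = sum(self.history) / len(self.history)
        return alarm_rate > self.max_alarm_rate
```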

Fig. 1 In bad weather, both you and the machine may miss the signs.

Where is the novelty in this paper?

There are multiple approaches to fault detection in vision systems; one is to identify faults by examining the output of the vision system. Recently, it has been proposed to evaluate predictive variance with uncertainty measures, reducing the perceived ambiguity that is a barrier to using these systems in real environments [Grimmett et al., 2013; Triebel et al., 2016]. However, [Daftry et al., 2016] argue that predicting failures from raw sensor data is more effective than exploiting the uncertainty of model-based classifiers. This paper is unique in two ways: first, it does not use uncertainty measures; second, it is the first to propose false-negative detection as a failsafe mechanism for autonomous systems.

False Negative Detection (FND)

The approach proposed by the authors, False Negative Detection (FND), consists of two tasks (see the sketch after this list).

1. Collect features from specific areas of an input image where the traffic sign detector (TSD) has not detected any sign.

2. Evaluate those features to identify false-negative traffic signs in those areas.
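
To make the two tasks concrete, here is a minimal sketch of that loop in Python. The objects `tsd` and `fnd` and their methods are hypothetical stand-ins introduced for illustration, not the authors' implementation:

```python
import numpy as np

def false_negative_alarm(image: np.ndarray, tsd, fnd) -> list:
    """Return the regions where the FND believes the TSD missed a sign.

    `tsd` and `fnd` are assumed trained models exposing the methods used below.
    """
    detections = tsd.detect(image)  # TSD bounding boxes for this frame
    # Task 1: collect features from excited feature-map regions that no
    # TSD detection covers.
    candidates = tsd.excited_regions(exclude=detections)
    features = [tsd.pool_features(region) for region in candidates]
    # Task 2: classify each candidate as a missed sign (failure) or not (impostor).
    return [r for r, f in zip(candidates, features) if fnd.is_failure(f)]
```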

Fig. 2 illustrates a false negative detector (FND) that identifies faults in a traffic sign detector (TSD).

Source: https://arxiv.org/abs/1903.06391

The false negative detector (FND) relies on the observation that when a traffic sign detector (TSD) misses a sign, its internal feature map still contains excited regions, some of which correspond to the location of the missed sign (Fig. 2(a) and (b)). Using this property, a classifier can be trained to extract features from these regions and determine whether the TSD failed to detect a sign there. Note that there are two kinds of excited regions the TSD can leave behind: a region where the TSD actually failed to detect a sign is called a failure, while a region that is excited but has nothing to do with a traffic sign is called an impostor. After binarizing the feature map in Fig. 2(b), Fig. 2(c) identifies the bounding box (x_min, y_min, x_max, y_max) of each excited region R_i. Fig. 2(d) is the output of the FND showing the detected false-negative traffic signs.
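
The region-proposal part of this pipeline (Fig. 2(b) and (c)) can be sketched in a few lines of NumPy/SciPy. This is my own illustration of the idea, and the binarization threshold is an assumption rather than a value from the paper:

```python
import numpy as np
from scipy import ndimage

def excited_region_boxes(feature_map: np.ndarray, threshold: float = 0.5):
    """Binarize a feature map; return one (x_min, y_min, x_max, y_max) box per excited region."""
    binary = feature_map > threshold             # Fig. 2(b): binarized feature map
    labels, num_regions = ndimage.label(binary)  # group excited pixels into connected regions R_i
    boxes = []
    for i in range(1, num_regions + 1):
        ys, xs = np.nonzero(labels == i)
        boxes.append((xs.min(), ys.min(), xs.max(), ys.max()))  # Fig. 2(c)
    return boxes
```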

Source: https://arxiv.org/abs/1903.06391

In the training phase, the coordinates of each excited region are transformed from feature space to image space, and the intersection over union (IoU) with the ground-truth bounding boxes is measured. After extracting the feature vectors of failure and impostor regions, the failure detection network is trained to classify these two kinds of feature vectors, as in Fig. 3.
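
A minimal sketch of this labeling rule, assuming a standard IoU computation and an illustrative 0.5 cutoff (the paper's actual threshold may differ):

```python
def iou(box_a, box_b):
    """Intersection over union of two (x_min, y_min, x_max, y_max) boxes."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union else 0.0

def label_region(region_box, gt_sign_boxes, iou_threshold=0.5):
    """Label an excited region as 'failure' (missed sign) or 'impostor'."""
    best = max((iou(region_box, gt) for gt in gt_sign_boxes), default=0.0)
    return "failure" if best >= iou_threshold else "impostor"
```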

In the testing phase, the FND follows the feature extraction pipeline (Fig. 2) to extract features from the inner layers of the TSD. First, the FND receives the detection output generated by the TSD and identifies the regions of the input image where nothing was detected.

Results

Figure 8 shows qualitative results of the false-negative detection system: several cases in which the FND successfully identified false-negative traffic signs that the detector missed. These sample results were obtained from three different conditions (normal, simulated fog, and simulated rain) of the BTSD test data.

Source: https://arxiv.org/abs/1903.06391

Reference

[Daftry et al., 2016] S. Daftry, S. Zeng, J. A. Bagnell, and M. Hebert, "Introspective perception: Learning to predict failures in vision systems," in 2016 IEEE/RSJ IROS.

[Grimmett et al., 2013] H. Grimmett, R. Paul, R. Triebel, and I. Posner, "Knowing when we don't know: Introspective classification for mission-critical decision making," in 2013 IEEE International Conference on Robotics and Automation (ICRA).

[Triebel et al., 2016] R. Triebel, H. Grimmett, R. Paul, and I. Posner, "Driven learning for driving: How introspection improves semantic mapping," in Robotics Research, Springer.

[Rahman et al., 2019] Q. Rahman, N. Sünderhauf, and F. Dayoub, "Did You Miss the Sign? A False Negative Alarm System for Traffic Sign Detectors," in 2019 IEEE/RSJ IROS.

Past Paper Summary List

Deep Learning method

2020: [DCTNet]

Uncertainty Learning

2020: [DUL]

Anomaly Detection

2020: [FND]

One-Class Classification

2019: [DOC]

2020: [DROC]

Image Segmentation

2018: [UOLO]

2020: [ssCPCseg]

Image Clustering

2020: [DTC]
