The #paperoftheweek 7 is “DeepFault: Fault Localization for Deep Neural Networks”.

This research applies software fault localization techniques to neural networks. Software fault localization is the process of isolating the statements, functions, or classes most likely to cause a program to encounter errors; the equivalent in neural networks is identifying the hidden neurons most likely to cause the network to produce an incorrect output. The authors propose several metrics that quantify how “suspicious” a given neuron is, based on how often it is active when the network produces an incorrect output. By perturbing images that were previously classified correctly so that they trigger the most suspicious neurons, they caused those images to be classified incorrectly. This approach offers a promising way to identify poorly trained sections of a network so they can be retrained in a targeted fashion, improving the network’s overall accuracy.
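To make the idea concrete, here is a minimal sketch of spectrum-based suspiciousness scoring applied to neurons, using the Tarantula formula (one of the classic fault-localization measures; the paper evaluates several). The function names, the activation threshold, and the plain-list data layout are illustrative assumptions, not the authors' implementation:

```python
# Sketch: Tarantula-style suspiciousness for hidden neurons.
# For each neuron we count how often it was active/inactive on
# incorrectly vs. correctly classified inputs (its "hit spectrum"),
# then score it: neurons mostly active on failures rank highest.

def tarantula(a_cf, a_nf, a_cs, a_ns):
    """a_cf/a_nf: active/inactive on failing inputs;
    a_cs/a_ns: active/inactive on passing inputs."""
    fail_rate = a_cf / (a_cf + a_nf) if (a_cf + a_nf) else 0.0
    pass_rate = a_cs / (a_cs + a_ns) if (a_cs + a_ns) else 0.0
    denom = fail_rate + pass_rate
    return fail_rate / denom if denom else 0.0

def rank_neurons(activations, correct, threshold=0.0):
    """activations: one activation vector per input;
    correct: per-input booleans (True = classified correctly).
    Returns (neuron_index, score) pairs, most suspicious first."""
    num_neurons = len(activations[0])
    scores = []
    for j in range(num_neurons):
        a_cf = a_nf = a_cs = a_ns = 0
        for acts, ok in zip(activations, correct):
            active = acts[j] > threshold
            if ok:
                a_cs, a_ns = a_cs + active, a_ns + (not active)
            else:
                a_cf, a_nf = a_cf + active, a_nf + (not active)
        scores.append((j, tarantula(a_cf, a_nf, a_cs, a_ns)))
    return sorted(scores, key=lambda t: t[1], reverse=True)
```

For example, a neuron that fires on every misclassified input but never on correctly classified ones gets the maximum score of 1.0, while a neuron that only fires on correct classifications scores 0.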

Abstract:

“Deep Neural Networks (DNNs) are increasingly deployed in safety-critical applications including autonomous vehicles and medical diagnostics. To reduce the residual risk for unexpected DNN behaviour and provide evidence for their trustworthy operation, DNNs should be thoroughly tested. The DeepFault whitebox DNN testing approach presented in our paper addresses this challenge by employing suspiciousness measures inspired by fault localization to establish the hit spectrum of neurons and identify suspicious neurons whose weights have not been calibrated correctly and thus are considered responsible for inadequate DNN performance. DeepFault also uses a suspiciousness-guided algorithm to synthesize new inputs, from correctly classified inputs, that increase the activation values of suspicious neurons. Our empirical evaluation on several DNN instances trained on MNIST and CIFAR-10 datasets shows that DeepFault is effective in identifying suspicious neurons. Also, the inputs synthesized by DeepFault closely resemble the original inputs, exercise the identified suspicious neurons and are highly adversarial.”
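The suspiciousness-guided input synthesis mentioned in the abstract can be pictured with a toy sketch. This is not the authors' algorithm: it assumes a single hypothetical linear layer `W` (so the gradient of a neuron's pre-activation with respect to the input is just its weight row) and takes FGSM-style signed steps that raise the chosen neuron's activation while keeping the perturbed image close to the original:

```python
import numpy as np

def synthesize(x, W, j, step=0.01, iters=50, budget=0.3):
    """Nudge input x so neuron j of the hypothetical layer W
    (activation = W[j] @ x) fires more strongly, staying within
    an L-infinity budget of the original and in valid pixel range."""
    x0 = x.copy()
    x = x.copy()
    for _ in range(iters):
        grad = W[j]                               # d(W[j]@x)/dx for a linear layer
        x = x + step * np.sign(grad)              # signed ascent step
        x = np.clip(x, x0 - budget, x0 + budget)  # stay near the original input
        x = np.clip(x, 0.0, 1.0)                  # keep pixels valid
    return x
```

In a real network the gradient would come from backpropagation through the trained model rather than a weight row, but the structure is the same: small, bounded steps that increase suspicious-neuron activations so the result still closely resembles the original input.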

You can read the full article here.

About the author:

Jonathan Kleinfeld, Data and Software Engineering Intern at Brighter AI.