# What is Deep Neural Network Verification and Why is it Important?

# What is Deep Neural Network (DNN) verification?

DNN verification is the process of checking whether, for every possible input, the neural network's output satisfies a desired property. Simply put, it checks whether a specific relation of interest between the network's inputs and outputs always holds.

Using the notation of the above illustration, let X be the input set (the set of all possible inputs the neural net can receive), Y the output set (the set of desired outputs), and f the function representing the composition of all the operations performed by the hidden layers.

`∀ x ∈ X : f(x) ∈ Y`

The verification process then amounts to proving that this assertion holds.
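To make this concrete, here is a minimal sketch of one common verification technique, interval bound propagation, applied to a toy ReLU network. The weights and the property ("the output never exceeds a threshold for any input in a box") are invented for illustration; real verifiers use far more precise methods, but the idea of proving a claim over *every* input in a set is the same.

```python
import numpy as np

# A toy 1-hidden-layer ReLU network (weights invented for illustration).
W1 = np.array([[1.0, -1.0], [0.5, 0.5]])
b1 = np.array([0.0, -0.25])
W2 = np.array([[1.0, 1.0]])
b2 = np.array([0.0])

def interval_affine(lo, hi, W, b):
    """Soundly propagate the box [lo, hi] through x -> W @ x + b."""
    center = (lo + hi) / 2
    radius = (hi - lo) / 2
    c = W @ center + b
    r = np.abs(W) @ radius
    return c - r, c + r

def verify_output_bound(x_lo, x_hi, y_max):
    """Check: for ALL x with x_lo <= x <= x_hi, is f(x) <= y_max?

    Returns True only if the property provably holds for every input
    in the box (the bounds are sound but may be loose)."""
    lo, hi = interval_affine(x_lo, x_hi, W1, b1)
    lo, hi = np.maximum(lo, 0), np.maximum(hi, 0)  # ReLU is monotone
    lo, hi = interval_affine(lo, hi, W2, b2)
    return bool(hi[0] <= y_max)
```

A single call like `verify_output_bound(np.array([0.0, 0.0]), np.array([1.0, 1.0]), 2.0)` covers the infinitely many inputs in the unit square at once, which is exactly what testing individual inputs cannot do.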

# Why is it important?

As the use of neural networks becomes more and more prevalent, they are increasingly being deployed in safety-critical fields (e.g. autonomous driving) as well. This growing adoption has brought on two major needs that can only be satisfied by a formal DNN verification technique.

**The Need for 100% Assurance**

As the example above suggests, when DNNs are applied to real-world problems, the input set — the set of inputs the DNN can receive — is effectively infinite. (Even the most basic computer vision task, such as classifying cats, means the input can be a picture of practically anything.)

Because of the size of the input set, it is impossible to test every input. No matter how extensive the testing, it can only provide near-100% confidence that the DNN will act as expected. In safety-critical fields, where human lives may be at risk, the 100% assurance achievable through verification is far more desirable, and that is why applying a verification process is crucial.

**The Need to Check Robustness Against Adversarial Attacks**

Another problem is that DNNs have been shown to be vulnerable to adversarial attacks. This vulnerability poses a dilemma not only for safety-critical fields but for all fields dealing with DNNs. As the example below shows, even the tiniest of perturbations can produce a completely wrong result.

Likewise, checking a DNN's robustness against adversarial attacks with 100% certainty through testing is impossible, since there are infinitely many small perturbations to check. With a verification process, however, we can prove with certainty whether any perturbation within a bounded magnitude causes an error — which is far better than nothing.
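The idea of certifying "no perturbation within a bounded magnitude flips the answer" can be sketched with the same interval reasoning. Below is a toy local-robustness check for a 2-class linear classifier under an L∞ perturbation bound eps; the weights are invented for illustration, and a real verifier would handle deep nonlinear networks.

```python
import numpy as np

# Toy 2-class linear classifier (weights invented for illustration).
W = np.array([[2.0, 1.0], [0.5, -1.0]])
b = np.array([0.0, 0.5])

def certify_robustness(x, eps):
    """Check: does argmax(W @ x' + b) stay the same for EVERY x'
    with ||x' - x||_inf <= eps?

    Returns True only if robustness provably holds for all such x'."""
    scores = W @ x + b
    slack = np.abs(W) @ np.full_like(x, eps)  # worst-case score change
    lo, hi = scores - slack, scores + slack
    label = int(np.argmax(scores))
    # Robust iff the worst-case score of the predicted class still beats
    # the best-case score of every other class.
    others = np.delete(hi, label)
    return bool(lo[label] > others.max())
```

A return value of True certifies robustness over the entire perturbation ball; a return value of False means only that this (loose) method could not prove it, not necessarily that an attack exists.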

With the rise of DNNs in safety-critical fields, the expectation that they will spread to ever more domains, and the lurking danger of adversarial attacks, DNN verification is an essential and important challenge to address.


(Any comments or constructive criticisms are always welcome!)