Analysing Machine Learning Models with Imandra

Lewis Hammond
Imandra Inc.
Sep 26, 2019

The vast majority of work within formal methods (the area of computer science that reasons about hardware and software as mathematical objects in order to prove they have certain properties) has involved analysing models that are fully specified by the user. More and more, however, critical parts of algorithmic pipelines are constituted by models that are instead learnt from data using artificial intelligence (AI). The task of analysing these kinds of models presents fresh challenges for the formal methods community and has seen exciting progress in recent years. While scalability is still an important, open research problem, with state-of-the-art machine learning (ML) models often having millions of parameters, in this post we give an introduction to the paradigm by analysing two simple yet powerful learnt models using Imandra, a cloud-native automated reasoning engine bringing formal methods to the masses!

Introduction

Verifying properties of learnt models is a difficult task, but is becoming increasingly important in order to make sure that the AI systems using such models are safe, robust, and explainable. ML, and in particular deep learning, is a powerful technique that has seen unprecedented successes recently in a wide variety of tasks, often producing super-human performance. However, understanding the resulting models and forming guarantees of performance with respect to certain input or output conditions is often difficult. Imandra is a general-purpose automated reasoning engine offering a suite of tools that can be used to provide exactly these kinds of insights and guarantees for a wide range of algorithms.

Supervised learning is a subfield of ML that involves training a model using data made up of input-output pairs so that given a new, unseen input the model is able to produce an appropriate output prediction. In other words, the aim is to learn a model that approximates a function mapping inputs to outputs. To interact with Imandra we use the Imandra Modelling Language (IML), which is based on an augmented subset of the functional programming language OCaml, and can therefore reason about such functions in a particularly natural way.
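
To make this concrete, here is a tiny, purely illustrative IML function together with a property we can ask Imandra to prove about it (this toy example is ours and is not part of the models analysed below); the same verify command is used on the learnt models later in the post:

(* A toy 'model': just a function from inputs to outputs. *)
let double x = 2 * x

(* Ask Imandra to prove a property of double for all (arbitrary-precision) integers. *)
verify (fun x -> x >= 0 ==> double x >= x)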

To illustrate this approach we’ll be looking at examples from two of the most common tasks within supervised learning (and ML more generally): classification and regression. In particular, we’ll show how two of the most common kinds of model used to perform these tasks, random forests and neural networks, can be analysed using Imandra. For each task we’ll use a real-world benchmark dataset from the UCI Machine Learning Repository and create our models using Python with some standard ML libraries. You can find all of our code for both learning and analysing our models on GitHub, and there’s also a corresponding cloud-based Imandra Jupyter Notebook that you can try out for yourself.

Classification

In a classification task we want to learn to predict the label of a datapoint based on previous labelled data. In the classic Wisconsin Breast Cancer (Diagnostic) dataset the aim is to decide whether a cancer is benign or malignant based on the features of a sample of cell nuclei. In the dataset we have the following variables:

1. ID number
2. Diagnosis (malignant or benign)
3–32. Real values for the mean, standard error, and the ‘worst’ value of each of the following cell nucleus features:
a) Radius
b) Texture
c) Perimeter
d) Area
e) Smoothness
f) Compactness
g) Concavity
h) Concave points
i) Symmetry
j) Fractal dimension

As is standard practice, we pre-process the data before learning. First we standardise each variable to have zero mean and unit variance, then remove all but one variable from each group of highly correlated variables, along with any that have low mutual information with respect to the target variable. The data is split into training (80%) and test (20%) sets and we use Scikit-Learn to learn a random forest of 3 decision trees of maximum depth 3. As this is a relatively straightforward problem, even this simple model achieves a fairly high accuracy. Using a short Python script, each tree is then converted to IML so that it can be reasoned about using Imandra.

An IML version of the random forest classifier
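
The converted code is only sketched here: each tree compiles to a nest of if ... then ... else statements over the scaled features, returning the class counts at the leaf it reaches, and the forest sums these counts and takes a majority vote. The thresholds, counts, and tree shapes below are illustrative placeholders rather than the values actually learnt from our data.

(* Sketch of the random forest: each tree returns (benign, malignant)
   leaf counts; the forest sums them and takes a majority vote.
   All thresholds and counts are illustrative placeholders. *)

let tree_1 (f_0, _f_1, f_2, _f_3, _f_4, _f_5, f_6) =
  if f_6 <=. 0.1 then
    if f_2 <=. 0.7 then (280.0, 3.0) else (9.0, 14.0)
  else
    if f_0 <=. 0.5 then (15.0, 21.0) else (2.0, 111.0)

let tree_2 (_f_0, f_1, _f_2, f_3, _f_4, f_5, _f_6) =
  if f_3 <=. 0.2 then
    if f_1 <=. 0.4 then (260.0, 8.0) else (20.0, 12.0)
  else
    if f_5 <=. 0.9 then (10.0, 25.0) else (5.0, 115.0)

let tree_3 (f_0, _f_1, _f_2, _f_3, f_4, _f_5, f_6) =
  if f_4 <=. 0.3 then
    if f_0 <=. 0.6 then (270.0, 6.0) else (12.0, 18.0)
  else
    if f_6 <=. 0.8 then (8.0, 30.0) else (3.0, 108.0)

(* Sum the per-tree counts and return the majority class. *)
let rf x =
  let (b_1, m_1) = tree_1 x in
  let (b_2, m_2) = tree_2 x in
  let (b_3, m_3) = tree_3 x in
  if b_1 +. b_2 +. b_3 >=. m_1 +. m_2 +. m_3 then "benign" else "malignant"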

We can also create a custom input type in Imandra for our model, so that we can keep track of the different features of our data. Remember, however, that we processed our data before learning. To make things easier, we’ll add a function that applies these transformations to each input variable, using the multiplicative and additive scaling values extracted during our pre-processing stage. After that we can define a full end-to-end model combining these pre-processing steps and the random forest.

The input type and pre-processing functions used to define our full model
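
Again as a sketch, the input type, pre-processing function, and end-to-end model look roughly as follows; the scaling constants stand in for the feature means and standard deviations recorded during pre-processing and are illustrative only.

(* Sketch of the input type and the end-to-end model. The scaling
   constants are illustrative placeholders for the feature means and
   standard deviations recorded during pre-processing in Python. *)

type rf_input = {
  radius_mean          : real;
  compactness_mean     : real;
  concavity_mean       : real;
  radius_se            : real;
  compactness_worst    : real;
  concavity_worst      : real;
  concave_points_worst : real;
}

(* Standardise a raw feature value to zero mean and unit variance. *)
let scale mean std x = (x -. mean) /. std

let process_rf_input (x : rf_input) =
  (scale 14.13 3.52 x.radius_mean,
   scale 0.10  0.05 x.compactness_mean,
   scale 0.09  0.08 x.concavity_mean,
   scale 0.41  0.28 x.radius_se,
   scale 0.25  0.16 x.compactness_worst,
   scale 0.27  0.21 x.concavity_worst,
   scale 0.11  0.07 x.concave_points_worst)

(* The full model: pre-processing followed by the random forest. *)
let rf_model (x : rf_input) = rf (process_rf_input x)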

As our IML model is fully executable we can query it, find counterexamples, prove properties, apply logical side-conditions, decompose its behaviour into regions, and more. As a quick sanity check to make sure everything is working, we can run a datum (which in this case happens to have the malignant label) from our dataset through the model as follows:

# rf_model {radius_mean = 17.99; 
compactness_mean = 0.2776;
concavity_mean = 0.3001;
radius_se = 1.095;
compactness_worst = 0.6656;
concavity_worst = 0.7119;
concave_points_worst = 0.7119};;
- : string = "malignant"

Looking good! We can also use Imandra to generate instances and counterexamples for us, potentially given logical side conditions that can be specified as functions outputting a Boolean value. If we do this naively then Imandra returns instances that, while correct, might be wildly different from any reasonable values we’d expect to see in real life. In general we often only care about the performance of our models when some reasonable bounds are placed on the input (for example, the mean radius can’t be negative, and if the values for this variable in our dataset range between 6.98 and 28.11 we wouldn’t really expect any value greater than, say, 35).

Using the description of each variable in the dataset information above we can form a condition describing valid and reasonable inputs to our model. In ML as a whole, we are typically only interested in the performance and quality of a model over some particular distribution of data, about which we often have prior beliefs.

Our condition for specifying valid inputs to the random forest
A custom pretty printer for generating decimal notation instances
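
The validity condition can be sketched as below, with bounds that are illustrative, guided by the ranges in the dataset description rather than copied from our actual code; the custom pretty printer (pp_approx, installed further down) simply renders Imandra’s exact rational reals as approximate decimals.

(* Sketch of the validity condition: every feature is non-negative and
   bounded above by a generous limit suggested by the ranges in the
   dataset description. The exact bounds here are illustrative. *)
let is_valid_rf (x : rf_input) =
  x.radius_mean          >=. 0.0 && x.radius_mean          <=. 35.0 &&
  x.compactness_mean     >=. 0.0 && x.compactness_mean     <=. 1.5  &&
  x.concavity_mean       >=. 0.0 && x.concavity_mean       <=. 1.5  &&
  x.radius_se            >=. 0.0 && x.radius_se            <=. 5.0  &&
  x.compactness_worst    >=. 0.0 && x.compactness_worst    <=. 2.0  &&
  x.concavity_worst      >=. 0.0 && x.concavity_worst      <=. 2.0  &&
  x.concave_points_worst >=. 0.0 && x.concave_points_worst <=. 1.0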

This allows us to generate the following synthetic data point, which our model classifies as benign. We can then access and compute with our instance x using the CX (counterexample) module, and display the results in decimal (as opposed to the standard fractional) notation using the custom pretty printer defined above:

# instance (fun x -> rf_model x = "benign" && is_valid_rf x);;
- : rf_input -> bool = <fun>
Instance (after 0 steps, 0.021s):
let (x : rf_input) =
{radius_mean = (Real.mk_of_string
"28061993138323/5000000000000");
compactness_mean = (Real.mk_of_string
"8487976201/5000000000000");
concavity_mean = (Real.mk_of_string
"162320931909/2500000000000");
radius_se = (Real.mk_of_string
"40971/20000");
compactness_worst = (Real.mk_of_string
"321/5000");
concavity_worst = (Real.mk_of_string
"415811319441/25000000000000");
concave_points_worst = (Real.mk_of_string
"3877904791/781250000000")}
[✔] Instance found.
module CX : sig val x : rf_input end
# #install_printer pp_approx;;
# CX.x;;
- : rf_input = {radius_mean = 5.61239862766;
compactness_mean = 0.0016975952402;
concavity_mean = 0.0649283727636;
radius_se = 2.04855;
compactness_worst = 0.0642;
concavity_worst = 0.0166324527776;
concave_points_worst = 0.00496371813248}

We can also reason about our model in more interesting ways, such as checking the validity of certain constraints we want our model to satisfy. For example, if the surface of a cell nucleus has many, large concave sections, this is a particularly negative sign indicating that the cancer is likely to be malignant. We can use Imandra to easily verify that our model always classifies a (valid) sample of highly concave cells as malignant:

# verify (fun x ->
is_valid_rf x &&
x.concavity_mean >=. 0.4 &&
x.concavity_worst >=. 1.0 &&
x.concave_points_worst >=. 0.25 ==>
rf_model x = "malignant");;
- : rf_input -> bool = <fun>
[✓] Theorem proved.

Excellent: Imandra has proved that this property holds for all possible input values. Although our model is correct here, by tweaking some of the numbers in the statement we want to verify it isn’t hard to come up with ostensibly reasonable properties that our model fails to satisfy (we’ll see an example of this in the next section). Not only will Imandra tell us when a verification command fails, it also provides a particular input that violates the property. This counterexample is automatically reflected into the session (as the CX module) and can thus be further inspected and reasoned about. This is one of the ways Imandra can help diagnose problems with models and give insights into their behaviour.

The final feature we’ll look at in this section is region decomposition, a technique that Imandra uses to break up a potentially infinite input space into discrete regions over which the behaviour of the decomposed function is constant (for an introduction to this feature, take a look at one of our earlier posts). Because the trees that make up the random forest are defined using nested if ... then ... else statements, they are a prime candidate for this functionality. As well as a logical description of the regions, Imandra also produces an interactive Voronoi diagram to help visualise an algorithm’s behaviour. Have a look at the notebook to see them in action, and simply click on a region to see its description!

Voronoi diagrams for our decomposition of the random forest ensemble (left) and the first tree (right)

We can also apply side conditions to the region decomposition of our model using the ~assuming: flag. One application here is in simulating partial observability: perhaps we know only some of the measurements for a particular patient’s cell sample and would like to see how the model’s output depends on the remaining features. Let’s imagine that we only have the concavity measurements and want to see how the classification depends on the values of the other features.

A condition describing our partial observation
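
A sketch of this side condition is given below; the three fixed concavity values are illustrative stand-ins for the patient’s actual measurements.

(* Sketch of the partial observation: the concavity measurements are
   fixed to the values observed for this patient (illustrative numbers
   here), while the remaining features are only required to be valid. *)
let partial_observation (x : rf_input) =
  is_valid_rf x &&
  x.concavity_mean       = 0.30 &&
  x.concavity_worst      = 0.71 &&
  x.concave_points_worst = 0.26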

By specifying the known values in a side condition we can get an interactive visual tool (embedded in our Imandra Jupyter Notebook, or available via an automatically generated HTML link) that provides us with a finite set of possible outcomes and a logical, human-readable description of each one:

#install_printer Imandra_voronoi.Voronoi.print;;
Decompose.top
~ctx_asm_simp:true
~assuming:"partial_observation"
"rf_model" [@@program];;
9 regions computed.
- : Imandra_interactive.Decompose.t list =
Open:
file:////var/folders/l9/d5spnx9177v53nkbpjl4f0yr0000gs/T/voronoi_ee639a.html

Regression

In a regression task we want to learn to predict the value(s) of some variable(s) based on previous data. In the commonly used Forest Fires dataset the aim is to predict the area burnt by forest fires, in the northeast region of Portugal, by using meteorological and other data. This is a fairly difficult task and while the neural network below doesn’t achieve state-of-the-art performance, it’s enough to demonstrate how we can analyse relatively simple models of this kind in Imandra. In the dataset we have the following variables:

1. X-axis spatial coordinate (within the Montesinho park map)
2. Y-axis spatial coordinate (within the Montesinho park map)
3. Month
4. Day
5. FFMC index (from the FWI system)
6. DMC index (from the FWI system)
7. DC index (from the FWI system)
8. ISI index (from the FWI system)
9. Temperature
10. Relative humidity (%)
11. Wind speed
12. Rainfall
13. Area of forest burnt

We again pre-process the data before learning: first we transform the month and day variables into numerical values and apply a sine transformation (so that similar times are close in value), then remove outliers and apply an approximate logarithmic transformation to the area variable (as recommended in the dataset description). Each variable is scaled to lie between 0 and 1, and those with high correlations and/or low mutual information with respect to the target variable are removed. We then split the data into training (80%) and test (20%) sets and use Keras to learn a simple feed-forward neural network with one hidden layer of 6 neurons, ReLU activation functions, and stochastic gradient descent minimising the mean squared error. After saving our model as a .h5 file we use a short Python script to extract the network into an IML file, which we can then reason about using Imandra.

An IML version of the neural network
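
The extracted IML is again only sketched here: the network is one hidden layer of six ReLU neurons followed by a single linear output neuron, with the weights and biases below as illustrative placeholders rather than the parameters learnt by Keras.

(* Sketch of the extracted network. Weights and biases are illustrative
   placeholders; the learnt parameters are written out by the Python
   extraction script. *)

let relu x = if x >. 0.0 then x else 0.0

(* Dot product of a weight vector with an input vector. *)
let rec dot ws xs =
  match ws, xs with
  | w :: ws', x :: xs' -> (w *. x) +. dot ws' xs'
  | _ -> 0.0

(* One neuron: a weighted sum of its inputs plus a bias. *)
let neuron ws b xs = dot ws xs +. b

(* Hidden layer: six ReLU neurons over the six scaled inputs. *)
let layer_1 xs =
  [ relu (neuron [0.12; 0.51; 0.08; 0.44; 0.02; 0.31] 0.05 xs);
    relu (neuron [0.27; 0.03; 0.19; 0.11; 0.42; 0.06] 0.01 xs);
    relu (neuron [0.05; 0.33; 0.25; 0.07; 0.18; 0.22] 0.04 xs);
    relu (neuron [0.41; 0.14; 0.02; 0.29; 0.09; 0.17] 0.03 xs);
    relu (neuron [0.08; 0.26; 0.36; 0.13; 0.21; 0.04] 0.02 xs);
    relu (neuron [0.16; 0.07; 0.28; 0.35; 0.06; 0.24] 0.06 xs) ]

(* Output layer: a single linear neuron over the hidden activations. *)
let nn xs = neuron [0.33; 0.18; 0.07; 0.21; 0.12; 0.09] 0.02 (layer_1 xs)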

As in our first example, we can also define some custom data types, functions that replicate our pre/post-processing of the data, and a condition that describes valid inputs to the model based on the dataset description (plus some reasonable assumptions about Portugal’s climate).

Custom input types and functions for pre/post-processing
A condition defining valid inputs to our neural network
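
A sketch of the input type and validity condition is given below. The month and day variants and the numeric bounds are illustrative, guided by the dataset description; the pre/post-processing functions (which scale each variable into [0, 1] before the network and undo the logarithmic area transformation afterwards) are analogous to those in the classification example and are not repeated here.

(* Sketch of the input type and validity condition; the bounds are
   illustrative, based on the dataset description and some assumptions
   about Portugal's climate. *)

type month = Jan | Feb | Mar | Apr | May | Jun
           | Jul | Aug | Sep | Oct | Nov | Dec

type day = Mon | Tue | Wed | Thu | Fri | Sat | Sun

type nn_input = {
  month : month;
  day   : day;
  dmc   : real;   (* DMC index from the FWI system *)
  temp  : real;   (* temperature, degrees Celsius *)
  rh    : real;   (* relative humidity, % *)
  rain  : real;   (* rainfall, mm *)
}

let is_valid_nn (x : nn_input) =
  x.dmc  >=. 0.0 && x.dmc  <=. 500.0 &&
  x.temp >=. 0.0 && x.temp <=. 40.0  &&
  x.rh   >=. 0.0 && x.rh   <=. 100.0 &&
  x.rain >=. 0.0 && x.rain <=. 15.0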

By generating instances using side conditions such as the one below (where we require the predicted area burnt to be more than 20 hectares, the temperature to be 20 degrees Celsius, and the month to be May) we can again both query and compute with our model, reasoning about the results directly using the CX module. Note that the pretty printer we installed earlier is again converting values to decimal notation for us:

# instance (fun x -> 
nn_model x >. 20.0 &&
x.temp = 20.0 &&
x.month = May &&
is_valid_nn x);;
- : nn_input -> bool = <fun>
Instance (after 0 steps, 0.103s):
let (x : nn_input) =
{month = May;
day = Mon;
dmc = 500.;
temp = 20.;
rh = 100.;
rain = (Real.mk_of_string
"24634017759143723418947963885307992/
1878541866480547748364634187934375")}
[✔] Instance found.
module CX : sig val x : nn_input end
# CX.x;;
- : nn_input = {month = May;
day = Mon;
dmc = 500.;
temp = 20.;
rh = 100.;
rain = 13.1133717053}
# nn_model CX.x;;
- : real = 20.0272890315

The kinds of analysis we can perform here are not dissimilar to what we looked at above with our classification task, so without wanting to repeat ourselves too much, we’ll conclude this section by illustrating Imandra’s ability to prove a couple of useful properties about our network. We can start by verifying a condition that we’d hope any reasonable predictive model for this task would obey, namely that the area outputted by the model is never negative:

# verify (fun x -> is_valid_nn x ==> nn_model x >=. 0.0);;
- : nn_input -> bool = <fun>
[✓] Theorem proved.

Great, so we know that whatever happens, as long as our neural network receives a valid input it will never malfunction by outputting a negative number. Although this is a simple example, it’s exactly these kinds of guarantees that are needed for the safe deployment of such models in the real world, where a network’s output could form part of, say, a control sequence for operating a driverless car.

Finally, we’ll try something slightly more ambitious and test a hypothesis: all other things remaining equal, we would expect that the higher the temperature, the larger the area of forest burnt. Due to imperfections in our model (limited data, stochasticity in training, the complicated patterns present in natural physical phenomena, and so on), this assertion is in fact easily falsified by Imandra, as can be seen here:

# verify (fun a b -> 
is_valid_nn a &&
is_valid_nn b &&
a.month = b.month &&
a.day = b.day &&
a.dmc = b.dmc &&
a.rh = b.rh &&
a.rain = b.rain &&
a.temp >=. b.temp ==>
nn_model a >=. nn_model b);;
- : nn_input -> nn_input -> bool = <fun>
Counterexample (after 0 steps, 0.116s):
let (a : nn_input) =
{month = Feb;
day = Wed;
dmc = (Real.mk_of_string
"2001302860578649047203412202395247295896300726349164954426900657936422485111743/26290894775182656579389311741592195964262695657047583625044304683319346328960");
temp = (Real.mk_of_string "91160069288673997541144963139828470662342220288373496574482010785769071655/16907327829699457607324316232535174253545141901638317443758395294739129472");
rh = (Real.mk_of_string "310987560530841206048248341173125577069685813208286530077844293829277108589275/5258178955036531315877862348318439192852539131409516725008860936663869265792");
rain = (Real.mk_of_string "75815863864250199885180688130868702048660206232272093463832717880470859243/24896680658316909639573211876507761329794219372204151160079833980416047660")}
let (b : nn_input) =
{month = Feb;
day = Wed;
dmc = (Real.mk_of_string "2001302860578649047203412202395247295896300726349164954426900657936422485111743/26290894775182656579389311741592195964262695657047583625044304683319346328960");
temp = 0.;
rh = (Real.mk_of_string "310987560530841206048248341173125577069685813208286530077844293829277108589275/5258178955036531315877862348318439192852539131409516725008860936663869265792");
rain = (Real.mk_of_string "75815863864250199885180688130868702048660206232272093463832717880470859243/24896680658316909639573211876507761329794219372204151160079833980416047660")}
[✗] Conjecture refuted.
module CX : sig val a : nn_input val b : nn_input end

That’s quite a lot of information, but we can easily dive in and inspect the key parts of the counterexamples that Imandra has generated for us. In particular, we know that the inputs will be the same in all but the temperature feature, so let’s take a look at those values first and then run each counterexample through the model to show that it really does invalidate our hypothesis:

# CX.a.temp;;
- : real = 5.39174908104
# CX.b.temp;;
- : real = 0.
# nn_model CX.a;;
- : real = 1.47805400176
# nn_model CX.b;;
- : real = 1.79452234896

All is not lost, however! Although the network doesn’t satisfy our original verification statement, we can restrict our setting in a sensible way in order to prove something slightly weaker:

  • There is very little data from winter months, and so the model is unlikely to generalise well here, hence we’ll only consider non-winter months
  • We’ll increase the tolerance in temperature to 10 degrees celsius
  • We’ll increase the tolerance in area burnt to 25 hectares
Our condition specifying winter months
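
A sketch of this condition, assuming a simple three-month definition of winter (the exact months excluded in our code may differ):

(* Sketch: treat December, January, and February as winter months. *)
let winter (m : month) = m = Dec || m = Jan || m = Feb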

With these allowances in place, Imandra gives us back a final proof:

# verify (fun a b -> 
is_valid_nn a &&
is_valid_nn b &&
a.month = b.month &&
not (winter a.month) &&
a.day = b.day &&
a.dmc = b.dmc &&
a.rh = b.rh &&
a.rain = b.rain &&
(a.temp -. 10.0) >=. b.temp ==>
(nn_model a +. 25.0) >=. nn_model b);;
- : nn_input -> nn_input -> bool = <fun>
[✓] Theorem proved.

Conclusion

Reasoning about models that have been learnt from data is just one of many ways that formal methods can be brought to bear on ML. While this problem is certainly hard, since most state-of-the-art approaches rely on highly specialised software, techniques, and assumptions that Imandra doesn’t (yet!) support, it’s one we’re interested in pursuing further, particularly when it comes to scalability. There are also other exciting avenues in this area that we’re currently exploring, such as our recent post showing how region decomposition can be used to improve sample efficiency in deep reinforcement learning.

If you’re interested in our work, be sure to check out our other notebooks, find out more and get email updates on our website, join the discussion on our Discord channel, and of course subscribe to our Medium publication!

