Using Tinn (Tiny Neural Network in C) with Python

A guide to calling Tinn, a tiny neural network in 200 lines of C, from Python using ctypes

Chris Knorowski
Towards Data Science



At SensiML we are focused on building machine learning tools that make it easy for developers to create and deploy trained models to embedded IoT devices. In this post, I’m going to show you how to turn Tinn (a tiny neural network written in standard C) into a shared library and then call it from Python as if it were a native Python function. We use this method at SensiML to experiment with and build C libraries that target embedded devices while still natively using our Python data science toolkit.

Step 1. Go download Tinn

If you are going to work through this tutorial, you’ll need Tinn. You can download Tinn from GitHub.

If you have git installed,

git clone https://github.com/glouw/tinn.git

Alternatively, go to the repository page, click the “Clone or download” button, and select “Download ZIP”.

Step 2. Compile Tinn into a shared library

In order to call the Tinn functions, we have to compile Tinn into a shared library. I’ve created a makefile for building the shared library; you can find it here. Replace the makefile in the Tinn folder with the one provided, cd into the Tinn folder in a terminal, and run make.

cd tinn
make
>> cc -std=c99 -fPIC -fno-builtin -Werror -I../include -I. -c -o Tinn.o Tinn.c
>> making lib
>> ar rcs /Users/chrisknorowski/Software/tinn/libtinn.so Tinn.o
>> cc -shared -Wl,-o libtinn.so *.o

(Note: This was written for Linux/macOS. If you are on Windows, you can use the Ubuntu bash shell to compile.)

If everything works, a shared library file named libtinn.so should have been created in the directory. We will link to this shared library in the next step in order to call these functions directly.

Step 3. Create the Python Interface using ctypes

Let’s first take a look at the Tinn.h file to see which functions we will need to call from Python.

The first Python interface we will create is for the Tinn struct. We can do this using the ctypes.Structure class. This structure has two properties that need to be filled in: __slots__ and _fields_. __slots__ lets us assign properties to the struct_tinn class, and _fields_ describes which ctype to use for each variable.

Below you can see the Python Tinn struct class, which I call struct_tinn. Using ctypes, we can specify all of the variable types in struct_tinn: in this case, integers and pointers to arrays of floats.
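A sketch of struct_tinn, assuming the field layout of the Tinn struct in Tinn.h (five float pointers followed by five int counters). The Python-side field names here (weights, biases, and so on) are illustrative choices to match the attribute access shown later; ctypes matches struct fields by position, not by name.

```python
from ctypes import Structure, POINTER, c_float, c_int

class struct_tinn(Structure):
    """Python mirror of the Tinn struct from Tinn.h."""
    __slots__ = ["weights", "weights_hidden", "biases",
                 "hidden", "output", "nb", "nw", "nips", "nhid", "nops"]
    _fields_ = [
        ("weights", POINTER(c_float)),         # w: all the weights
        ("weights_hidden", POINTER(c_float)),  # x: hidden-to-output weights
        ("biases", POINTER(c_float)),          # b: biases
        ("hidden", POINTER(c_float)),          # h: hidden layer
        ("output", POINTER(c_float)),          # o: output layer
        ("nb", c_int),                         # number of biases
        ("nw", c_int),                         # number of weights
        ("nips", c_int),                       # number of inputs
        ("nhid", c_int),                       # number of hidden neurons
        ("nops", c_int),                       # number of outputs
    ]
```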

Now that we have created a Python representation of the Tinn struct, we need to import and create Python representations of the C functions we want to call. For this tutorial, they are xtbuild, xttrain, and xtpredict.

With these three functions, we will be able to initialize, train, and make predictions using the Tinn library. Next, we use ctypes.CDLL to import the shared library we created in step 2 of this tutorial.

For each of the functions we want to call, we have to specify the input and output types as Python ctypes. We do this by setting the argtypes and restype properties of our cf_lib functions.
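A sketch of loading the library and declaring the signatures, assuming Tinn’s public API as declared in Tinn.h (xtbuild, xttrain, xtpredict, all taking the Tinn struct by value). struct_tinn is repeated here so the snippet stands alone, and the load is wrapped in a small helper so the path can vary.

```python
from ctypes import CDLL, Structure, POINTER, c_float, c_int

class struct_tinn(Structure):
    # the struct_tinn class from earlier, repeated so this snippet stands alone
    _fields_ = [("weights", POINTER(c_float)), ("weights_hidden", POINTER(c_float)),
                ("biases", POINTER(c_float)), ("hidden", POINTER(c_float)),
                ("output", POINTER(c_float)),
                ("nb", c_int), ("nw", c_int),
                ("nips", c_int), ("nhid", c_int), ("nops", c_int)]

def load_tinn(path="./libtinn.so"):
    """Load libtinn.so and declare argtypes/restype for the calls we make."""
    lib = CDLL(path)
    # Tinn xtbuild(int nips, int nhid, int nops);
    lib.xtbuild.argtypes = [c_int, c_int, c_int]
    lib.xtbuild.restype = struct_tinn
    # float xttrain(Tinn, const float* in, const float* tg, float rate);
    lib.xttrain.argtypes = [struct_tinn, POINTER(c_float), POINTER(c_float), c_float]
    lib.xttrain.restype = c_float
    # float* xtpredict(Tinn, const float* in);
    lib.xtpredict.argtypes = [struct_tinn, POINTER(c_float)]
    lib.xtpredict.restype = POINTER(c_float)
    return lib
```

With the library built, cf_lib = load_tinn() gives us a handle whose functions carry the declared argument and return types.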

Step 4. Train a Tinn NN to recognize digits using Python

At this point, we have created all of the Python wrapper objects we need to call Tinn from Python. In order to use the NN, we still need to write initialize, train, and predict functions in Python.

Let us start with a function to initialize a Tinn object with a set of parameters. We’ll call this init_tinn and pass the number of inputs, the number of outputs, and the number of hidden neurons that we want in our NN (Tinn supports a single hidden layer).

Here you can see that we cast all of the inputs to ctypes before passing them to the xtbuild function. The Tinn object can now be initialized by calling our Python function.
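A sketch of init_tinn, assuming the module-level cf_lib handle from the previous step (the lib keyword is only there so the helper can be exercised in isolation). Note that xtbuild takes its arguments in (inputs, hidden, outputs) order, so we reorder them inside the wrapper.

```python
from ctypes import c_int

def init_tinn(nips, nops, nhid, lib=None):
    """Build a Tinn network with nips inputs, nops outputs, nhid hidden neurons."""
    if lib is None:
        lib = cf_lib  # the CDLL handle loaded in the previous step
    # cast the Python ints to ctypes ints before crossing into C;
    # xtbuild's argument order is (nips, nhid, nops)
    return lib.xtbuild(c_int(nips), c_int(nhid), c_int(nops))
```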

Tinn = init_tinn(64, 10, 128)

Because it is a Python object, we can dynamically index into all of the Tinn attributes.

Tinn.weights[0]
>> 0.15275835990905762
Tinn.biases[0]
>> -0.30305397510528564

Next, let us build our train and predict functions. Our train function will take the Tinn object, an array of digit features X and targets y, as well as alpha, the training step size.

The predict function will take the trained Tinn object and an input vector to recognize, returning the predicted class with the highest confidence from the NN.

Finally, let us use our Python Tinn functions in a standard data science workflow to identify handwritten digits. We’ll import the digits data from sklearn, then create a standard train/test split of 80/20. For fun, we also loop over the learning rate to see which works best for this dataset. Finally, we make some predictions to test the model accuracy.
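A sketch of that workflow, assuming a compiled libtinn.so, the init_tinn wrapper from above, and the hypothetical train_tinn/predict_tinn wrappers around xttrain and xtpredict; the epoch count and the set of learning rates are illustrative.

```python
def one_hot(label, n_classes=10):
    """Encode a digit label as the one-hot float target vector Tinn expects."""
    target = [0.0] * n_classes
    target[label] = 1.0
    return target

def run_digits_experiment(alphas=(0.5, 0.1, 0.01), epochs=100):
    """Train Tinn on sklearn's 8x8 digits at several learning rates."""
    from sklearn.datasets import load_digits
    from sklearn.metrics import classification_report
    from sklearn.model_selection import train_test_split

    digits = load_digits()
    X = digits.data / 16.0  # scale the 0-16 pixel values to [0, 1]
    X_train, X_test, y_train, y_test = train_test_split(
        X, digits.target, test_size=0.2, random_state=0)
    targets = [one_hot(label) for label in y_train]

    for alpha in alphas:
        tinn = init_tinn(64, 10, 128)  # 64 inputs, 10 outputs, 128 hidden neurons
        for _ in range(epochs):
            train_tinn(tinn, X_train, targets, alpha)
        preds = [predict_tinn(tinn, row) for row in X_test]
        print("alpha:", alpha)
        print(classification_report(y_test, preds))
```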

alpha: 0.1
             precision    recall  f1-score   support

          0       1.00      0.94      0.97        17
          1       0.80      0.73      0.76        11
          2       0.88      0.88      0.88        17
          3       0.71      0.71      0.71        17
          4       0.84      0.84      0.84        25
          5       1.00      0.95      0.98        22
          6       0.95      1.00      0.97        19
          7       0.94      0.84      0.89        19
          8       0.55      0.75      0.63         8
          9       0.81      0.84      0.82        25

avg / total       0.87      0.86      0.86       180

Wrap up

That wraps up this blog post. We’ve gone over how to create a shared C library and how to use ctypes to instantiate C structures and call C functions from within Python. Then we initialized and trained the Tinn NN model using our calls from the Python shell. I hope you’ve enjoyed this tutorial; if you have any questions, please feel free to drop me a line or comment below. If you are interested in building smart sensor algorithms for IoT devices where you want to run inference locally at the sensor, get in touch with us or check out our site to learn more.



CTO/Cofounder of SensiML. Works at the intersection of physics, software engineering and machine learning.