Kenichi Sasagawa
2 min read · Jun 1, 2019

Deep Learning with Elixir

An overview of an Elixir-based deep learning framework under development. The project is called “Deep Pipe” (DP).

[Basic idea]
It is being implemented along two policies:

  1. Describe the network in an Elixir style.
  2. Take advantage of parallelism, which is a strength of Elixir.

[Network notation]
We describe the network using macros, as follows. This example corresponds to MNIST.
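A minimal sketch of such a network, built from the macros described below (the defnetwork name is an assumption; the layer sizes follow the f(5,5) and w(576,300) examples in this post):

    defnetwork init_network(_x) do
      _x
      |> f(5, 5)        # 5*5 convolution filter on the 28*28 MNIST image -> 24*24
      |> flatten        # 24*24 matrix -> row vector of 576 elements
      |> w(576, 300)
      |> b(300)
      |> relu
      |> w(300, 100)
      |> b(100)
      |> relu
      |> w(100, 10)
      |> b(10)
      |> softmax
    end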

f(m,n) is a CNN convolution filter. f(5,5) generates a 5 * 5 filter matrix of Gaussian-distributed random numbers. The learning rate, the multiplier applied to the initial random matrix, and the stride can be omitted, in which case they default to 0.1, 0.1, and 1, respectively.
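For example, assuming the optional arguments follow the order just described (learning rate, initial-value multiplier, stride), these two are equivalent:

    f(5, 5)               # defaults: learning rate 0.1, multiplier 0.1, stride 1
    f(5, 5, 0.1, 0.1, 1)  # the same, written out explicitly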

flatten converts matrix data into a row vector. It is the bridge from the CNN to an ordinary fully connected network.

Although not shown in the example, there are also pad(n) and pool(n), for padding and pooling.
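A hedged sketch of where they might sit in a pipeline (the placement and layer sizes are illustrative, not taken from this post):

    defnetwork init_network2(_x) do
      _x
      |> pad(1)         # pad the 28*28 image by 1 on each side -> 30*30
      |> f(5, 5)        # convolution -> 26*26
      |> pool(2)        # pooling -> 13*13
      |> flatten        # -> row vector of 169 elements
      |> w(169, 10)
      |> b(10)
      |> softmax
    end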

w(m,n) is a weight matrix. w(576,300) generates a 576 * 300 matrix of Gaussian-distributed random numbers. The learning rate and the multiplier for the initial random matrix can optionally be set.

b(n) is a bias. It generates a row vector of n zeros. The learning rate is optional.

relu, sigmoid, ident, and softmax are activation functions.

[Parallel]
The matrix product is computed in parallel with spawn. Parallel backpropagation has also been added. CPU utilization reaches about 80% on an Intel Core i5. We plan to incorporate Hastega as well.
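As a minimal sketch of the idea (not Deep Pipe's actual implementation), each row of the product can be computed in its own process; Task.async/1 is used here as a convenience wrapper around spawn:

    defmodule ParallelMatrix do
      # Multiply matrices a and b (lists of rows), computing each row
      # of the product in its own process.
      def mult(a, b) do
        bt = transpose(b)

        a
        |> Enum.map(fn row -> Task.async(fn -> mult_row(row, bt) end) end)
        |> Enum.map(&Task.await/1)
      end

      # One row of the product: dot products with every column of b.
      defp mult_row(row, bt), do: Enum.map(bt, fn col -> dot(row, col) end)

      defp dot(xs, ys) do
        Enum.zip(xs, ys) |> Enum.reduce(0, fn {x, y}, acc -> acc + x * y end)
      end

      defp transpose(m), do: Enum.zip(m) |> Enum.map(&Tuple.to_list/1)
    end

For example, ParallelMatrix.mult([[1, 2], [3, 4]], [[5, 6], [7, 8]]) returns [[19, 22], [43, 50]].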

[Gradient update]
In addition to plain SGD, it implements Momentum, AdaGrad, and Adam. With AdaGrad we can now reach up to a 93% accuracy rate on MNIST.
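For reference, AdaGrad accumulates the squared gradients h and shrinks each step by 1/sqrt(h): h ← h + g*g, w ← w − lr * g / (sqrt(h) + ε). A minimal sketch over flat parameter lists (the module and function names are illustrative, not Deep Pipe's API):

    # Illustrative AdaGrad update over flat lists of parameters;
    # not Deep Pipe's actual API.
    defmodule GradSketch do
      @eps 1.0e-7

      # w: weights, g: gradients, h: accumulated squared gradients,
      # lr: learning rate. Returns {updated_weights, updated_h}.
      def adagrad(w, g, h, lr) do
        h2 = Enum.zip(h, g) |> Enum.map(fn {hi, gi} -> hi + gi * gi end)

        w2 =
          Enum.zip([w, g, h2])
          |> Enum.map(fn {wi, gi, hi} -> wi - lr * gi / (:math.sqrt(hi) + @eps) end)

        {w2, h2}
      end
    end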

[Sample code]
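The sample code lives in the repository linked below. Purely as an illustration of how the pieces fit together, a training run might look like this; every name here (MNIST.load/0, DP.train/4, DP.accuracy/3) is hypothetical, not the library's actual API:

    # Hypothetical driver code; all function names are illustrative only.
    {images, labels} = MNIST.load()                        # hypothetical data loader

    network = init_network(0)                              # network from the notation above
    trained = DP.train(network, images, labels, :adagrad)  # hypothetical trainer

    DP.accuracy(trained, images, labels)                   # hypothetical evaluation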

[Specification]
I have written down the current specification in the readme.txt published on GitHub:

https://github.com/sasagawa888/DeepLearning