
Playing with TensorFlow

14 min read · Nov 26, 2020


Introduction

Concepts

An example of a multiple layer fully connected neural network. Source: Cyberbotics, via wiki.
An example of a more elaborate neural network. Source: MingxianLin, via wiki.

Objective and Approximation

Activation Functions

Structure of a single node. Source: Image by Author.
Some activation functions. Source: Image by Author.
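To make the node structure above concrete, here is a minimal NumPy sketch of a single node: a weighted sum of the inputs plus a bias, passed through an activation function (the names and values are illustrative, not from the article):

import numpy as np

def node_output(inputs, weights, bias, activation):
    # Weighted sum of the inputs plus the bias, then the activation.
    return activation(np.dot(weights, inputs) + bias)

relu = lambda z: np.maximum(0.0, z)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

x = np.array([0.5, -1.2, 3.0])
w = np.array([0.1, 0.4, -0.2])
print(node_output(x, w, 0.3, sigmoid))  # a single scalar output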

Loss Function

Example of two parameters and the loss for each value. Source: Image by Author.
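As a concrete instance of the quadratic loss pictured above, a short sketch with made-up predictions and targets:

import numpy as np

def quadratic_loss(y_true, y_pred):
    # Mean of the squared differences between predictions and targets.
    return np.mean((y_pred - y_true) ** 2)

print(quadratic_loss(np.array([1.0, 0.0, 0.0]), np.array([0.8, 0.1, 0.3])))  # ~0.0467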

Back Propagation

Source: Image by Author.
The first line is the update equation. The next line is a breakdown of the partial derivative. Source: Image by Author.
Differentiation by substitution (the chain rule) is used to determine how the activation changes relative to its inputs. Source: Image by Author.
Source: Image by Author.
Source: Image by Author.
The first equation is how the gradient of the quadratic loss changes with the data and activation for the output layer. The second equation specifies how the loss changes for each layer, using the result from the layer above. Source: Image by Author.
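To see the update equation in action, here is a minimal sketch of gradient descent for a single sigmoid node with the quadratic loss ½(a − y)²; the inputs, target, and learning rate are made-up values:

import numpy as np

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

x, y_true = np.array([0.5, -1.2]), 1.0
w, b, eta = np.zeros(2), 0.0, 0.5
for step in range(200):
    a = sigmoid(w @ x + b)
    # Chain rule: dL/dw = (a - y) * sigma'(z) * x, with sigma'(z) = a * (1 - a)
    delta = (a - y_true) * a * (1.0 - a)
    w, b = w - eta * delta * x, b - eta * delta
print(a)  # approaches y_true as the loss is driven down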

Fully Connected Network

docker run -p 6006:6006 -v `pwd`:/mnt/ml-mnist-examples -it tensorflow/tensorflow bash
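Here -p 6006:6006 publishes TensorBoard's port to the host, and -v mounts the current working directory into the container at /mnt/ml-mnist-examples so code and logs are shared between host and container.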
import tensorflow as tf
mnist = tf.keras.datasets.mnist
(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0  # scale pixel values to [0, 1]
x_train[entry][x_pos][y_pos]
entry runs from 0 to 59,999 (60,000 training images)
x_pos and y_pos run from 0 to 27 (each image is 28×28 pixels)
y_train[entry]
entry again runs from 0 to 59,999, giving one label per image
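A quick way to confirm these shapes using standard NumPy attributes:

print(x_train.shape)  # (60000, 28, 28): 60,000 images of 28x28 pixels
print(y_train.shape)  # (60000,): one digit label per image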
model = tf.keras.models.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(512, activation='relu'),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(10, activation='softmax')
])
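The Dropout layer randomly zeroes 20% of the activations on each training step, which helps keep the network from overfitting the training set.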
Model summary for fully connected network. Source: Image by Author.
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])
import datetime
log_dir = "logs/fit/" + datetime.datetime.now().strftime("%Y%m%d-%H%M%S")
tensorboard_callback = tf.keras.callbacks.TensorBoard(log_dir=log_dir, histogram_freq=1)
model.fit(x=x_train, y=y_train, epochs=5, validation_data=(x_test, y_test), callbacks=[tensorboard_callback])
tensorboard --logdir logs --host 0.0.0.0
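With port 6006 published by the docker run command above, the TensorBoard dashboard is then available in a browser at http://localhost:6006.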

Results from TensorBoard

Loss and accuracy for training (Orange) and validation (Blue). Source: Image by Author.
The weights and bias for the first dense layer for each epoch. Source: Image by Author.
The weights and bias for the second dense layer for each epoch. Source: Image by Author.

Recurrent Neural Network

output_size = 10  # assumed: one output unit per digit class
model = keras.models.Sequential([
    rnn_layer,  # assumed to be e.g. keras.layers.SimpleRNN(64, input_shape=(None, 28)); the definition is not shown in the excerpt
    keras.layers.BatchNormalization(),
    keras.layers.Dense(output_size),
])
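In this setup each 28×28 image is typically fed to the recurrent layer as a sequence of 28 rows of 28 pixels, so the network reads an image one row at a time.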
Model summary of RNN. Source: Image by Author.

Results from TensorBoard

Loss and accuracy for training (Orange) and validation (Blue). Source: Image by Author.
The weights and biases for the RNN layer. Source: Image by Author.
A look at the batch normalisation layer parameters. Source: Image by Author.

Long Short-Term Memory
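The model mirrors the recurrent network above, with the recurrent layer swapped for an LSTM. A minimal sketch, assuming the same layer sizes as before:

model = keras.models.Sequential([
    keras.layers.LSTM(64, input_shape=(None, 28)),  # assumed size, matching the RNN sketch
    keras.layers.BatchNormalization(),
    keras.layers.Dense(10),
])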

Model summary of LSTM. Source: Image by Author.

Results from TensorBoard

Loss and accuracy for training (Orange) and validation (Blue). Source: Image by Author.
Batch normalisation layer. Source: Image by Author.
All the weights and biases for LSTM layer. Source: Image by Author.
Weights and biases for output layer. Source: Image by Author.

Convolutional Network

x_train[image_num][x][y]
x_train[image_num][x][y][channel]
For greyscale MNIST the single channel (index 0) holds the pixel intensity; for a colour image there is one index per colour component, in the usual RGB ordering:
x_train[image_num][x][y][0] = red
x_train[image_num][x][y][1] = green
x_train[image_num][x][y][2] = blue
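For greyscale MNIST, adding the channel axis is a one-line reshape of the arrays loaded earlier:

x_train = x_train.reshape(-1, 28, 28, 1)  # shape becomes (60000, 28, 28, 1)
x_test = x_test.reshape(-1, 28, 28, 1)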
Model summary of convolutional network. Source: Image by Author.
from tensorflow.keras import layers, models

model = models.Sequential()
model.add(layers.Conv2D(32, (3, 3), use_bias=True, padding="SAME", activation='relu', input_shape=(28, 28, 1)))
Setting padding="SAME" pads the input with zeros so that the output feature map keeps the same spatial size as its input.
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(64, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(64, (3, 3), activation='relu'))
model.add(layers.Flatten())
model.add(layers.Dense(64, activation='relu'))
model.add(layers.Dense(10, activation='softmax'))

Results from TensorBoard

Loss and accuracy for training (Orange) and validation (Blue). Source: Image by Author.
Weights and bias from first convolutional layer. Source: Image by Author.
Weights and bias from second convolutional layer. Source: Image by Author.
Weights and bias from third convolutional layer. Source: Image by Author.
Weights and bias from first dense layer. Source: Image by Author.
Weights and bias from second dense layer. Source: Image by Author.

Conclusion



Written by Alexander Morton

A DevOps engineer specialised in cloud infrastructure with a background in theoretical and experimental physics.
