AI and TensorFlow

Artificial intelligence has many implementations and applications, which makes it hard to know which libraries or tools are right for your project. Software development continues to adopt AI solutions for business applications, whether to handle a company's data or to add new AI features, so developers may need to understand basic AI concepts to keep working for companies that demand these systems. AI is not a passing trend; it is a growing market in software development, and its applications increase as technology progresses. Developers may need to create AI systems that process big data for a company, sorting massive amounts of data into relevant information. Historically, AI libraries have not given developers the ability to create AI systems with much flexibility or control. TensorFlow is a great example of an AI library that lets developers build and use AI components in their information systems. It provides pre-built models as well as tools to create custom ones. In TensorFlow, a model, or graph, is the process by which outputs are calculated from inputs. Predefined models in TensorFlow cover several AI tasks, including image, speech, and text recognition. Developers can retrain these predefined models to fit a specific purpose or develop their own. TensorFlow is a strong tool for developers to use in providing their customers or clients with adaptive, predictive solutions.

Getting started with TensorFlow is an easy way to get into developing AI systems because it lets developers use less complex tools to achieve their goals. There are few requirements for developing on TensorFlow; the main one is installing a programming front end. The most common front end for TensorFlow is Python, which has the most support in TensorFlow itself and in many GitHub repositories. TensorFlow's architecture allows more direct interaction with the hardware: the library rests on the hardware components of the system, including but not limited to the CPU and GPU. The architecture also allows TensorFlow to run on smaller platforms such as the Raspberry Pi and mobile devices. The figure below illustrates the architecture of TensorFlow, with the Python or C++ language acting as a programming front end.

Installing Python and TensorFlow

For this demonstration, we will be using Python 3.5.2 as a front end and developing on a Windows system. Other versions of Python may not be fully supported by TensorFlow or may not have TensorFlow packages available, so it is important to download the correct version. Navigate to the Python downloads page and download the Python 3.5.2 Windows x86-64 executable installer (.exe). The .exe installer is used for this demo because it makes it easy to set up the correct environment variables for pip and installs the Python files.

The downloaded file: python-3.5.2-amd64.exe

Execute this file.

During the installation process, be sure to add pip to the Windows PATH environment variable. This is done by selecting the checkbox "Add Python 3.5 to PATH".

Verify that selecting the checkbox will install pip by choosing "Customize installation" and confirming that pip is checked, as in the image below.

Proceed to the next section of the installation, ensure the option "Add Python to environment variables" is selected, then install Python 3.5.2 to a location on your PC.

Once Python has been installed on your system, open the command prompt and run the following command to install TensorFlow into the Python environment.

pip3 install --upgrade tensorflow

(If this command does not execute successfully, run the following command to upgrade pip.)

pip3 install --upgrade pip

(If it still does not work after upgrading pip, attempt to reinstall or modify the Python 3.5.2 installation, as Python may have failed to include pip.)

Test whether the installation was successful by starting the Python interpreter from the command prompt and running the following code.


import tensorflow as tf

hello = tf.constant('Hello, TensorFlow!')
sess = tf.Session()


The output from this code should be the following (on Python 3, the result is displayed as a byte string):

b'Hello, TensorFlow!'

Setting up the MNIST Image Recognition Demo

The MNIST database contains images of handwritten digits ranging from 0 to 9, along with files of labels that correspond to those images. This database is one of the most commonly used tests for image recognition software, as it has a large training data set of 60,000 images and a testing data set of 10,000 images. The images and labels are stored in files written in a serialized, binary format, which means you cannot directly view each image as you would normal images in a file explorer. To view an image, you must extract a specific group of bytes from the aggregate file and plot those values onto the screen. Each image of a handwritten digit is 28 by 28 pixels. The code to view individual images in a file is available at the end of the demo.
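Because the binary layout is fixed, the byte offset of any image in the file can be computed with simple arithmetic. The sketch below illustrates that layout; the constant and function names are ours, not part of MNIST or TensorFlow.

```python
# Each MNIST image is 28x28 pixels, stored as one unsigned byte per pixel.
# The images file begins with a 16-byte header (magic number, image count,
# row count, column count: four big-endian 4-byte integers), after which
# the pixel data for image 0, image 1, ... follows back to back.
HEADER_BYTES = 16
ROWS, COLS = 28, 28
IMAGE_BYTES = ROWS * COLS  # 784 bytes per image

def image_offset(idx_image):
    """Byte offset where image number `idx_image` starts in the file."""
    return HEADER_BYTES + IMAGE_BYTES * idx_image

print(image_offset(0))  # 16: the first image starts right after the header
print(image_offset(5))  # 3936
```

This is the same arithmetic the image-viewing code at the end of the demo performs when it seeks to a particular image.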

When extracted and plotted from the aggregate training or testing file, the images look similar to the image below.

In this demo, you will learn techniques and methods for building and training a simple image recognition AI. The demo is intended to explain the concepts of building, training, and testing AI rather than teach you the complex mathematics or algorithms behind it.

Download the MNIST database (all four files) from this link and place them in a directory (C:/tmp/Tensorflow/mnist/input_data/). If your directory is different from the one provided, change the directory in the code. The first lines in any TensorFlow application should be the import statements; in this case, the first import is TensorFlow itself.

import tensorflow as tf

The next line imports a package responsible for reading data from an MNIST database. This package abstracts away reading the images within the aggregate training and testing data files.

from tensorflow.examples.tutorials.mnist import input_data

The next line of code reads in the MNIST database (all four files) and stores it in a variable named "mnist". Replace the directory if you have the data in a different location.

mnist = input_data.read_data_sets('/tmp/Tensorflow/mnist/input_data/', one_hot=True)
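The `one_hot=True` argument asks the reader to encode each label as a one-hot vector: a list of ten values that is 1.0 at the index of the true digit and 0.0 everywhere else. A minimal pure-Python sketch of that encoding (the helper name is ours):

```python
def one_hot(digit, num_classes=10):
    """Encode a digit 0-9 as a one-hot vector of length 10."""
    vec = [0.0] * num_classes
    vec[digit] = 1.0   # mark only the true class
    return vec

print(one_hot(3))  # [0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]
```

This shape matches the 10-wide label placeholder defined later in the demo.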

Next, add the following lines to set the variables the software will use to train the model. These numbers are part of the calculations that determine how much training occurs and how fast the learning takes place.

learning_rate = 0.01

training_iteration = 30

batch_size = 100

display_step = 1

The next part of the code defines the inputs to the model. As stated above, the MNIST database contains images with corresponding labels, which amounts to two inputs. When training AI, you supply one input, in this case an image, and show the system with another input, in this case a label, what the desired outcome is. This system therefore needs two placeholders: one for the image and one for the label. Placeholders are like variables in TensorFlow that hold data supplied at a later time.

x = tf.placeholder('float', [None, 784])  # pixels in an image: 28 * 28 = 784

y = tf.placeholder('float', [None, 10])   # the ten digit classes, 0-9

Weights in TensorFlow are trainable parameters that the model adjusts during training to get closer to the correct output. Biases are additional trainable parameters that shift the model's output for the given data set, so the model can produce better results.

W = tf.Variable(tf.zeros([784, 10])) # weights

b = tf.Variable(tf.zeros([10])) # biases

The following code defines a name scope, which organizes the operations in a TensorFlow graph. The model used in this code is softmax regression: it applies a linear transformation (Wx + b) to the input and then the softmax function, which turns the resulting scores into a probability for each digit class.

with tf.name_scope('Wx_b') as scope:
    model = tf.nn.softmax(tf.matmul(x, W) + b)  # softmax regression
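To see what the softmax step does on its own, here is a pure-Python sketch of the function (independent of TensorFlow): it exponentiates each score and normalizes, so the outputs are positive and sum to 1, with the largest score receiving the highest probability.

```python
import math

def softmax(logits):
    """Turn arbitrary scores into probabilities that sum to 1."""
    exps = [math.exp(v) for v in logits]   # make every score positive
    total = sum(exps)
    return [e / total for e in exps]       # normalize to sum to 1

probs = softmax([2.0, 1.0, 0.1])
print(probs.index(max(probs)))  # 0: the largest score gets the highest probability
```

In the model above, the ten scores come from Wx + b, and the resulting ten probabilities are the model's guess for each digit.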

The following code lets you later view the distribution of the weights and biases in TensorBoard, the visualization tool for TensorFlow graphs/models.

w_h = tf.summary.histogram('weights', W)

b_h = tf.summary.histogram('biases', b)

Minimizing possible errors is important in AI systems, so creating a function within the model to calculate the error percentage, or cost, is beneficial to training the model. The following code creates the cost function in the TensorFlow graph.

with tf.name_scope('cost_function') as scope:
    cost_function = -tf.reduce_sum(y * tf.log(model))
    tf.summary.scalar('cost_function', cost_function)
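The formula -sum(y * log(model)) is the cross-entropy cost: because the label y is one-hot, only the log-probability of the true class survives the sum, so the cost is low when the model is confident and correct. A small pure-Python illustration (the data values here are made up for the example):

```python
import math

def cross_entropy(label, probs):
    """-sum(y * log(p)): low when the true class gets high probability."""
    return -sum(y * math.log(p) for y, p in zip(label, probs))

label = [0.0, 1.0, 0.0]            # true class is index 1
confident = [0.05, 0.90, 0.05]     # model is fairly sure of class 1
uncertain = [0.30, 0.40, 0.30]     # model is unsure

print(cross_entropy(label, confident) < cross_entropy(label, uncertain))  # True
```

Training tries to drive this number down across all examples.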

Improving and learning are key capabilities of an AI system, and that is what the following code gives the model: a gradient descent optimizer that repeatedly adjusts the weights and biases to minimize the cost function.

with tf.name_scope('train') as scope:
    optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(cost_function)
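Gradient descent itself is simple: compute the slope of the cost with respect to a parameter, then step the parameter in the opposite direction, scaled by the learning rate. A toy sketch on a one-parameter cost, (w - 3)^2, whose minimum is at w = 3 (the function and values are ours, for illustration only):

```python
def gradient_descent(start, learning_rate, steps):
    """Minimize (w - 3)^2 by repeatedly stepping against the gradient."""
    w = start
    for _ in range(steps):
        grad = 2.0 * (w - 3.0)      # derivative of (w - 3)^2
        w -= learning_rate * grad   # step downhill, scaled by the learning rate
    return w

w = gradient_descent(start=0.0, learning_rate=0.1, steps=100)
print(round(w, 6))  # converges to 3.0
```

TensorFlow's optimizer does the same thing, except the "parameter" is every entry of W and b, and the gradients are computed automatically from the graph.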

Methods for initializing variables and merging summaries are executed below.

init = tf.global_variables_initializer()

merged_summary_op = tf.summary.merge_all()

The following code initializes a TensorFlow session for using the model created above. A summary writer is created so we can later view the model's training process in TensorBoard. The for-loops run the training data through the model and calculate the cost, or error percentage. Once training is complete, the testing data is run, and the results are shown as an overall prediction accuracy percentage as well as the predictions for the images in the 10,000-image test data set.

with tf.Session() as sess:
    summary_writer = tf.summary.FileWriter('/tmp/Tensorflow/mnist/log/', graph=sess.graph)

    for iteration in range(training_iteration):
        avg_cost = 0.
        total_batch = int(mnist.train.num_examples / batch_size)
        for i in range(total_batch):
            batch_xs, batch_ys = mnist.train.next_batch(batch_size)
  , feed_dict={x: batch_xs, y: batch_ys})
            avg_cost +=, feed_dict={x: batch_xs, y: batch_ys}) / total_batch
            summary_str =, feed_dict={x: batch_xs, y: batch_ys})
            summary_writer.add_summary(summary_str, iteration * total_batch + i)
        if iteration % display_step == 0:
            print('Iteration:', '%04d' % (iteration + 1), 'Cost or Error=', '{:.9f}'.format(avg_cost))

    print('Training completed.')

    predictions = tf.equal(tf.argmax(model, 1), tf.argmax(y, 1))
    accuracy = tf.reduce_mean(tf.cast(predictions, 'float'))
    print('Test Accuracy:', accuracy.eval({x: mnist.test.images, y: mnist.test.labels}))

    tempPredictions = tf.argmax(model, 1)
    print('Predictions on Test images:', tempPredictions.eval(feed_dict={x: mnist.test.images}))
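The accuracy calculation in the session compares the index of the model's highest probability against the index of the 1 in the one-hot label, then averages the matches. Stripped of TensorFlow, that logic looks like this (helper names and sample values are ours):

```python
def argmax(xs):
    """Index of the largest value: the predicted or true class."""
    return max(range(len(xs)), key=lambda i: xs[i])

def accuracy(predicted_rows, label_rows):
    """Fraction of rows where predicted class matches the labeled class."""
    correct = sum(argmax(p) == argmax(l) for p, l in zip(predicted_rows, label_rows))
    return correct / len(predicted_rows)

preds = [[0.8, 0.1, 0.1], [0.2, 0.7, 0.1], [0.4, 0.3, 0.3]]
labels = [[1, 0, 0], [0, 1, 0], [0, 1, 0]]   # the third prediction is wrong
print(accuracy(preds, labels))               # 2 of 3 correct
```

`tf.equal(tf.argmax(model, 1), tf.argmax(y, 1))` performs the per-row comparison, and `tf.reduce_mean` of the cast booleans performs the averaging.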

To view the model, examine the biases and weights, and analyze the cost function, launch TensorBoard with the following command in the Windows command prompt:

tensorboard --logdir /tmp/Tensorflow/mnist/log

Navigate to the address TensorBoard prints in the console (by default http://localhost:6006) to see the model you created to recognize handwritten digits. Navigate to the different tabs to view the Scalars, Distributions, and Histograms.

If you do want to view the actual contents of the image data file, use the following code. A few steps are needed before execution can succeed. First, unzip the .gz files (keep both the unzipped and zipped versions). Check that the location and file names match those in the code. Then change the number in the code to the index of the image you would like to view. Lastly, ensure you have matplotlib by running the following command in the command prompt.

pip3 install --upgrade matplotlib

import struct

import numpy as np
import matplotlib.pyplot as plt

def read_image(file_name, idx_image):
    img_file = open(file_name, 'rb')

    magic_number =
    magic_number = struct.unpack('>i', magic_number)
    print('Magic Number: ' + str(magic_number[0]))

    data_type =
    data_type = struct.unpack('>i', data_type)
    print('Number of Images: ' + str(data_type[0]))

    dim =
    dimr = struct.unpack('>i', dim[0:4])
    dimr = dimr[0]
    print('Number of Rows: ' + str(dimr))
    dimc = struct.unpack('>i', dim[4:])
    dimc = dimc[0]
    print('Number of Columns: ' + str(dimc))

    image = np.ndarray(shape=(dimr, dimc))
    # Skip the 16-byte header plus one byte per pixel for each earlier image.
 + dimr * dimc * idx_image, 0)
    for row in range(dimr):
        for col in range(dimc):
            tmp_d =
            tmp_d = struct.unpack('>B', tmp_d)
            image[row, col] = tmp_d[0]

    img_file.close()
    return image

if __name__ == '__main__':
    # Extract the files from the .gz archives using a zip tool first.
    # Use train-images.idx3-ubyte or t10k-images.idx3-ubyte.
    # Change the number in the call below to an index between 0 and 9999
    # for the testing data file, or between 0 and 59999 for the training data file.
    image = read_image('/tmp/Tensorflow/mnist/input_data/t10k-images.idx3-ubyte', 0)
    img_plot = plt.imshow(image, 'Greys')

Run the code from a .py file in the Python IDLE and view the image at that index.

I hope that this demonstration went well and that you were able to learn more about AI and image recognition. There are many great demos in TensorFlow, as well as many great resources and repositories you should check out.
