# Getting Started with TensorFlow the Easy Way (Part 2)

*This is part 2 — “Variables and Placeholders in TensorFlow” of a series of articles on how to get started with TensorFlow.*

With deep learning frameworks popping up everywhere, most data scientists and researchers face the inevitable dilemma — which framework best suits their project?

TensorFlow is arguably the most preferred deep learning library for production, with huge community support and 112,180 stars on its GitHub repository, all within a span of 3 years.

Here’s a Google Trends comparison of TensorFlow vs. PyTorch over the past 5 years:

However, adoption of TensorFlow by the data science community tends to be slow because of its steep learning curve. The purpose of this series of articles is to make it easier for you to learn and use TensorFlow.

In this article (part 2 of the series), we will cover the important concepts of variables and placeholders, which act as the very base of model building. By the end of this article you will be able to declare and run TensorFlow sessions with variables and placeholders.

Before we actually get into discussing these terms, let’s get a quick recap of what we covered in the previous article:

- TensorFlow works on computational graphs which are a set of nodes
- These nodes collectively form operational flow for the graph
- Each node is an operation with some inputs that supplies an output after execution

I encourage you to refer to *Part 1: Tensorflow Installation and Setup, Syntax, and Graphs* for a detailed explanation.

# Variables

When we train a model, usually with scikit-learn, the weights and the biases get optimized when we call `model.fit()`. But in TensorFlow, you need to build the whole machine learning model yourself.

This is where **variables** come into play.

They are used to hold the values of the weights and the biases that are optimized during the model training process. However, the process is different from the traditional declaration of variables.

In TensorFlow, you have to initialize all the variables before you use them in your session.

Let us implement an example to build a better understanding of this. First import the TensorFlow library and then declare two variables:

```python
import tensorflow as tf

# Wrap the tensors in tf.Variable so they hold state across the session
first_var = tf.Variable(tf.random_uniform((4, 4), 0, 1))
second_var = tf.Variable(tf.ones((4, 4)))

print(first_var)
```

```
Out []: <tf.Variable 'Variable:0' shape=(4, 4) dtype=float32_ref>
```

Printing `first_var` gives you the `Variable` object itself, not a uniform array of shape `(4,4)`; the actual values do not exist until the variable is initialized and evaluated inside a session.

```python
# Always initialize before running a session
init = tf.global_variables_initializer()
```

Remember that this code block is very important, and yet easy to forget, when you design a network in TensorFlow.

```python
with tf.Session() as sess:
    init.run()
    print(first_var.eval())
```

```
Out []: [[0.6423092  0.5614004  0.53549814 0.5330738 ]
 [0.3521489  0.07537675 0.3189149  0.38606727]
 [0.29591668 0.30730367 0.1751138  0.741724  ]
 [0.48258722 0.33091295 0.5782666  0.7447115 ]]
```

Yay! Your first variable just got evaluated.

# Placeholders

As the name suggests, they are used to hold something in place. In our case, they are initially empty and are used to feed the training samples to the model. You need to set the datatype while declaring them.

```python
integer_placeholder = tf.placeholder(tf.int32)
float_placeholder = tf.placeholder(tf.float64)
```

Usually the shape of a placeholder is `(None, no_of_features)`. It might be confusing if you are seeing it for the first time, but the `None` actually makes sense: it is the dimension for the number of training instances we pass to our model. Our model should accept any number of instances for training, so this is not a fixed value.

The `no_of_features`, on the other hand, is a known value and thus should be represented accordingly for your model.

```python
train_data = tf.placeholder(tf.float32, shape=(None, 5))
```

Note that the placeholder shape for the `train_data` should also be the same for the `test_data`.

```python
test_data = tf.placeholder(tf.float32, shape=(None, 5))
```

## What datatype to choose?

Deciding on the datatype is crucial. It can affect both training time and the accuracy of the model. Usually, `float32` is a safe data type for both performance and accuracy. I prefer `float64` when accuracy and precision are of utmost importance.
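The trade-off is easy to see even outside TensorFlow: `float64` values take twice the memory of `float32`, which translates directly into more data movement and slower training for large tensors. A quick NumPy illustration:

```python
import numpy as np

# Same shape, different precision: 4 bytes vs. 8 bytes per value
a32 = np.zeros((1000, 1000), dtype=np.float32)
a64 = np.zeros((1000, 1000), dtype=np.float64)

print(a32.nbytes, a64.nbytes)  # 4000000 8000000
```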

# Implementing the Graphs and Placeholders

Let’s implement a dummy network by putting together the concepts of Graphs and Placeholders. We shall see how Variables play an important role in the next part of this series.

```python
import tensorflow as tf
import numpy as np

np.random.seed(13)
tf.set_random_seed(13)
```

If you want to reproduce the same output, then please set the random seed to `13` for both TensorFlow and NumPy.

Let us make two NumPy arrays that act as dummy data and weights for our example case:

```python
random_data = np.random.uniform(0, 100, (5, 5))
random_weights = np.random.uniform(0, 100, (5, 1))

print(random_data)
```

```
[[77.77024106 23.754122   82.42785327 96.5749198  97.26011139]
 [45.34492474 60.90424628 77.55265146 64.16133448 72.20182295]
 [ 3.50365241 29.84494709  5.85124919 85.70609426 37.28540279]
 [67.98479516 25.62799493 34.75812152  0.94127701 35.83337827]
 [94.90941817 21.78990091 31.93913664 91.7772386   3.19036664]]
```

```python
print(random_weights)
```

```
[[ 6.5084537 ]
 [62.98289991]
 [87.38134433]
 [ 0.87157323]
 [74.6577237 ]]
```

Note that `print` worked here because these are NumPy arrays and not Tensor objects. In a traditional machine learning case, the weights are assigned randomly and optimized via the optimizer, minimizing the error with respect to the cost function. We will be covering that in the next article.

Now, let’s code our placeholders and operations that will hold our data while running the session.

```python
a = tf.placeholder(tf.float32)
b = tf.placeholder(tf.float32)

add_operation = a + b
multiply_operation = a * b
```

Luckily, TensorFlow handles complex array operations with just the `+`, `-`, `*` and `/` operators.
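One detail worth noticing: `random_data` has shape `(5, 5)` while `random_weights` has shape `(5, 1)`, yet the elementwise operations still work. TensorFlow follows NumPy-style broadcasting, stretching the single column across all five columns. A small NumPy sketch of the same rule (with made-up values, not the article's data):

```python
import numpy as np

data = np.arange(6.0).reshape(2, 3)   # shape (2, 3)
weights = np.array([[10.0], [20.0]])  # shape (2, 1)

# The (2, 1) column is broadcast across all three columns of data
result = data + weights
print(result)
# [[10. 11. 12.]
#  [23. 24. 25.]]
```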

```python
with tf.Session() as sess:
    add_result = sess.run(add_operation,
                          feed_dict={a: random_data,
                                     b: random_weights})
    mult_result = sess.run(multiply_operation,
                           feed_dict={a: random_data,
                                      b: random_weights})

    print(add_result)
    print('\n')
    print(mult_result.round())
```

```
[[ 84.278694   30.262575   88.93631   103.083374  103.76856  ]
 [108.32782   123.887146  140.53555   127.144226  135.18472  ]
 [ 90.885     117.226295   93.2326    173.08743   124.66675  ]
 [ 68.85637    26.499567   35.629696    1.8128502  36.704952 ]
 [169.56714    96.447624  106.59686   166.43497    77.84809  ]]

[[5.060e+02 1.550e+02 5.360e+02 6.290e+02 6.330e+02]
 [2.856e+03 3.836e+03 4.884e+03 4.041e+03 4.547e+03]
 [3.060e+02 2.608e+03 5.110e+02 7.489e+03 3.258e+03]
 [5.900e+01 2.200e+01 3.000e+01 1.000e+00 3.100e+01]
 [7.086e+03 1.627e+03 2.385e+03 6.852e+03 2.380e+02]]
```

As you can see above, the addition and the multiplication operations have been successfully performed. In this implementation, there is no use of variables because we haven’t implemented any cost function and optimization.

In the next article, we will cover how to code a complete Linear Regression example in vanilla TensorFlow.

Feel free to reach out to me if you have any queries about implementing the above code in TensorFlow. Ensure you follow Analytics Vidhya and stay tuned for the upcoming parts of this series.

*Part 1: Tensorflow Installation and Setup, Syntax, and Graphs*

*Part 2: Variables and Placeholders in Tensorflow*

*Part 3: Implementing a Regression Example in Tensorflow (next up)*

*Part 4: Implementing a Classification in Tensorflow (coming soon)*