What's New in TensorFlow 2.0? A Quick Overview

Mubashar Nazar Awan
Nov 1 · 2 min read

TensorFlow was open-sourced in 2015 (with version 1.0 following in 2017), and earlier this year TensorFlow 2.0 was released. This article covers the basic differences between TensorFlow 1.x and TensorFlow 2.0.

First, let's look at how TensorFlow 1.x works.

TensorFlow is an open-source library for building complex deep learning models.

Before training the model, an abstract computation graph is built that contains only the structure of the model.

Placeholders are used to feed data into the graph, e.g. image pixels:

x = tf.placeholder(tf.float32, shape=[None, 1])

Variables are the parameters whose values are learned by training the model:

w = tf.Variable([1, 1], dtype=tf.float32)

To connect the layers, TensorFlow provides operations such as add, multiply, subtract, and reduce_sum:

prod = tf.add(tf.multiply(x, w), b)

The above code defines a small computation graph.

At this point, though, the graph is still empty of data. We need to start a session and feed data in to train our model:

sess = tf.Session()
init = tf.global_variables_initializer()
sess.run(init)
sess.run(cost, feed_dict={X: data, Y: labels})

But we are not done yet: to build models that actually learn, you need an optimizer to run backpropagation and update the weights.

optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate, name="optimizer").minimize(cost)
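Putting the pieces above together, a minimal end-to-end TensorFlow 1.x-style script might look like the sketch below. The toy data and the linear model are made up for illustration, and the `tf.compat.v1` namespace is used so the snippet also runs under a TensorFlow 2.x install:

```python
import numpy as np
import tensorflow as tf

# The TF1-style graph/session API is exposed as tf.compat.v1 in TensorFlow 2.x
tf1 = tf.compat.v1
tf1.disable_eager_execution()

# Placeholders for inputs and targets
X = tf1.placeholder(tf.float32, shape=[None, 1])
Y = tf1.placeholder(tf.float32, shape=[None, 1])

# Trainable parameters of a linear model: pred = X*W + b
W = tf.Variable([[0.0]], dtype=tf.float32)
b = tf.Variable([0.0], dtype=tf.float32)

pred = tf.add(tf.matmul(X, W), b)
cost = tf.reduce_mean(tf.square(pred - Y))

optimizer = tf1.train.AdamOptimizer(learning_rate=0.1).minimize(cost)

# Toy data drawn from y = 2x + 1
data = np.array([[1.0], [2.0], [3.0], [4.0]], dtype=np.float32)
labels = 2.0 * data + 1.0

with tf1.Session() as sess:
    sess.run(tf1.global_variables_initializer())
    for _ in range(500):
        _, c = sess.run([optimizer, cost], feed_dict={X: data, Y: labels})
```

Note how much ceremony is involved: placeholders, a session, an init op, and an explicit feed_dict on every step. This is exactly the boilerplate TensorFlow 2.0 removes.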

OK, so that was a quick tour of TensorFlow 1.x. Still, it had some issues, which led to the development of TensorFlow 2.0.

If you have worked in deep learning before, you may be familiar with Keras, an open-source high-level API that can use TensorFlow as its backend engine. Keras makes it much easier to build models than raw TensorFlow.

In recent years, much of the developer community started using Keras more than TensorFlow itself, so the TensorFlow team integrated Keras directly into TensorFlow (as tf.keras), which makes it very easy to use:

from tensorflow.keras.layers import Dense, Dropout, Flatten

model = tf.keras.models.Sequential()
model.add(Flatten(input_shape=(IMAGE_HEIGHT, IMAGE_WIDTH)))
model.add(Dense(units=32, activation='relu'))
model.add(Dropout(0.3))
model.add(Dense(units=32, activation='relu'))
model.add(Dense(units=10, activation='softmax'))
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
model.summary()

Now you don't need to create abstract graphs and run sessions to train a model. You just call the fit method, as in Keras and scikit-learn:

model.fit(x=trainData,
          y=trainLabels,
          epochs=2,
          validation_data=(testData, testLabels))
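Under the hood, TensorFlow 2.0 executes eagerly by default, so when you need more control than model.fit offers you can write the training step yourself with tf.GradientTape, with no graphs or sessions in sight. A minimal sketch (the single-weight model and toy data here are made up for illustration):

```python
import tensorflow as tf

# Toy data drawn from y = 3x
xs = tf.constant([[1.0], [2.0], [3.0]])
ys = 3.0 * xs

w = tf.Variable([[0.0]])  # single trainable weight
opt = tf.keras.optimizers.SGD(learning_rate=0.05)

for _ in range(200):
    # Record operations on the tape, then differentiate the loss
    with tf.GradientTape() as tape:
        loss = tf.reduce_mean(tf.square(tf.matmul(xs, w) - ys))
    grads = tape.gradient(loss, [w])
    opt.apply_gradients(zip(grads, [w]))

print(float(w[0, 0]))  # close to 3.0
```

Every operation runs immediately on real tensors, so you can inspect values with ordinary print statements and debuggers, which was one of the biggest pain points of the TensorFlow 1.x session model.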

TensorFlow 2.0 has made things much easier: developers can now focus on the details of building a model instead of the complex structure of TensorFlow 1.x.

OK, so this was just a quick run through TensorFlow and the new things added in TensorFlow 2.0. Thanks for reading.
