Linear Regression with TensorFlow

Jimmy Seow
2 min read · Oct 8, 2017

I’m learning linear regression and following this article https://medium.com/all-of-us-are-belong-to-machines/the-gentlest-introduction-to-tensorflow-248dc871a224

It is an excellent set of articles and everyone should read it.

The condensed code is:

import tensorflow as tf
import matplotlib.pyplot as plt
import numpy as np

# in a real situation, you should normalize the data first
x = tf.placeholder(tf.float32, [None, 1])   # second dimension is the number of features
W = tf.Variable(tf.zeros([1, 1]))           # shape is [number of features, number of outputs]
b = tf.Variable(tf.zeros([1]))              # one bias per output
y = tf.matmul(x, W) + b
y_ = tf.placeholder(tf.float32, [None, 1])  # second dimension is the number of outputs

loss = tf.reduce_sum(tf.pow(y_ - y, 2))
train_step = tf.train.GradientDescentOptimizer(0.0001).minimize(loss)
init = tf.global_variables_initializer()

with tf.Session() as sess:
    sess.run(init)
    for i in range(1000):
        # x_data = np.random.rand(100, 1)  # using minibatch/batch
        # y_data = x_data * 3.0 + 1.0
        x_data = np.array([[i]])           # using stochastic gradient descent
        y_data = np.array([[i * 2]])       # target relationship: y = 2x
        sess.run(train_step, feed_dict={x: x_data, y_: y_data})
        if i % 100 == 0:
            print("i %d" % i)
            print(sess.run(W))

If you want to use multiple features, adjust the shapes of x, W, and b to match; see the sketch below.
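For example, a two-feature version might look like this (a minimal sketch; the feature count of 2 is just for illustration):

n_features = 2                               # hypothetical feature count
x = tf.placeholder(tf.float32, [None, n_features])
W = tf.Variable(tf.zeros([n_features, 1]))   # [number of features, number of outputs]
b = tf.Variable(tf.zeros([1]))
y = tf.matmul(x, W) + b                      # y still has shape [None, 1]
y_ = tf.placeholder(tf.float32, [None, 1])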

When using stochastic gradient descent as above, though, the output keeps printing NaNs: the inputs grow up to 999 without normalization, so the squared-error loss and its gradients blow up. You need to play around with the learning rate and the number of epochs, normalize the data, or replace the GradientDescentOptimizer with another optimizer such as tf.train.AdagradOptimizer or tf.train.AdamOptimizer.
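For instance, swapping in Adam is a one-line change (a minimal sketch; the 0.01 learning rate is just a starting point to tune):

train_step = tf.train.AdamOptimizer(0.01).minimize(loss)  # replaces GradientDescentOptimizer

Scaling the inputs down, e.g. feeding x_data = np.array([[i / 1000.0]]) with the matching target, also keeps the squared-error gradients from exploding.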
