[Tensorflow] CH1: Getting Started With Tensorflow

PJ Wang · Published in CS Note · 3 min read · Apr 16, 2018

What is TensorFlow

TensorFlow™ is an open source software library for high performance numerical computation. Its flexible architecture allows easy deployment of computation across a variety of platforms (CPUs, GPUs, TPUs), and from desktops to clusters of servers to mobile and edge devices. Originally developed by researchers and engineers from the Google Brain team within Google’s AI organization, it comes with strong support for machine learning and deep learning and the flexible numerical computation core is used across many other scientific domains.

Create Tensor

Fixed Tensor

zero_tensor = tf.zeros([row, col])
ones_tensor = tf.ones([row, col])
fill_tensor = tf.fill([row, col], 42)
constant_tensor = tf.constant([1, 2, 3])
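
These examples follow the TensorFlow 1.x API used throughout this post. A minimal sketch that builds two of the fixed tensors above and evaluates them in a session; row and col are hypothetical dimensions:

import tensorflow as tf

row, col = 2, 3  # hypothetical dimensions
zero_tensor = tf.zeros([row, col])
fill_tensor = tf.fill([row, col], 42)

with tf.Session() as sess:
    print(sess.run(zero_tensor))  # 2x3 matrix of 0.0
    print(sess.run(fill_tensor))  # 2x3 matrix of 42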

Similar Shape

zeros_similar = tf.zeros_like(constant_tensor)
ones_similar = tf.ones_like(constant_tensor)

Sequence Tensor

# [0.0, 0.5, 1.0]
linear_tsr = tf.linspace(start=0.0, stop=1.0, num=3)
# [6, 9, 12]
integer_seq_tsr = tf.range(start=6, limit=15, delta=3)
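
A quick session check of both sequence ops (TF 1.x); note that tf.linspace needs floating-point endpoints and includes the stop value, while tf.range excludes its limit:

import tensorflow as tf

linear_tsr = tf.linspace(start=0.0, stop=1.0, num=3)
integer_seq_tsr = tf.range(start=6, limit=15, delta=3)

with tf.Session() as sess:
    print(sess.run(linear_tsr))       # [0.  0.5 1. ]
    print(sess.run(integer_seq_tsr))  # [ 6  9 12]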

Random Tensors

# uniform distribution
randunif_tsr = tf.random_uniform([row_dim, col_dim], minval=0, maxval=1)
# normal distribution
randnorm_tsr = tf.random_normal([row_dim, col_dim], mean=0.0, stddev=1.0)
# truncated normal: samples kept within two standard deviations of the mean
truncnorm_tsr = tf.truncated_normal([row_dim, col_dim], mean=0.0, stddev=1.0)
# randomizing entries
shuffled_output = tf.random_shuffle(input_tensor)
cropped_output = tf.random_crop(input_tensor, crop_size)
cropped_image = tf.random_crop(my_image, [height // 2, width // 2, 3])
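
Random ops draw fresh values on every run. A minimal sketch (TF 1.x) that sets a graph-level seed so results are reproducible across runs of the script; row_dim and col_dim are hypothetical:

import tensorflow as tf

tf.set_random_seed(42)  # graph-level seed for reproducible draws
row_dim, col_dim = 2, 2
randunif_tsr = tf.random_uniform([row_dim, col_dim], minval=0, maxval=1)

with tf.Session() as sess:
    print(sess.run(randunif_tsr))  # same matrix each time the script runs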

Placeholders and Variables

Placeholders

To feed data into the graph, first create a tf.placeholder; its values are supplied at run time through a feed_dict.

x = tf.placeholder(tf.float32, shape=[2,2])
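
A placeholder holds no data until you feed it. A minimal sketch (TF 1.x) that pipes a fed value through tf.identity, a stand-in op chosen here just for illustration:

import numpy as np
import tensorflow as tf

x = tf.placeholder(tf.float32, shape=[2, 2])
y = tf.identity(x)  # any op that consumes x

with tf.Session() as sess:
    print(sess.run(y, feed_dict={x: np.ones((2, 2))}))  # [[1. 1.] [1. 1.]]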

Variable

Unlike a constant or a placeholder, a variable is typically a parameter that the model trains and updates.

first_var = tf.Variable(tf.zeros([2,3]))
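
Variables must be initialized before they can be read; in TF 1.x this is done by running tf.global_variables_initializer():

import tensorflow as tf

first_var = tf.Variable(tf.zeros([2, 3]))
init = tf.global_variables_initializer()

with tf.Session() as sess:
    sess.run(init)              # assigns each variable its initial value
    print(sess.run(first_var))  # 2x3 matrix of 0.0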

Matrices

# create diagonal matrix
identity_matrix = tf.diag([1.0, 1.0, 1.0])
# convert the numpy array to tensor
tensor = tf.convert_to_tensor(np.array([[1., 2., 3.],[-3., -7., -1.],[0., 5., -2.]]))
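
The operations below refer to tensors A, B, and D. Hypothetical definitions in that spirit, so the later snippets have something concrete to act on:

import numpy as np
import tensorflow as tf

identity_matrix = tf.diag([1.0, 1.0, 1.0])  # 3x3 identity matrix
A = tf.truncated_normal([2, 3])             # random 2x3 matrix
B = tf.fill([2, 3], 5.0)                    # 2x3 matrix filled with 5.0
D = tf.convert_to_tensor(np.array([[1., 2., 3.],
                                   [-3., -7., -1.],
                                   [0., 5., -2.]]))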

Basic Operations

# Add
A + B
tf.add(A,B)
# Subtract
A - B
tf.subtract(A,B)
# Multiply
A * B
tf.multiply(A,B)
# Division
A / B
tf.div(A, B)      # truncated integer division when both inputs are integers
tf.truediv(A, B)  # true floating-point quotient regardless of input type
tf.floordiv(A, B) # floor of the quotient
tf.mod(A, B)      # remainder after division
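
The division variants differ mainly in how they treat integers; a quick comparison (TF 1.x):

import tensorflow as tf

with tf.Session() as sess:
    print(sess.run(tf.div(3, 4)))           # 0    (truncated integer division)
    print(sess.run(tf.truediv(3, 4)))       # 0.75 (true quotient)
    print(sess.run(tf.floordiv(3.0, 4.0)))  # 0.0  (floor of the quotient)
    print(sess.run(tf.mod(22.0, 5.0)))      # 2.0  (remainder)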

Advanced Operations

# Matrix product
tf.matmul(B, identity_matrix)
# Cross product
tf.cross(A,B)
# Transpose
tf.transpose(D)
# Determinant
tf.matrix_determinant(D)
# Inverse
tf.matrix_inverse(D)
# Cholesky decomposition
tf.cholesky(identity_matrix)
# Eigendecomposition: returns eigenvalues and eigenvectors as a pair
tf.self_adjoint_eig(D)
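
Evaluating a few of these on the matrix D defined earlier (TF 1.x); note that tf.self_adjoint_eig assumes a symmetric input:

with tf.Session() as sess:
    print(sess.run(tf.matrix_determinant(D)))  # -38.0
    print(sess.run(tf.matrix_inverse(D)))      # 3x3 inverse of D
    eigenvalues, eigenvectors = sess.run(tf.self_adjoint_eig(D))
    print(eigenvalues)                         # eigenvalues in ascending order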

Other Math Functions
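
Beyond the arithmetic above, TF 1.x provides many elementwise math functions; a small illustrative sample (the selection here is my own, not the post's original table):

tf.abs(x)        # absolute value
tf.square(x)     # x squared
tf.sqrt(x)       # square root
tf.exp(x)        # e^x
tf.log(x)        # natural logarithm
tf.maximum(x, y) # elementwise maximum
tf.minimum(x, y) # elementwise minimum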

Implementing Activation Functions


Relu

max(0, x)

tf.nn.relu([-3., 3., 10.])     # [0. 3. 10.]

Relu6

min(max(0, x), 6)

tf.nn.relu6([-3., 3., 10.])    # [0. 3. 6.]

Sigmoid

1 / (1 + exp(-x))

tf.nn.sigmoid([-1., 0., 1.])   # [0.26894143 0.5 0.7310586 ]

Tanh

(exp(x)−exp(-x))/(exp(x)+exp(-x))

tf.nn.tanh([-1., 0., 1.])      # [-0.76159418 0. 0.76159418]

Softsign

x / (1 + abs(x))

tf.nn.softsign([-1., 0., 1.])  # [-0.5 0. 0.5]

Softplus

log(1 + exp(x))

tf.nn.softplus([-1., 0., 1.])  # [0.31326166 0.69314718 1.31326163]

Exponential Linear Unit (ELU)

exp(x) − 1 if x < 0, else x.

tf.nn.elu([-1., 0., 1.])       # [-0.63212055 0. 1.]
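
A minimal sketch (TF 1.x) that runs all of the activations above on the same inputs, handy for a side-by-side comparison:

import tensorflow as tf

x = tf.constant([-1., 0., 1.])
activations = {
    'relu': tf.nn.relu(x),
    'relu6': tf.nn.relu6(x),
    'sigmoid': tf.nn.sigmoid(x),
    'tanh': tf.nn.tanh(x),
    'softsign': tf.nn.softsign(x),
    'softplus': tf.nn.softplus(x),
    'elu': tf.nn.elu(x),
}

with tf.Session() as sess:
    for name, op in activations.items():
        print(name, sess.run(op))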
