Notes on using TensorFlow

Li Yin
Machine Learning for Li
4 min read · May 29, 2018

These are personal notes on how to use TensorFlow on the server. A very good tutorial can be found here:

1. Virtual environment
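
A minimal sketch of creating and activating one with virtualenv (the environment name tf-env and its location are just placeholders):

pip install --user virtualenv
virtualenv ~/tf-env # create the environment
source ~/tf-env/bin/activate # activate it before installing packages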

2. nvidia-smi, which shows GPU utilization and memory usage
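
For example, to take one snapshot or keep it refreshing while a job runs:

nvidia-smi # one-time snapshot of GPU memory usage and utilization
watch -n 1 nvidia-smi # refresh the output every second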

3. Prerequisites

Actual packages:

sudo pip install numpy
sudo pip install scipy
pip install Pillow
pip install --upgrade tensorflow # or tensorflow-gpu for the GPU build
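
To check that the install worked, you can print the version from the command line:

python -c "import tensorflow as tf; print(tf.__version__)"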

Install Jupyter

pip install --upgrade jupyter # use pip or pip3 depending on your Python version

Edit the .bashrc file and then source it so the shell can find the Jupyter executable.

Use locate jupyter to find the directory where it was installed.

vi ~/.bashrc
export PATH=$PATH:/home/liyin/.local/bin # add this line for loading jupyter
source ~/.bashrc

Use jupyter remotely

jupyter notebook --no-browser --port=8889 #on the server side

The local machine needs Jupyter installed too! I use PuTTY because my machine runs Windows; before all this, make sure Python and pip are installed.

To get an administrator prompt on Windows, type "cmd" into the Start menu, right-click cmd.exe, and select "Run as Administrator".

On the local machine, set up the tunnel

Create a session in PuTTY and then select the Tunnels tab in the SSH section. In the Source port text box enter 3306; this is the port PuTTY will listen on on your local machine, and it can be any standard Windows-permitted port. In the Destination field immediately below Source port, enter 127.0.0.1:3306. This means: from the server, forward the connection to IP 127.0.0.1, port 3306. MySQL listens on port 3306 by default, and here we are connecting directly back to the server itself, i.e. 127.0.0.1. Another common scenario is to connect with PuTTY to an outward-facing firewall, in which case your Destination might be the private IP address of the database server.

For our Jupyter setup, enter 8889 as the Source port and localhost:8889 (instead of 127.0.0.1:3306) as the Destination.

Remember to hit the “Add” button. Now we just go to localhost:8889 in a browser and we can see our projects.
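
If an OpenSSH client is available instead of PuTTY (Linux, macOS, or recent Windows builds), the same tunnel is a single command; the user name and server address below are placeholders:

ssh -N -L 8889:localhost:8889 liyin@your-server-address # forward local port 8889 to port 8889 on the server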

4. Unzip files

For tar.gz

To unpack a tar.gz file, you can use the tar command from the shell. Here’s an example:

tar -xzf rebol.tar.gz # -x extract, -z go through gzip, -f read from the named archive file
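
A couple of related invocations that may be handy (the archive and directory names are placeholders):

tar -xzf rebol.tar.gz -C /path/to/dir # extract into a specific directory
tar -czf backup.tar.gz mydir/ # create a tar.gz from a directory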

5. Screen

It is good practice to start screen sessions with descriptive names so you can easily remember which process is running in the session. To create a new session with a session name, run the following command:

screen -S namescreen # start a new session named "namescreen"
screen -ls # list all screen sessions
screen # start an unnamed session

Detach from a screen session

To detach from the current screen session you can press ‘Ctrl-A‘ and ‘d‘ on your keyboard. All screen sessions will still be active and you can re-attach to them at any time later.

Reattach to a screen session

If you have detached from a session or your connection is interrupted for some reason, you can easily re-attach by executing the following command:

screen -r

If you have multiple screen sessions, you can list them with screen -ls:

screen -ls
There are screens on:
    7880.session (Detached)
    7934.session2 (Detached)
    7907.session1 (Detached)
3 Sockets in /var/run/screen/S-root.

In our example, we have three active screen sessions. So, if you want to restore the session ‘session2’ you can execute

screen -r 7934

or you can use the session name

screen -r session2

Terminate a screen session

There are several ways to terminate a screen session. From inside the session, you can press ‘Ctrl‘ + ‘d‘ on your keyboard or use the ‘exit’ command. To kill a detached session from outside:

screen -X -S [session # you want to kill] quit #kill the detached screen
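
For example, to kill session2 from the listing above:

screen -X -S 7934.session2 quit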

6. About TensorFlow

print the value

import tensorflow as tf
sess = tf.InteractiveSession() # use this to start a default session
x = [[1.,2.,1.],[1.,1.,1.]] # a 2D matrix as input to softmax
y = tf.nn.softmax(x) # this is the softmax function
# you can have anything you like here
u = y.eval()
print(u)
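
The same value can be printed with an ordinary Session instead of an InteractiveSession; this small variant reuses the y defined above:

with tf.Session() as sess:
    print(sess.run(y)) # evaluates the softmax and prints the result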

optimizer

The difference between apply_gradients and minimize of an optimizer in TensorFlow. For example:

optimizer = tf.train.AdamOptimizer(1e-3)
grads_and_vars = optimizer.compute_gradients(cnn.loss)
train_op = optimizer.apply_gradients(grads_and_vars, global_step=global_step)

and

optimizer = tf.train.AdamOptimizer(1e-3)
train_op = optimizer.minimize(cnn.loss, global_step=global_step)

You can easily see from the link https://www.tensorflow.org/get_started/get_started (the tf.train API part) that they actually do the same job. The difference is that if you use the separate functions (compute_gradients and apply_gradients), you can apply other mechanisms between them, such as gradient clipping.
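
A minimal sketch of that idea, clipping each gradient by norm between the two steps (the clip norm of 5.0 is an arbitrary choice; cnn.loss and global_step are the same objects as in the snippets above):

optimizer = tf.train.AdamOptimizer(1e-3)
grads_and_vars = optimizer.compute_gradients(cnn.loss)
clipped_grads_and_vars = [(tf.clip_by_norm(g, 5.0), v)
                          for g, v in grads_and_vars if g is not None]
train_op = optimizer.apply_gradients(clipped_grads_and_vars, global_step=global_step)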

placeholder

We need a way to replace X and y at every iteration with the next mini-batch. The simplest way to do this is to use placeholder nodes. These nodes don't actually perform any computation; they just output the data you tell them to output at runtime. They are typically used to pass the training data to TensorFlow during training.

A = tf.placeholder(tf.float32, shape=(None, 3)) # None means any number of rows, but there must be three columns
B = A + 5
# now feed data
with tf.Session() as sess:
    B_val_1 = B.eval(feed_dict={A: [[1, 2, 3]]})
    print(B_val_1) # [[6. 7. 8.]]
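
A minimal sketch of how placeholders are fed mini-batch by mini-batch in a training loop; the tiny linear model and the random data are made up just to keep the example self-contained:

import numpy as np
import tensorflow as tf

X = tf.placeholder(tf.float32, shape=(None, 3))
y = tf.placeholder(tf.float32, shape=(None, 1))
w = tf.Variable(tf.zeros([3, 1]))
loss = tf.reduce_mean(tf.square(tf.matmul(X, w) - y))
train_op = tf.train.GradientDescentOptimizer(0.1).minimize(loss)

X_data = np.random.rand(100, 3).astype(np.float32) # dummy training data
y_data = X_data.sum(axis=1, keepdims=True)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for start in range(0, 100, 10): # mini-batches of 10 rows
        X_batch = X_data[start:start + 10]
        y_batch = y_data[start:start + 10]
        sess.run(train_op, feed_dict={X: X_batch, y: y_batch})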

To visualize the latent variable Z

use

ls --full-time # to show the timestamps

FaceMatch

https://github.com/arunmandal53/facematch.git
