GPU on Keras and TensorFlow
Howdy curious folks!
Presenting this blog about how to use a GPU with Keras and TensorFlow. If you aren't very familiar with GPUs, I would recommend a quick read of Crux of GPU first.
Well, the GPU, which was earlier used mainly for gaming, is now in heavy use for Machine Learning and Deep Learning. Neural nets on TensorFlow or Keras practically demand a GPU. It is also surprising to note that these frameworks grab the whole GPU as soon as they are initialized. This can create problems in a multi-user environment setup. No worries at all! This blog has the solution.
But before jumping to it, let's understand how the GPU is used in TensorFlow and Keras.
TensorFlow on GPU
By default, TensorFlow allocates the whole GPU memory as soon as it launches. This can lead to various problems.
Problem: We can't see the actual GPU usage of each process. That is worrisome in a multi-user environment setup, and alarming when one GPU is accessed by multiple users at a time.
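A quick aside: on a shared machine, each user can also pin their process to a specific card using the standard CUDA_VISIBLE_DEVICES environment variable, so two users never fight over the same GPU. A minimal sketch (the device index 0 is just an example):

```python
import os

# Must be set before TensorFlow is imported (or at least before the first
# GPU operation runs). "0" exposes only the first GPU to this process;
# an empty string would hide all GPUs.
os.environ["CUDA_VISIBLE_DEVICES"] = "0"

# import tensorflow as tf  # TensorFlow would now see only GPU 0
```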
Solution I:
This is applicable in a multi-user environment. When it is not possible to specify the exact amount of GPU memory a process needs, allow_growth comes into the picture: it enables runtime allocation of memory. Setting it to True means TensorFlow starts by allocating very little memory and gradually allocates more regions as the process requires them.
config = tf.ConfigProto()
config.gpu_options.allow_growth = True
sess = tf.Session(config=config)
OR
gpus = tf.config.experimental.list_physical_devices('GPU')
for gpu in gpus:
    tf.config.experimental.set_memory_growth(gpu, True)
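From TensorFlow 1.14 onward there is also an environment variable, TF_FORCE_GPU_ALLOW_GROWTH, that enables the same behaviour without touching the session config. A sketch (it must be set before TensorFlow initializes the GPU):

```python
import os

# Equivalent to allow_growth=True; set it before the first GPU op runs.
os.environ["TF_FORCE_GPU_ALLOW_GROWTH"] = "true"
```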
Solution II:
This solution can be applied when you are sure about the memory consumption of your process. Instead of dynamic GPU allocation, a fixed memory allocation can be done by specifying the fraction of memory needed using per_process_gpu_memory_fraction.
config = tf.ConfigProto()
config.gpu_options.per_process_gpu_memory_fraction = 0.4
sess = tf.Session(config=config)
The above allocates a fixed fraction of the memory of each visible GPU. For instance, the above allocates 40% of each GPU to the process.
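To pick the fraction, divide the memory you want to reserve by the card's total memory. A sketch with hypothetical numbers (a 16 GiB card, capping the process at roughly 6.5 GiB):

```python
total_mib = 16384   # hypothetical total GPU memory, in MiB
budget_mib = 6554   # hypothetical per-process budget, in MiB

fraction = round(budget_mib / total_mib, 2)
print(fraction)  # -> 0.4
```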
Keras on GPU
In Keras, with the TensorFlow or CNTK backend, code automatically runs on a GPU if one is detected, while the Theano backend needs a customized function. The Keras documentation is pretty awesome and reader-friendly. Visit the following FAQ entry for more details.
https://keras.io/getting-started/faq/#how-can-i-run-a-keras-model-on-multiple-gpus
FAQ
Q. How to check which GPUs are in use, and the GPU utilization of each process?
There is an NVIDIA management and monitoring command-line utility called "nvidia-smi". Visit the explained output of the nvidia-smi utility for more detail.
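nvidia-smi also has a scriptable query mode (--query-gpu with --format=csv), which is handy for monitoring from scripts. A sketch that parses one such CSV line; the sample values are hypothetical:

```python
# One line as produced by (run on a machine with an NVIDIA GPU):
#   nvidia-smi --query-gpu=index,memory.used,memory.total --format=csv,noheader,nounits
sample = "0, 3215, 16280"  # hypothetical values, memory in MiB

index, used_mib, total_mib = (int(field) for field in sample.split(", "))
print(f"GPU {index}: {used_mib}/{total_mib} MiB ({used_mib / total_mib:.0%} used)")
# -> GPU 0: 3215/16280 MiB (20% used)
```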
Q. How to check if GPU exists in your system?
tf.test.is_gpu_available()
Q. How to list all physical devices available to TensorFlow?
from tensorflow.python.client import device_lib
print(device_lib.list_local_devices())
Q. How to list all GPUs available to TensorFlow?
all_gpus = tf.config.experimental.list_physical_devices("GPU")
for gpu in all_gpus:
    print("Name:", gpu.name, "Type:", gpu.device_type)
OR
if tf.test.gpu_device_name():
    print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))
else:
    print("Please install the GPU version of TF")
Q. How to verify if GPU is available to Keras?
from keras import backend as K
K.tensorflow_backend._get_available_gpus()
Thanks,
Happy Reading!
You can get in touch via LinkedIn.