Run MNIST on the GPU with MXNet

Let’s explore the world of deep learning with a first example: the MNIST problem. Here we pick the MXNet framework to get started.

Environment

OS: Ubuntu 14.04

CPU: Intel i7-3770

GPU: Nvidia GeForce GT 640

Disk: 1TB SATA

RAM: 16GB

Install Basic Tools

sudo apt-get update
sudo apt-get install -y build-essential git libblas-dev libopencv-dev

Install the CUDA Toolkit

Download the CUDA toolkit:
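
The .deb network installer can be fetched from Nvidia’s repository; the URL below is the one commonly used for CUDA 7.5 on Ubuntu 14.04, but double-check it against Nvidia’s download page for your setup:

wget http://developer.download.nvidia.com/compute/cuda/repos/ubuntu1404/x86_64/cuda-repo-ubuntu1404_7.5-18_amd64.deb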

Then install CUDA from the .deb file:

sudo dpkg -i cuda-repo-ubuntu1404_7.5-18_amd64.deb
sudo apt-get update
sudo apt-get install cuda

Since the CUDA toolkit includes an Nvidia driver, we can verify the installation with the nvidia-smi command:
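
nvidia-smi

The output should list the installed driver version and the detected GPU (a GeForce GT 640 in our case).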

If nvidia-smi fails or the reported driver version is not what you expect, follow the next section to install the Nvidia driver manually.

Manually Install the Nvidia Driver (Optional)

First we need to select the appropriate version of the Nvidia driver; here we use the nvidia-375 package.
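
If you are unsure which driver package matches your card, the ubuntu-drivers tool (from the ubuntu-drivers-common package) can suggest one; the recommended package may differ on your machine:

ubuntu-drivers devices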

Press Ctrl + Alt + F1 to enter the tty1 terminal, then stop the X server temporarily:

sudo service lightdm stop

Download and install the Nvidia driver (the exact package depends on your hardware):

sudo add-apt-repository ppa:graphics-drivers/ppa
sudo apt update
sudo apt install nvidia-375

Restart the X server:

sudo service lightdm restart

Download and Compile MXNet

Download MXNet and copy config.mk to the root of the project:

git clone --recursive https://github.com/dmlc/mxnet
cd mxnet
cp make/config.mk .

Edit config.mk as follows; note that your CUDA path may be /usr/local/cuda-7.5 or similar rather than /usr/local/cuda:

USE_CUDA = 1
USE_CUDA_PATH = /usr/local/cuda
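
If cuDNN is installed on the machine, it can optionally be enabled in the same file (this assumes you have already set up cuDNN separately):

USE_CUDNN = 1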

Then compile MXNet:

make -j4
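
If the build succeeds, the shared library libmxnet.so should appear under the lib/ directory of the repository; a quick check:

ls lib/libmxnet.so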

Since we want to run the MNIST example with Python, install mxnet as a local Python library:

cd python
python setup.py install
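
To confirm that the GPU build is usable from Python, a quick sanity check is to allocate a small array on GPU 0 (this assumes the CUDA driver is working):

python -c "import mxnet as mx; print(mx.nd.ones((2, 3), mx.gpu(0)).asnumpy())"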

Run MNIST Example

Run the default (CPU) version of the MNIST example:

cd example/image-classification
python train_mnist.py

We find that one epoch takes about 3.197 seconds when running on the CPU.

Now let’s try it on the GPU, using device 0:

cd example/image-classification
python train_mnist.py --gpus=0

You will find that the time per epoch is only slightly reduced.
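
If the machine has more than one GPU, the example script also accepts a comma-separated list of device ids, for example:

python train_mnist.py --gpus=0,1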