Let’s explore the world of deep learning with a first example: the MNIST problem. Here we pick the MXNET framework to start.


OS: Ubuntu 14.04

CPU: Intel i7–3770

GPU: Nvidia GeForce GT 640

Disk: 1TB SATA


Install Prerequisite Tools

sudo apt-get update
sudo apt-get install -y build-essential git libblas-dev libopencv-dev
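Before moving on, it’s worth confirming the toolchain actually landed. A small sketch; the `check_tools` helper here is just an illustration, not part of any MXNET script:

```shell
# Report any tools from the argument list that are not on PATH.
check_tools() {
    missing=""
    for tool in "$@"; do
        command -v "$tool" >/dev/null 2>&1 || missing="$missing $tool"
    done
    echo "missing:$missing"
}

# build-essential provides gcc and make; git was installed explicitly.
check_tools gcc make git
```

If the last command prints anything after `missing:`, re-run the apt-get install step above.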

Install CUDA toolkit

Download the CUDA toolkit:
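For this release the repository package can be fetched directly. The URL below follows Nvidia’s usual repository layout and is an assumption on my part, so verify it against the CUDA downloads page before use:

```shell
# Assumed repository layout -- confirm on Nvidia's CUDA downloads page.
base="http://developer.download.nvidia.com/compute/cuda/repos/ubuntu1404/x86_64"
pkg="cuda-repo-ubuntu1404_7.5-18_amd64.deb"
echo "$base/$pkg"   # pass this URL to wget
```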

Then install CUDA from the .deb file:

sudo dpkg -i cuda-repo-ubuntu1404_7.5-18_amd64.deb
sudo apt-get update
sudo apt-get install cuda

Since the CUDA toolkit includes an Nvidia driver, we can verify the installation with the nvidia-smi command:
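The check amounts to reading the driver version out of nvidia-smi’s banner. A sketch that parses a captured sample of the output; the sample text and version number below are illustrative, not taken from this machine:

```shell
# Illustrative first lines of `nvidia-smi` output (not a real capture).
sample='+------------------------------------------------------+
| NVIDIA-SMI 352.63     Driver Version: 352.63         |'

# Pull out the reported driver version (6th whitespace field of that line).
version=$(printf '%s\n' "$sample" | awk '/Driver Version/ {print $6}')
echo "driver version: $version"
```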

If the driver version doesn’t match, follow the next section to install the Nvidia driver manually.

Manually Install Nvidia Driver (Optional)

First we need to select the appropriate version of the Nvidia driver; here we use nvidia-375.

Press Ctrl + Alt + F1 to enter the tty1 terminal, then turn off the X server temporarily:

sudo service lightdm stop

Download and install the Nvidia driver (the exact version depends on your hardware):

sudo add-apt-repository ppa:graphics-drivers/ppa
sudo apt update
sudo apt install nvidia-375

Restart the X server:

sudo service lightdm restart

Download and Compile MXNET

Download the MXNET source and copy the build configuration to the project root:

git clone --recursive https://github.com/dmlc/mxnet.git
cd mxnet
cp make/config.mk .

Edit config.mk as follows; note that /usr/local/cuda may instead be something like /usr/local/cuda-7.5:

USE_CUDA = 1
USE_CUDA_PATH = /usr/local/cuda
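If you prefer to script the edit, it can be applied with sed. A sketch on a stand-in config.mk, assuming the stock template ships with USE_CUDA = 0 and USE_CUDA_PATH = NONE; note the GPU build needs USE_CUDA = 1 as well as the path:

```shell
# Stand-in for the stock make/config.mk (only the two relevant lines).
printf 'USE_CUDA = 0\nUSE_CUDA_PATH = NONE\n' > config.mk

# Enable the CUDA build and point it at the toolkit install.
sed -i 's|^USE_CUDA = 0|USE_CUDA = 1|' config.mk
sed -i 's|^USE_CUDA_PATH = NONE|USE_CUDA_PATH = /usr/local/cuda|' config.mk

cat config.mk
```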

Then compile the MXNET:

make -j4

Since we want to run the MNIST sample with Python, we can install MXNET as a local Python library:

cd python
python setup.py install

Run MNIST Example

Run the default (CPU) version of the MNIST example:

cd example/image-classification
python train_mnist.py

We find that one epoch takes 3.197 seconds when running on the CPU.

Now let’s try the GPU, using device 0:

cd example/image-classification
python train_mnist.py --gpus=0

You will find that the time per epoch is reduced slightly.
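To put the comparison in numbers, the speedup is just the ratio of epoch times. A sketch using the measured 3.197 s CPU epoch; the 2.8 s GPU figure is a made-up placeholder for illustration, not a measurement from this machine:

```shell
cpu=3.197   # measured seconds per epoch on the CPU (from the run above)
gpu=2.8     # hypothetical GPU seconds per epoch (placeholder, not measured)

# Speedup = CPU epoch time / GPU epoch time.
speedup=$(awk -v c="$cpu" -v g="$gpu" 'BEGIN {printf "%.2f", c / g}')
echo "speedup: ${speedup}x"
```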