Another precursor to TensorFlow, with CUDA and an IPython notebook.

I managed to get TensorFlow running on the cloud (an AWS g2.8xlarge instance running Ubuntu 14.04). It had completely slipped my mind that one of the things that excited me about TensorFlow is that it abstracts the computational layer into ‘sessions’. This basically means that as long as you have the appropriate drivers installed, you’re good to go: TensorFlow figures out what hardware you’re working with and makes the most of it. At least that’s how it seems so far; I’m sure there’s plenty of scope for tailoring things to individual needs, but I won’t know for sure until I dive into the source.
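To give a flavour of what I mean by the session abstraction, here’s a minimal sketch (not my actual notebook, just the standard pattern): you build a graph, open a session, and TensorFlow decides where each op runs, logging the placement if you ask it to.

```python
import tensorflow as tf

# Build a tiny graph; TensorFlow places the ops on a GPU if the CUDA drivers
# and a supported device are present, otherwise it falls back to the CPU.
a = tf.constant([[1.0, 2.0], [3.0, 4.0]])
b = tf.constant([[5.0, 6.0], [7.0, 8.0]])
c = tf.matmul(a, b)

# log_device_placement=True prints which device each op actually landed on.
with tf.Session(config=tf.ConfigProto(log_device_placement=True)) as sess:
    print(sess.run(c))
```

Run the same script on the laptop and on the g2.8xlarge and the only thing that changes is what the placement log says.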

IPython (Jupyter, really) notebook: a browser-based interface that lets you run your code from the comfort of your web browser.

Anyway, all I really did today was install TensorFlow on an AWS instance. After running one of the convolutional net demos and seeing how fast the training went, I thought I’d try it out on my simple case study of the Mandelbrot set. Needless to say, it put my 2.9 GHz Intel Core i5 MacBook Pro to shame. I’ll go into greater detail another time; for now, here’s another pretty picture, considerably more detailed this time.
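The Mandelbrot experiment itself is just the escape-time iteration expressed as TensorFlow ops, roughly along these lines (a sketch rather than the exact code I ran; the grid bounds, resolution and iteration count are arbitrary):

```python
import numpy as np
import tensorflow as tf

# Grid of complex points c covering the region of interest.
Y, X = np.mgrid[-1.3:1.3:0.005, -2.0:1.0:0.005]
c = tf.constant((X + 1j * Y).astype(np.complex64))

# z holds the current iterate; steps counts how long each point stays bounded.
z = tf.Variable(c)
steps = tf.Variable(tf.zeros_like(c, tf.float32))

# One escape-time step: z -> z^2 + c, bump the counter where |z| is still small.
z_next = z * z + c
not_diverged = tf.abs(z_next) < 4
step = tf.group(z.assign(z_next),
                steps.assign_add(tf.cast(not_diverged, tf.float32)))

with tf.Session() as sess:
    sess.run(tf.initialize_all_variables())
    for _ in range(200):
        sess.run(step)
    counts = sess.run(steps)  # colour-map this to get the pretty picture
```

Every iteration is a single GPU kernel launch over the whole grid, which is why cranking up the resolution barely slows it down compared with the laptop.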

You can try out the setup by starting a GPU instance using the “TensorFlow GPU” AMI — look for it in the community AMIs.
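If you’d rather script the launch than click through the console, something like this boto3 sketch should do it; I’m assuming the AMI is findable by that name, and the key pair and security group names below are placeholders.

```python
import boto3

# Hypothetical launch script: find the community "TensorFlow GPU" AMI and
# start a g2.8xlarge from it. Replace the key pair and security group
# with your own.
ec2 = boto3.client("ec2", region_name="us-east-1")

images = ec2.describe_images(
    ExecutableUsers=["all"],  # community (publicly launchable) AMIs
    Filters=[{"Name": "name", "Values": ["*TensorFlow GPU*"]}],
)["Images"]

ami_id = images[0]["ImageId"]
ec2.run_instances(
    ImageId=ami_id,
    InstanceType="g2.8xlarge",
    MinCount=1,
    MaxCount=1,
    KeyName="my-key-pair",            # placeholder
    SecurityGroups=["my-security-group"],  # placeholder
)
```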

Also, here’s a quick step-by-step guide to getting an IPython notebook server running on your instance: https://gist.github.com/randyzwitch/7590335#file-ipython-notebook-ec2-py
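The gist boils down to creating a notebook server profile and pointing it at the outside world; the config ends up looking roughly like this (the values here are placeholders, so see the gist for the full walkthrough):

```python
# Rough shape of ~/.ipython/profile_nbserver/ipython_notebook_config.py
# after following a guide like the one linked above.
c = get_config()

c.NotebookApp.ip = '*'                # listen on all interfaces, not just localhost
c.NotebookApp.open_browser = False    # there's no browser on the server
c.NotebookApp.port = 8888
c.NotebookApp.password = u'sha1:...'  # placeholder; generate with IPython.lib.passwd()
```

With that in place (and port 8888 open in the security group), you point your local browser at the instance’s public DNS and work in the notebook as if it were running locally.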