Neural Network in a Week

Makers Week 9: Practice Projects, Neural Networks, and Very Pretty Pixels

Naz M
9 min read · May 17, 2017
A ‘brainbow’ of neurones in the hippocampus of a mouse, made fluorescent (Lichtman, 2009)

Welcome to my blog about learning to code at Makers Academy. If you missed the last post, you can find it here.

A spot of introspection

At some point this week a sort of profound fatigue set in.

Chilled evenings and early nights just don’t seem to shake off the tiredness. I haven’t really had a day without coding since February, including weekends, and the accumulated mental exercise is hitting hard.

The problem is that I cannot stop. I tried to take the weekend off and ended up building a portfolio website without realising it, in front of the TV, in my pyjamas. Every day this week I’ve been in the office till it closes. I no longer have the energy to see friends in the evenings or do anything other than lie horizontal on the sofa, watching mild TV.

It all really affirms that I’m on the right track and that coding is for me. It’s like I’m trying so hard but I’m not even trying to try.

Practice projects

This week was a dress rehearsal for weeks 11 and 12. We had 4 days and 3 hours to crack out a minimum viable product (MVP) for any project we could dream up.

The idea-pitching process was pretty impressive. I loved the display of creativity and the breadth of topics that the cohort came up with. In total, we generated around 50 ideas.

Six were chosen, to be tackled in groups of 4:

  • A multiplayer online game similar to Snake
  • “Beetroute”, a web app using the Spotify API to generate a playlist of songs from a chosen country
  • “Invading the Invaders”, a project to hack into Space Invaders and mess with the code
  • An operating system built on a Raspberry Pi (!)
  • “Locate my Train”, a web app for people on trains, using geolocation data to estimate their train’s arrival time
  • A neural network trained to identify trees by the shape of their leaves

We were told at the end of the week that these were the most ambitious set of practice projects ever taken on at Makers.

I was stoked to be working on the neural network, and also super excited that one of my ideas, “Beetroute”, was getting made by another group. To be honest, I would have happily worked on any of them.

I don’t currently have the energy to write about the results of everyone else’s projects, but they all achieved mighty impressive MVPs, dove into wildly unfamiliar technologies (including Assembly, the human-readable language that sits just above a computer’s binary machine code), and gave awesome presentations at the end of the week.

My experiences are as follows…

What the deuce is a Neural Network anyway?

A neural network (or, if you’re a hip computer scientist, neural net) is a type of machine learning model inspired by the relationships between neurones in the human brain. Their potential is pretty astounding.

Right now neural networks lie at the heart of a large amount of groundbreaking artificial intelligence. Self-driving cars and the speech recognition behind Amazon’s Alexa are two huge examples.

The essential idea is that you ‘train’ the network on a dataset of examples paired with labels (a setup often called ‘supervised learning’). By iterating through the dataset over and over, the network attempts to discern the relationships between the data and the labels, working out how wrong it was on each attempt and then adjusting its data-to-label algorithm accordingly.
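
To make that loop concrete, here’s a toy sketch in plain NumPy (illustrative only, not code from our project): a single ‘neurone’ guesses labels for a tiny dataset, measures how wrong it was, and nudges its weights accordingly.

```python
# A toy sketch of the train/measure/adjust loop in plain NumPy
# (illustrative only, not code from our project).
import numpy as np

# Four labelled examples: two features each, and a 0/1 label.
data = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
labels = np.array([0, 0, 0, 1], dtype=float)

weights = np.zeros(2)   # the 'data-to-label algorithm' the network keeps adjusting
bias = 0.0
learning_rate = 0.1

for epoch in range(1000):                                    # iterate over the dataset over and over
    guesses = 1 / (1 + np.exp(-(data @ weights + bias)))     # sigmoid predictions
    error = guesses - labels                                 # how wrong was each guess?
    weights -= learning_rate * data.T @ error                # nudge the weights to be less wrong
    bias -= learning_rate * error.sum()

print(np.round(guesses, 2))   # predictions drift towards the true labels [0, 0, 0, 1]
```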

The crazy thing is how good they can get at it and the variety of data you can train them with.

Here’s an awesome example of a neural net that the warlocks at Google trained on a ton of sci-fi books. It learnt that the books’ streams of characters break up at regular intervals into words. It learnt that there is speech. It learnt grammar and punctuation. Eventually, it was trained well enough to produce an eerie screenplay with a very sci-fi-y feel.

And here it is, acted out to the word.

And just for good measure, a computer-written song based on the music of John, Paul, George & Ringo. Like it or not, you can’t deny it’s Beatles-y.

Building one

The first serious bit of research involved attending a conference at London’s SkillsMatter (who host free tech talks all the time) with teammate Emily Chai. A lot of it presumed the audience had some familiarity with neural nets, but I felt I knew just enough to learn a bit. A recording of the talk is linked below.

After the talk we cornered the speaker, Willem Hendriks, to ask his advice for tackling a network in a week. He was super friendly and had buckets of time for us two deep learning newbies.

His main advice was to dive into the topic and get something working, then move on to tinkering with it and understanding the ins-and-outs. He freely admitted that everyone in Machine Learning has an exploratory approach to the subject, using trial-and-error in designing their own neural networks.

Once someone in the field discovers that, say, a particular number of convolutional layers organised in a particular way are good for a type of image recognition, everyone else starts copying them and riffing off that architecture until a better one comes along. There’s a huge amount of trial-and-error in configuring neural nets.

As a result, I felt like we should have no fear charging into the topic, as no-one really knows what they’re doing anyway.

“Sometimes science is more art than science. A lot of people don’t get that”

— Rick C-137

When it comes to optimising networks, the devil is in the details, and you can spend hours playing with the number of layers in your network and the number of ‘nodes’ in each layer. Here’s a code-less neural network playground where you can do exactly that. It’s quite mesmerising.
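
To give a flavour of that tinkering in code, here’s a sketch using the tf.keras API (not something we used on the project) where the number of layers and the nodes in each one are just parameters you pass in:

```python
# A sketch (tf.keras, not our original code) of how layer count and node count
# are just parameters you can fiddle with.
import tensorflow as tf

def build_network(hidden_layer_sizes):
    """One Dense layer per entry in hidden_layer_sizes, then 10 output nodes."""
    model = tf.keras.Sequential([tf.keras.Input(shape=(28, 28)), tf.keras.layers.Flatten()])
    for nodes in hidden_layer_sizes:
        model.add(tf.keras.layers.Dense(nodes, activation="relu"))
    model.add(tf.keras.layers.Dense(10, activation="softmax"))
    return model

# Two layers of 64 nodes? Five layers of 16? Change the list and see what happens.
model = build_network([64, 64])
```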

We then set aside a day and a half to learn as much as we could about neural nets and how to build them. The ‘Machine Learning is Fun’ series of articles by Adam Geitgey (linked below) is hands down the best introduction to the topic you’ll find.

It was great to have such a welcoming introduction to build our respective trees of knowledge, especially as so many machine learning resources out there seem crushingly heavy and intimidating.

Guided by Willem’s recommendation, we then moved on to learning about TensorFlow, Google’s neural network library for Python. That also meant we had to learn Python, but I figured that wouldn’t be a big deal.

All 4 of us tackled the installation and setup slowly, which turned out to be a good move in these unfamiliar waters.

After that was cracked, we spent some time completing the excellent tutorials on the TensorFlow site to get an understanding of how to build a neural net.

By Wednesday evening we had a functional neural network for recognising handwritten digits using two technologies we’d not touched a few days before. It had one layer and its success rate was around 91%, which we learnt is pretty embarrassing in the world of neural nets.
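
For flavour, here’s what that first attempt looks like as a sketch, written with today’s tf.keras API rather than the 2017 tutorial code we actually followed:

```python
# A one-layer network for handwritten digits (MNIST), sketched in tf.keras.
import tensorflow as tf

(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0        # scale pixel values to 0-1

model = tf.keras.Sequential([
    tf.keras.Input(shape=(28, 28)),
    tf.keras.layers.Flatten(),                           # 28x28 image -> 784 pixel values
    tf.keras.layers.Dense(10, activation="softmax"),     # one layer: pixels -> 10 digit scores
])

model.compile(optimizer="sgd",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, epochs=5)
model.evaluate(x_test, y_test)    # accuracy lands in the low nineties
```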

Later, after learning how to implement convolutional layers (for taking snapshots of the constituent parts of the image), max-pooling (for passing on the most ‘interesting’ parts of each snapshot), and dropout (randomly leaving out data between iterations to reduce overfitting of the model), our success rate was up at a much more respectable 96.54%. Felt like Zeus.
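
Sketched in the same tf.keras style (the layer sizes here are illustrative, not necessarily the ones we ended up with), the upgraded architecture looks roughly like this:

```python
# Convolutional layers, max-pooling and dropout, sketched in tf.keras.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.Input(shape=(28, 28, 1)),
    tf.keras.layers.Conv2D(32, (5, 5), activation="relu"),   # snapshots of image patches
    tf.keras.layers.MaxPooling2D((2, 2)),                    # keep the most 'interesting' bits
    tf.keras.layers.Conv2D(64, (5, 5), activation="relu"),
    tf.keras.layers.MaxPooling2D((2, 2)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dropout(0.5),                            # randomly drop data to curb overfitting
    tf.keras.layers.Dense(10, activation="softmax"),
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```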

Then, on Thursday, something magical happened.

My pair partner for the day, Stephanie Crampin, and I were working on feeding our neural net a single new piece of data. We knew it worked on the testing set of handwritten digits that the TensorFlow tutorial had supplied us, but we wanted to feed it our own stuff.

So we downloaded an image of a handwritten number, reduced it to the right size for our model, and got to work doing some gymnastics with our new friend Python, turning the picture into an array of pixels that could be used with the weights we had managed to extract from our neural net.
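
The gymnastics look roughly like this; a sketch using Pillow and NumPy, with ‘my_digit.png’ as a made-up filename:

```python
# Shrink a photo of a handwritten digit down to the 28x28 greyscale array
# the network expects. 'my_digit.png' is a hypothetical filename.
import numpy as np
from PIL import Image

image = Image.open("my_digit.png").convert("L")       # greyscale
image = image.resize((28, 28))                        # same size as the training images

pixels = np.asarray(image, dtype=np.float32) / 255.0  # scale pixel values to 0-1
pixels = 1.0 - pixels        # MNIST digits are light-on-dark; most photos are the opposite
flat = pixels.reshape(1, 784)   # one row of 784 values, ready to feed to the network
```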

But it just wasn’t working, so I had the idea of trying to visualise our arrays of pixels and pixel weights to see if we were breaking down our data in the correct way.

We slaved for a couple more hours, trying to get Python to give us something image-related, before generating this.

It was a breakthrough, but it baffled the hell out of us. That was until we realised that we’d got our image-generating function mixed up. So we struggled a little longer trying to fix that.

Then, all of a sudden, we got this, and I became the happiest man in East London.

Then Steph put colours on it, and I nearly exploded.

The reason I was so excited was that the above image represents what our algorithm has decided a handwritten zero looks like. It doesn’t have any idea of the concept of a zero, or of a number, or of maths. It’s just learnt from the pictures it’s seen that a hit in the bluest pixels makes a zero a likely outcome, and a hit in the reddest pixels makes a zero an unlikely outcome.
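
Here’s roughly how you’d paint that kind of picture; a sketch where the weights array is a random stand-in for the real ones we pulled out of the network:

```python
# Turn one digit's learnt weights back into a 28x28 picture.
import numpy as np
import matplotlib.pyplot as plt

# Stand-in data: in practice this would be the 784x10 weight matrix
# extracted from the trained network, one column per digit.
weights = np.random.randn(784, 10)

zero_weights = weights[:, 0].reshape(28, 28)   # the weights that vote for the digit '0'
plt.imshow(zero_weights, cmap="bwr_r")         # blue = evidence for a zero, red = evidence against
plt.colorbar()
plt.show()
```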

And through the power of code, we extracted that computer-generated maths and spun it up into a computer-generated image.

I think we made something pretty beautiful.

We then tried to feed our network a dataset of leaves, but unfortunately, we ran out of time. It turns out preparing a dataset is not a task to be sneezed at. I’ll pop it on my ever-ballooning to-do list. We did, however, manage to feed the network our own single numbers, which it successfully managed to classify live in front of the rest of the cohort.

Overall a hugely fun and inspiring week, and I now feel acquainted with Python, which is a nice thing to have under my belt.

Here’s some more of our network’s dreamt-up numbers:

So dope.

TODAY’S JAM

Plantasia is this delightfully weird album created to be played to plants to help them grow. It has this 60s kids TV show/budget sci-fi vibe. I kind of love it. If you don’t love it, it’s because I’m more like a plant than you are.

If you read past the tapir, chances are you read the whole thing. If you enjoyed it, please click the little heart below. It’ll help others find this and enjoy it too.
