M2M: August outlook

This month’s challenge is to develop a project for a science fair I plan to enter. The project must make use of genetic algorithms or neural networks, and be as ambitious as the rest of my monthly challenges.

James O’Keefe
Chatbots bible and More.
3 min read · Aug 1, 2017


Firstly, this will be my first major software development project done entirely in Colemak. So far I really enjoy typing with this layout (although I still make the occasional mistake) and appreciate how distinctive it is. Hopefully the rest of the skills I learn during my M2M will be just as useful.

This month, my challenge is to prepare a project for my local science fair. For the sake of improving my personal Deep Learning skills, I’ve decided to take on a project dealing with neural networks. My goal is to create software that can recognise 100+ different hand gestures and translate them into English.

History behind the challenge

The first mention of sign language comes from Socrates, in the 5th century B.C. He asks, “If we hadn’t a voice or a tongue, and wanted to express things to one another, wouldn’t we try to make signs by moving our hands, head, and the rest of our body, just as dumb people do at present?” Unfortunately, throughout Europe, deaf people were long considered impossible to educate, and so minimal effort was made towards helping them communicate.

It wasn’t until the 17th century that full-blown sign languages began to emerge. In 1720, the modern English sign language was invented; French and American sign languages followed later.

Where will I get the datasets?

A list of all datasets can be found here. My first task will be to build software to detect and translate ASL (American Sign Language). Once my software can do this for 200+ words at 97% accuracy, the month’s challenge will be considered met.

What’s the general action plan?

Firstly, I’m going to do some more research on different techniques for image recognition. I think I’ll be using a Convolutional Neural Network, but I’ll have to confirm this. I’ve already completed Andrew Ng’s (excellent) Machine Learning course, so I’m itching to put the lessons into practice.
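To give a flavour of what a convolutional network actually does, here is the core operation of a single convolutional layer sketched in plain NumPy. The filter values and the toy “image” are made up purely for illustration; a real network would learn its filters during training.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2-D cross-correlation: slide the kernel over the image
    and take a weighted sum of the overlapping pixels at each position."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A tiny hand-written vertical-edge detector applied to a toy 5x5 "image"
# whose left half is dark (0) and right half is bright (1).
image = np.array([
    [0, 0, 1, 1, 1],
    [0, 0, 1, 1, 1],
    [0, 0, 1, 1, 1],
    [0, 0, 1, 1, 1],
    [0, 0, 1, 1, 1],
], dtype=float)
kernel = np.array([
    [1, 0, -1],
    [1, 0, -1],
    [1, 0, -1],
], dtype=float)

feature_map = conv2d(image, kernel)
print(feature_map.shape)  # (3, 3)
```

The feature map responds strongly (with value −3) wherever the kernel straddles the dark-to-bright boundary and is zero over the flat regions, which is exactly the kind of local pattern detection a CNN stacks and learns automatically.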

Most of the heavy training for the model will wait until I have my P.C. with its GTX 1070 GPU. My current laptop isn’t particularly powerful, so training may be a slow process.

What’s the next step?

Over the week, my first course of action is to lay out the architecture and theory of how my neural network will work. The only other experience I have with image recognition is a personal project with the MNIST dataset, so this should help me advance my skills.

After that, I’ll download a dataset for ASL and start to prepare it for training (pre-processing and the like). Once this is done, if I have any extra time before next Tuesday, I’ll start programming the script, almost certainly in Python.
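The exact pre-processing will depend on which ASL dataset I end up with, but a minimal sketch of the usual steps looks like this. The `preprocess` helper and the class count are hypothetical; the two steps shown (scaling pixels to [0, 1] and one-hot encoding the label) are standard for this kind of image-classification training.

```python
import numpy as np

def preprocess(image, num_classes, label):
    """Typical pre-processing for one training example:
    scale 8-bit pixel values to [0, 1] and one-hot encode the label."""
    x = image.astype(np.float32) / 255.0  # normalise pixel intensities
    y = np.zeros(num_classes, dtype=np.float32)
    y[label] = 1.0                        # one-hot label vector
    return x, y

# Toy example: a 2x2 8-bit "image" labelled as class 2 out of 5 classes
image = np.array([[0, 128], [255, 64]], dtype=np.uint8)
x, y = preprocess(image, num_classes=5, label=2)
print(x.max(), y.tolist())
```

Real datasets would also need resizing every image to a fixed shape and splitting the examples into training and test sets, but the idea is the same: hand the network small, consistently scaled arrays.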

If you have any experience in this area, any tips & tricks you’d like to share, or just anything related to my M2M at all, I’d love for you to leave a comment below.

I look forward to writing my progress report by the end of the week!
