2017 was the Year of Learning Machine Learning

One of my resolutions for 2017 was to prioritize learning opportunities. Even if I was not faithful to this goal at all times, I end the year in a position to say that I learned much more than in previous years. And even better: it was fun!

Of the topics I took the time to learn and improve on, the one with the most impact for me was probably Machine Learning.

[Image: An Artificial Neuron]

I was not a complete stranger to the subject: I took classes at university and liked them. But I have to say that the scale and real-world impact that I saw during my stay in San Francisco were well beyond what I would have thought at first. It is kind of magical to see something this revolutionary grow out of academia and be used to change lives.

So, I decided to learn it. I am still in the process, but nowadays I feel very comfortable using models and applying them to my daily challenges. It took some effort, as learning is never easy, but I think some of the choices I made were very successful in letting me get practical experience while still working my day job (I help people build the Internet of Things).

Coursera

I had attended MOOCs before, and I knew the potential of learning something at my own pace. This time, I tried something different that could give me some extra motivation. I opted for a Verified Specialization on Coursera: specifically the Machine Learning Specialization by the University of Washington, taught by Emily Fox and Carlos Guestrin. I knew the content was freely available, but paying for it gave me the extra motivation I needed to keep the pace.

One of the reasons I opted for this specialization instead of one of the many alternatives was that I attended the SF Data Science Summit of 2016, sponsored by Turi (Guestrin’s company, later acquired by Apple), and I liked his very practical approach to Machine Learning.

I was not disappointed. The Specialization followed the same path: first a case study applying a given technique to a real-world problem, then theory and practice. It is the way I believe things should be taught, especially when one has limited time.

I highly recommend this specialization for newcomers. You will only be disappointed if you expect lots of Deep Learning applications. That is not the focus. More on that later.

Kaggle

Kaggle was probably the most important learning tool during the past year. If you know at least a bit of Machine Learning and you have not competed on Kaggle yet, you should enter a competition right now. Really.

Even though Kaggle’s primary objective is to be a competitive platform for industry professionals, the community tends more towards coopetition (partly cooperation, partly competition). Working on the same problem, at the same time, with (and against!) some of the best minds in the field is a constant source of learning. The discussions can be very rich, and one can learn a lot from the code that competitors publish in the form of Kernels (Kernels are just notebooks, but hosted by Kaggle). It is a very generous community of professionals.

It is also a good way of getting experience with real-world problems and datasets. Even if your code will never see production, it is an opportunity for experimentation with instant feedback.

In-Person Deep Learning Classes

I had the amazing opportunity of attending an in-person course on Deep Learning with great teachers.

I had worked with Neural Networks before and was halfway through the course taught by Geoffrey Hinton on Coursera. This gave me good fundamentals, but I lacked practical experience with the hottest topics, like CNNs, RNNs, LSTMs, autoencoders and all the other things the cool kids are using. This is also a topic that is not properly covered by the Machine Learning Specialization by the University of Washington on Coursera.

Being part of the in-company course taught by experienced teachers from NeuralMind was fundamental to getting more comfortable with the tools and understanding both the potential and the limitations of the technology. And by being in person, I had plenty of opportunities to ask questions, discuss with other learners and get practical advice.

Sharing

Maybe you are familiar with the so-called Feynman technique for learning: pick a topic you wish to learn, try to describe it in simple terms as if you were teaching someone else and, by noticing the gaps, improve your understanding.

What I tried to do was take it a step further and actually teach someone else.

On a couple of occasions I partnered with colleagues and gathered people interested in Machine Learning, in order to give them some basic tools so they could continue learning Machine Learning by themselves.

The path we chose was to introduce people new to the field to Kaggle competitions. This combined the three approaches that were successful in my own learning: a case-based approach (the general approach of the Specialization by the University of Washington), being part of a community in which learning is highly valued (which is the case for Kaggle) and being part of an in-person class. All compressed into a single day.

What we do is choose a Kaggle competition and work with the group step by step: from fundamentals (what is Machine Learning? How do you use scikit-learn?) to more intermediate topics (overfitting, cross-validation), with submissions along the way to get them used to the Kaggle environment and gather immediate feedback, as sketched below.
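To make that concrete, here is a minimal sketch of the kind of first model we build together. The file names and the id/target columns are hypothetical; real competitions ship their own train.csv and test.csv with their own columns.

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Hypothetical Kaggle-style files; actual competitions define their own layout.
train = pd.read_csv("train.csv")
test = pd.read_csv("test.csv")

features = [c for c in train.columns if c not in ("id", "target")]
X, y = train[features], train["target"]

# A reasonable first model: little tuning needed, hard to break.
model = RandomForestClassifier(n_estimators=100, random_state=42)

# Cross-validation gives an honest estimate before spending a submission.
scores = cross_val_score(model, X, y, cv=5)
print(f"CV accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")

# Fit on all the training data and write a submission file for Kaggle.
model.fit(X, y)
submission = pd.DataFrame({"id": test["id"], "target": model.predict(test[features])})
submission.to_csv("submission.csv", index=False)
```

Seeing the cross-validation score before the leaderboard score is exactly how we introduce overfitting: the gap between the two numbers makes the concept tangible.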

I think we were very successful with this approach. On every occasion, the newcomers were very happy to be able to create their first models and actually see the improvements from tuning hyperparameters, using better algorithms and so on. Some actually continued to learn by themselves, which was the initial goal.
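As a sketch of that tuning step, a simple grid search over a couple of hyperparameters might look like this (the grid, file name and column names are again just illustrative):

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

# Same hypothetical train.csv as the earlier sketch.
train = pd.read_csv("train.csv")
features = [c for c in train.columns if c not in ("id", "target")]
X, y = train[features], train["target"]

# Illustrative grid; good ranges depend on the data and the model.
param_grid = {
    "n_estimators": [100, 300],
    "max_depth": [None, 10, 20],
}

# Grid search cross-validates every combination and keeps the best one.
search = GridSearchCV(RandomForestClassifier(random_state=42), param_grid, cv=5)
search.fit(X, y)

print("best params:", search.best_params_)
print(f"best CV accuracy: {search.best_score_:.3f}")
```

Watching the best score move as the grid changes is the moment the idea of hyperparameters usually clicks for newcomers.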

And this also works to my own benefit. Every time I explain a concept to someone new to the field, any gaps in my own understanding become very clear, and I know I need to take the time to work on them.

What comes next?

I still do not know what 2018 will throw at me. But I do know that I will Machine-Learn my way through it. It has been a joyful ride so far, and I intend to keep it that way.
