Get started with AI and machine learning in 3 months


Machine learning and AI hype is here, it’s real: everybody is talking about AI, and yet only a few understand what is actually happening, what is true, and what is media hype. We live in a time of media manipulation, where if you are not educated in different fields you easily get manipulated. The AI hype is often misleading, so let me debunk it: we are not yet anywhere near AGI (artificial general intelligence), aka the singularity, even though many are trying to convince us that it’s already here. Deep learning is really good at pattern recognition and perception, but not at all at cognition. Nonetheless, AI is here, and it is already bringing a huge amount of value to society.

So… I am starting this blog partly because a lot of people have asked me how to get started with AI and ML and get into the field, and partly because I myself am fascinated with AI, and with intelligence as such in general. So here I am.

I will give you a 3-month program which you can follow even if you are working full-time, as I have been working as a software engineer at Microsoft Development Center Serbia. And I will also give you some advice on where to go next!

But first, let me start slowly…

Short introduction

Let me start my first ever blog, or better yet mlog, coz ml (you see what I did there?), by introducing myself. Even though I actually finished a bachelor’s degree in electronics, during my studies I was always leaning towards software: taking courses from the computer science module and creating small software projects out of fun, curiosity, and also ambition.

The real change for me started when I went to Freiburg, Germany in the summer of 2017 to work there for 3 months as an Android developer. I had landed the internship a couple of months earlier, and I decided to start improving my developer skills in Android programming. I created a couple of cool small Android apps and even published one on the Google Play Store.

For as long as I can remember I have been into self-education, and I am a big fan of 3-month intervals. I think it’s the perfect time unit to get a solid foundation in anything you wish, be it software/ML, sports, languages, you name it.

Anyways back to the story, during my Germany period, I got interested in algorithms and I wanted to land a job in some of the big tech companies like Microsoft, Google, Facebook, Dropbox, Palantir etc. So I started learning algorithms. I also used the 3-month scheme and eventually landed a job at Microsoft, where I started working (4 months ago) full-time, as a software engineer.

It’s an understatement to say I love it! I got a job in a team called Microsoft Cognition, and it’s all about computer vision, machine learning, digital image processing, and software development. Sadly, I am not allowed to talk about all the super-cool projects we do there, but you’ll have to take my word for it.

During that period, while I was trying to land a job at big tech companies and, in a parallel effort, doing hackathons and datathons, going to meetups, and learning from smart people from the industry, I applied for the machine learning summer camp organized by Microsoft people from Belgrade, Serbia, and I passed the qualification round.

Back then, I didn’t really appreciate how good the summer camp actually was, and that only 25 out of ~300 applicants pass the qualification rounds and get a chance to work with some of the smartest people you can find: lecturers from Google’s DeepMind and Microsoft Research, to name a few.

Note: The application for this year’s iteration of the summer camp is now open. If you are into ML and are able to come to Serbia this summer, I strongly recommend applying: Petnica summer institute of ML (PSIML).

That was my first hands-on experience with machine learning. I already had a decent background in digital image processing and some basic knowledge of computer vision.

I also went to Brazil in the summer of 2018, right after the summer camp, and did an internship there (awesome country!). Then I got a call from Microsoft with a job offer, and so I returned to Serbia.

Since then I’ve been intensively studying machine learning, and I will try to give you my best advice on how to gain a solid background in a 3-month period.

In the meantime, I am part of the organizing crew of the very ML summer camp I attended last summer, and also part of the organization of Microsoft’s internal ML course, where I recently had the honor of assisting on a workshop held by one of my colleagues, Nikola Milosavljević, who received his Ph.D. from Stanford.

Enough about me, there’s a lot of work to do, let’s get started.

A short note before you start — I am still not an expert at Deep Learning. I have only started reading research papers and implementing my own projects. In this article, I am going to write about everything that I found helpful when I started.

Update, April 2020: A lot has happened since I last wrote this blog. It’s been a hell of a learning journey! I’ll probably either write a second part or update this one. In the meantime, I started my YouTube channel on AI (mostly focusing on computer vision, for now), so make sure to check it out:

Machine learning guide through the galaxy

What background do I need, before I start?

  1. a bit of linear algebra (vectors, matrices, matrix multiplication)
  2. a bit of calculus (don’t panic! only differential calculus/the chain rule)
  3. basic programming skills (Python)

What if I don’t have this skill?

Don’t get intimidated! You will mostly need to know what a vector is, what a matrix is, and how to do matrix multiplication. Simple stuff, for a start.
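To give you a feel for how little is needed, here is a quick sketch of those basics in NumPy (Python’s numerical library, which you will meet properly in the courses):

```python
import numpy as np

v = np.array([1.0, 2.0])        # a vector: just an ordered list of numbers
A = np.array([[1.0, 2.0],
              [3.0, 4.0]])      # a 2x2 matrix
B = np.array([[5.0, 6.0],
              [7.0, 8.0]])

# Matrix multiplication: C[i, j] = sum over k of A[i, k] * B[k, j]
C = A @ B
print(C)  # [[19. 22.], [43. 50.]]
```

If you can follow what `A @ B` computes, you already have most of the linear algebra you need to start.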

So if you want to follow my 3-month plan, just ignore the MIT course I linked to (it’s time-consuming), at least for now… because linear algebra is definitely a good toolkit to have!

Calculus. The only thing you actually need to know is the chain rule and the concept of a derivative. Those are really important.

Again going through this should suffice, Essence of calculus (3Blue1Brown).

As far as Python goes, if you have any previous background in programming in which you developed the right programming/algorithmic mindset and understood the basic paradigms of programming (procedural/imperative, OOP, and functional), you won’t need any ramp-up. Just use the good ol’ Stack Overflow on the fly.

Other useful knowledge, though not a prerequisite, includes: algorithms, probability theory, and basic data manipulation in Python.

I will assume that you satisfy all of the above, so you are all set to go!

I will split the program into 2 logical units:

  • the core unit, which will give you a solid ML foundation
  • the extras unit, which will make you really familiar with what is happening in the world of AI and put you on the right track to start developing your own ML projects and/or startups.

Core effort

course 1 — Machine Learning (estimated time: 1 month)

If you don’t have a broad overview of ML, and I assume you don’t as you are reading this blog post, start with this ML course on Coursera, offered by one of the best universities on the planet, Stanford, and taught by one of the godfathers of AI, Andrew Ng:

The course lasts 11 weeks. I did it in less than a month, with multiple other efforts going on alongside it, so I labeled it with an estimated time of 1 month.

Note — If you want to get a LinkedIn certificate you will have to pay $79 for this course; you can also apply for financial aid if you cannot afford it. Every resource is also available without paying for the course, though, so you are good to go.

what will you learn?

You will get a really solid overview of what is out there in the world of ML.

Starting with some basics like what ML is and how it relates to AI and deep learning, you will get some basics in linear algebra, learn what gradient descent is (the basic optimization algorithm, also used for training neural nets), and learn the difference between classification (the output you are trying to predict is a discrete variable: is this a cat or a dog?) and regression (the output is a continuous variable, like predicting the price of a house). You will learn what a cost/objective function is, and basics like that.
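To make those words concrete, here is a minimal sketch of gradient descent (my own toy example, not part of the course) fitting a line y = 2x + 1, with mean squared error as the cost function:

```python
import numpy as np

# Toy regression data: the true relationship is y = 2x + 1
x = np.linspace(0.0, 1.0, 50)
y = 2.0 * x + 1.0

w, b = 0.0, 0.0   # the parameters we want to learn
lr = 0.5          # learning rate: how big a step we take each iteration

for _ in range(2000):
    pred = w * x + b
    # Cost: mean squared error, J = mean((pred - y)^2)
    # Gradients of the cost with respect to w and b:
    dw = 2.0 * np.mean((pred - y) * x)
    db = 2.0 * np.mean(pred - y)
    # Gradient descent step: move downhill on the cost surface
    w -= lr * dw
    b -= lr * db

print(round(w, 3), round(b, 3))  # close to 2.0 and 1.0
```

The same loop, with fancier gradients, is what trains neural networks too.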

A relation between AI, ML and deep learning

Learn about forward propagation and the backprop algorithm, which are the core components of neural net training. Forward propagation is also used after the training phase, when your model is fed an input and tries to predict the output.

What are the problems that can occur during model training? Things like bias and variance. In the image below, the model is trying to learn a decision curve in 2D space, and the goal is to separate the green crosses from the red circles. You don’t want the rightmost scenario, because some unseen examples that arrive in the future will fall on the wrong side. And you also don’t want the leftmost scenario, because it obviously doesn’t capture the data distribution. The middle one is the sweet spot.
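You can reproduce the same intuition numerically: a model with more capacity always fits the training data at least as well, which is exactly why low training error alone can hide overfitting. A small sketch of my own, using NumPy’s polynomial fitting:

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-1.0, 1.0, 30)
y = x**2 + 0.1 * rng.standard_normal(30)  # a noisy quadratic

def train_mse(degree):
    # Least-squares polynomial fit of the given degree
    coeffs = np.polyfit(x, y, degree)
    return np.mean((np.polyval(coeffs, x) - y) ** 2)

# The high-degree model has lower TRAINING error (it can chase the noise),
# but that says nothing about how it behaves on unseen data.
print(train_mse(1), train_mse(9))
```

A degree-9 curve wiggling through every noisy point is the rightmost scenario from the image; the degree-1 line is the leftmost one.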


Learn the difference between supervised and unsupervised learning, and learn different ML algorithms like: k-means clustering (example: clustering news articles into science, sports, etc.), PCA (projecting your data into a lower-dimensional space with the least error), SVMs (a type of linear classifier), linear and logistic regression, and stochastic gradient descent (you basically use a single example for backprop, instead of the whole dataset as in batch gradient descent). You will also get some knowledge of debugging ML algorithms and implement a simple neural network.
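As a taste of the unsupervised side, here is a minimal k-means sketch in NumPy (my own toy version, not the course’s exercise code):

```python
import numpy as np

def kmeans(X, centroids, n_iters=10):
    """Plain k-means: alternate assignment and centroid-update steps."""
    for _ in range(n_iters):
        # Assignment step: each point goes to its nearest centroid
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Update step: each centroid moves to the mean of its assigned points
        centroids = np.array([X[labels == k].mean(axis=0)
                              for k in range(len(centroids))])
    return labels, centroids

# Two well-separated blobs in 2D; initialize with one point from each blob
X = np.array([[0.0, 0.0], [0.1, 0.2], [0.2, 0.1],
              [5.0, 5.0], [5.1, 5.2], [5.2, 5.1]])
labels, centers = kmeans(X, centroids=X[[0, 3]])
print(labels)  # the first three points share one label, the last three the other
```

Swap the blobs for bag-of-words vectors of news articles and you have the article-clustering example from the course.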

You will do some cool proof-of-concept projects like:

Supervised learning:

  1. Applying linear and logistic regression to a labeled dataset
  2. Hand-written digit recognition (a subset of the MNIST dataset)
  3. Spam classifier (Spam Assassin Public Corpus dataset)

Unsupervised learning:

  1. Image compression and dataset visualization using PCA (Labeled Faces in the Wild dataset)
  2. Image compression using k-means clustering (less memory!)
  3. Recommender systems (think of Facebook’s news feed)

course 2 — Deep learning specialization (estimated time: 2 months)

This course will get you into deep learning, a special subset of ML techniques which uses neural networks with multiple layers (deep nets). This subset of machine learning has brought huge value to society, especially the supervised learning approach with deep neural networks as models.

Be it YouTube’s and Facebook’s recommender systems, Facebook’s face-tagging features, Tesla’s and Waymo’s self-driving cars, speech recognition systems, you name it: they all use deep learning, often end-to-end deep learning approaches, although the hand-engineering approach still has its place.

Note — If you want to get a LinkedIn certificate you will have to pay $49/month for this course; you can also apply for financial aid if you cannot afford it. Every resource is also available without paying for the course, though, so you are good to go.

what will you learn?

This course actually consists of 5 sub-courses, let me tell you which knowledge you will have after you finish each one of them:

  1. Neural networks and Deep Learning

You will get some hands-on experience with Python and NumPy, Python’s fundamental package for scientific computing. It’s a great tool for data pre-processing, manipulating matrices etc.

You will create a simple image classification model, using a logistic regression method, which will help you classify images into ‘cat’ or ‘non-cat’ images (everybody needs that with all those cats floating around social networks!)

You will implement your first deep neural net from scratch and get an understanding of how things work “under the hood”. You will see how this model drastically improves the accuracy on the cat classification problem.

Word of advice: spend some time understanding the back-propagation algorithm for batch gradient descent (you will learn what that is). Dedicate 1 whole day only to this, and try deriving the equations for a simple neural network, say with 1 hidden layer. Use paper and pen. Make sure to derive equations which are:

  • in vectorized form
  • valid for the whole dataset, and not only for a single example

Believe me, it will be worth it, once you understand it.
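Once you have the equations on paper, it helps to check them in code. Here is a sketch of my own (following the course’s W, b, Z, A notation) of the vectorized forward and backward passes for a 1-hidden-layer net, trained on the XOR toy problem:

```python
import numpy as np

rng = np.random.default_rng(0)

# XOR: inputs are columns, so X has shape (n_x, m) as in the course
X = np.array([[0, 0, 1, 1],
              [0, 1, 0, 1]], dtype=float)
Y = np.array([[0, 1, 1, 0]], dtype=float)
m = X.shape[1]

n_h = 4  # hidden units
W1 = rng.standard_normal((n_h, 2)) * 0.5; b1 = np.zeros((n_h, 1))
W2 = rng.standard_normal((1, n_h)) * 0.5; b2 = np.zeros((1, 1))

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
bce = lambda A2: -np.mean(Y * np.log(A2) + (1 - Y) * np.log(1 - A2))

lr = 0.5
for step in range(2000):
    # Forward propagation, vectorized over the whole dataset at once
    Z1 = W1 @ X + b1; A1 = np.tanh(Z1)
    Z2 = W2 @ A1 + b2; A2 = sigmoid(Z2)
    if step == 0:
        initial_loss = bce(A2)
    # Backprop: gradients of the cross-entropy cost
    dZ2 = A2 - Y
    dW2 = dZ2 @ A1.T / m; db2 = dZ2.mean(axis=1, keepdims=True)
    dZ1 = (W2.T @ dZ2) * (1 - A1**2)  # tanh'(z) = 1 - tanh(z)^2
    dW1 = dZ1 @ X.T / m; db1 = dZ1.mean(axis=1, keepdims=True)
    # Gradient descent update
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

final_loss = bce(sigmoid(W2 @ np.tanh(W1 @ X + b1) + b2))
print(initial_loss, final_loss)  # the loss should drop substantially
```

If your derivation on paper matches these `dW`/`db` expressions term by term, you have understood backprop.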

Are you already hyped!?

2. Improving Deep Neural Networks: Hyperparameter tuning, Regularization and Optimization

This one is a mouthful. You will learn how to initialize your neural net’s weights, and how to use L2 regularization as well as the dropout technique to avoid overfitting on the train set.
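Inverted dropout, for example, is only a few lines. A sketch of my own illustrating the technique the course teaches:

```python
import numpy as np

rng = np.random.default_rng(42)

def inverted_dropout(A, keep_prob):
    """Randomly zero activations; rescale so the expected value is unchanged."""
    mask = rng.random(A.shape) < keep_prob  # keep each unit with prob keep_prob
    return A * mask / keep_prob, mask

A = np.ones((4, 5))  # pretend these are one layer's activations
A_drop, mask = inverted_dropout(A, keep_prob=0.8)
# Dropped units are exactly zero; kept units are scaled up by 1/keep_prob
print(A_drop)
```

Because a different random subset of units is silenced on every training step, no single unit can be relied upon too heavily, which is what fights overfitting.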

You will learn about different optimization algorithms; you will basically create gradient descent on steroids. Learn what RMSProp and gradient descent with momentum are, and the most powerful one, the Adam optimization algorithm.

You will also learn how to split your dataset into mini-batches and use gradient descent on those mini-batches, which will speed up the learning!
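To give you a flavor, here is the Adam update rule on a single parameter (a minimal sketch of my own, using the usual beta1/beta2/epsilon defaults), minimizing a simple quadratic:

```python
import numpy as np

# Minimize f(w) = (w - 3)^2 with Adam; the gradient is 2 * (w - 3)
w = 0.0
m_t, v_t = 0.0, 0.0  # running estimates of the 1st and 2nd gradient moments
lr, beta1, beta2, eps = 0.05, 0.9, 0.999, 1e-8

for t in range(1, 1001):
    g = 2.0 * (w - 3.0)                      # gradient at the current point
    m_t = beta1 * m_t + (1 - beta1) * g      # momentum-like moving average
    v_t = beta2 * v_t + (1 - beta2) * g**2   # RMSProp-like moving average
    m_hat = m_t / (1 - beta1**t)             # bias correction (moments start at 0)
    v_hat = v_t / (1 - beta2**t)
    w -= lr * m_hat / (np.sqrt(v_hat) + eps)

print(round(w, 2))  # close to the minimum at 3.0
```

Notice how Adam is literally momentum (the `m_t` line) combined with RMSProp (the `v_t` line), which is why the course teaches those two first.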

At the end of the course, you will delve into TensorFlow, Google’s powerful deep learning framework, which you will use to create a simple vanilla neural network model that will help you classify the signs dataset:

examples from the signs dataset

Basically, you will be able to recognize what a person is showing with hands. Giving computers the ability to see, isn’t that awesome?

3. Structuring Machine Learning Projects

Learn how to make the components in your pipeline as independent (orthogonal) as possible, meaning you can tweak one component without changing the behavior of another.

You will learn how important it is to have a single number which tells you how good your model is, and how to split your dataset into train/dev and test sets.

Good splitting is very important if you wish to know how well your model will perform once in production.
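A split like the one the course recommends takes only a couple of lines (a sketch using plain NumPy; the ratios below are just an example):

```python
import numpy as np

def train_dev_test_split(n, dev_frac=0.1, test_frac=0.1, seed=0):
    """Shuffle example indices once, then carve out dev and test sets."""
    idx = np.random.default_rng(seed).permutation(n)
    n_dev, n_test = int(n * dev_frac), int(n * test_frac)
    # dev and test come from the same shuffled pool, so they follow
    # the same distribution; train gets everything that is left
    return idx[n_dev + n_test:], idx[:n_dev], idx[n_dev:n_dev + n_test]

train, dev, test = train_dev_test_split(1000)
print(len(train), len(dev), len(test))  # 800 100 100
```

The key point the course hammers home: tune hyperparameters on dev, and touch test only once, at the very end.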

Also, carrying out error analysis is important: by looking at the examples on which your model performs poorly, you can improve it.

And learn about transfer learning, a really powerful idea which lets you take someone else’s network, trained on some related problem, and use it for your own problem. You will only need to fine-tune it on some significantly smaller dataset and your problem is solved!

Basically, this section gives you some engineering best practices which are really important once you actually start writing your own models.

4. Convolutional Neural Networks

They made a quantum leap in the world of computer vision in 2012, when the so-called AlexNet left all of the other algorithms in the dust, winning the ImageNet competition that year.

You will learn how to implement them starting with pure NumPy and then use TensorFlow and Keras — another popular deep learning framework.

You will implement some state-of-the-art architectures like ResNet, and learn about other architectures like AlexNet, Inception nets, etc.
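The core operation behind all of these is simpler than it sounds. Here is a minimal 2D “valid” convolution in pure NumPy (my own sketch; as in deep learning frameworks, it is technically cross-correlation, since the kernel is not flipped):

```python
import numpy as np

def conv2d(image, kernel):
    """'Valid' 2D convolution: slide the kernel and take dot products."""
    H, W = image.shape
    kh, kw = kernel.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # Element-wise product of the kernel with the patch under it
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A vertical-edge detector on a tiny image: bright left half, dark right half
img = np.array([[1, 1, 0, 0]] * 4, dtype=float)
edge_kernel = np.array([[1.0, -1.0]])  # horizontal-difference kernel
print(conv2d(img, edge_kernel))  # strong response exactly at the edge
```

A convolutional layer is just many such kernels, with their values learned by backprop instead of hand-designed.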

Also, learn what neural style transfer is! Here is an example I created:

my neural style transfer example

Face recognition, real-time object detection (the YOLO algorithm), and other really cool applications!

5. Sequence Models

This family of models is really great for NLP (natural language processing) problems.

Implement unidirectional RNNs (recurrent neural nets) directly in NumPy and demystify the model. Also learn about more advanced models like GRUs (gated recurrent units) and LSTMs (long short-term memory networks).
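A single forward step of a vanilla RNN is just a couple of matrix multiplications. A minimal sketch of my own in NumPy, in the spirit of the course’s notation:

```python
import numpy as np

rng = np.random.default_rng(0)

def rnn_step(x_t, a_prev, Wax, Waa, ba):
    """One time step: a<t> = tanh(Wax x<t> + Waa a<t-1> + ba)."""
    return np.tanh(Wax @ x_t + Waa @ a_prev + ba)

n_x, n_a = 3, 5  # input size and hidden-state size
Wax = rng.standard_normal((n_a, n_x))
Waa = rng.standard_normal((n_a, n_a))
ba = np.zeros((n_a, 1))

# Run a short sequence through the cell, reusing the SAME weights each step;
# the hidden state a carries information from earlier steps forward
a = np.zeros((n_a, 1))
for t in range(4):
    x_t = rng.standard_normal((n_x, 1))
    a = rnn_step(x_t, a, Wax, Waa, ba)

print(a.shape)  # (5, 1); values stay in (-1, 1) thanks to tanh
```

GRUs and LSTMs replace this single `tanh` update with gated ones, precisely so that information can survive across many more time steps.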

Build a character-level model which will help you invent new dinosaur names or write Shakespeare-like sentences (though only at the syntactic level, not the semantic one). You will also generate your own jazz melodies by recognizing a pattern in a given dataset.

Learn about embeddings, machine translation (translating human languages) and my personal favorite trigger word detection (think of “Hello Google”, “Hey Alexa” etc).

I really like the concept of encoder-decoder models for machine translation, where you basically encode a sentence into a single vector (which you can think of as a thought) and then decode that thought, using the other part of the network, into another language!

Also, for quite some time now I have been thinking about implementing a keyword spotting/detection model which I would use to turn my room lights on and off, as I am sometimes lazy… :)

At the end of this “core effort” section, I have one more piece of advice:

go through the courses in sequential order

You will thank me later. Even though they say you can start with any course from the sequence you like, you are actually building up the necessary knowledge going from course 1 through 5.

Extras unit

AGI playlist

Stephen Wolfram about Wolfram Alpha, Wolfram language and the limits of computation

The best minds in the world in one place, a series hosted by a great guy and MIT deep learning researcher, Lex Fridman. This one will help you get a solid grasp on where we currently are with AI and where we are heading. My top 3 AI-focused videos are:

Aside from AI, there are talks about robots, software, the universe, consciousness, and so on, but Lex Fridman really gave his best to keep it focused on AI. From this group, my top 3 are:

“AI people” on Twitter

I would also suggest you follow some of the best people from the industry on Twitter. Twitter is a hub when it comes to AI-related topics; pretty much all of the godfathers of AI are actively writing about modern breakthroughs there.

This is probably the #1 way to stay up to date with the field, aside from reading research papers…

Here are some people I follow:

  • Yann LeCun (until recently head of Facebook’s AI research — FAIR)
  • Andrej Karpathy (director of AI at Tesla)
  • Ian Goodfellow (inventor of GAN’s)
  • Andrew Ng (you know him if you went through my core effort part!)
  • Chris Olah
  • Jeff Dean
  • Jack Clark
  • Pieter Abbeel (deep reinforcement learning researcher)
  • Geoffrey Hinton (you must know this guy!)
  • Francois Chollet
  • Michael Nielsen

Note: I wrote a short note next to the people I know more about (I did some research). Nonetheless, all of them are top-level, world-class researchers!

Good blogs

If there is one blog you should follow, I would suggest: Andrej Karpathy’s.

He has 2 blogs:

I especially liked his story: “Short Story on AI: A Cognitive Discontinuity”, where he extrapolates the potential of supervised learning in a really thrilling story!

Other great material

I have not yet gone through all of the material in this section, but I know it’s world-class content:

To finish things off, a great meme in video format ❤

Elon Musk smoking weed in a neural simulation, iteration 23

Also if you are planning to follow along with the deep learning specialization on Coursera I suggested, consider exploring more about the people appearing in “heroes of deep learning” videos.

I found YouTube works exceptionally well for these kinds of things, just search for them and usually the best content is the first one to appear.

Where to next?

My advice is:

focus on projects, writing code, reading and implementing research papers and iterating!

Machine learning is all about iteration. Quickly implement a model from a research paper for a problem you are passionate about and iterate on it! Or find a GitHub repo and use it as a starting point for your project.

If you don’t have any idea for a new project, try solving problems from Kaggle. Kaggle is an awesome data science/machine learning platform, where you can learn a lot and also earn some money by doing it.

Once you’ve completed a couple of your own ML projects, you can try to monetize them, using a freelance platform like Upwork or some other way.

If you really have an idea you are passionate about then start a startup!

Or you can apply to some of the best AI companies in the world, like Google’s DeepMind, Microsoft Research, OpenAI, FAIR, etc.

With the knowledge you gained during these 3 months, and after developing a couple of your own projects, you should be good to go!

what is my plan?

I will continue to grow in the Microsoft environment, finish my master’s studies with a master’s project in computer vision (I’ll be looking for some cool project idea), and do my own ML projects on the side. I will also try to share my experiences here on Medium.

I have already started doing a project that involves deep learning for videos, and I am planning to develop a real-time keyword spotter on some embedded system like a Raspberry Pi, as a way to automate my house, for example.

It’s not easy at all to stick with a plan and go all the way, but I promise if you do it you will be grateful.

Here are some research papers I recommend going through if you have time:

  • LeNet-5: LeCun et al., 1998. Gradient-based learning applied to document recognition (sections 2/3)
  • AlexNet: Krizhevsky et al., 2012. ImageNet classification with deep convolutional neural networks
  • VGG-16: Simonyan & Zisserman, 2015. Very deep convolutional networks for large-scale image recognition
  • ResNet: He et al., 2015. Deep residual learning for image recognition
  • Inception (GoogLeNet): Szegedy et al., 2014. Going deeper with convolutions
  • YOLO: Redmon et al., 2015. You only look once: unified, real-time object detection
  • Neural style transfer: Gatys et al., 2015. A neural algorithm of artistic style
  • Attention: Bahdanau et al., 2014. Neural machine translation by jointly learning to align and translate

Also, try the site Karpathy made to make the process of reading research papers easier: Arxiv Sanity Preserver.

If there is something you would like to hear — write it in the comment section or send me a message, I would be glad to write about ML, software, how to land a job in a big tech company, how to prepare for ML summer camp, electronics etc., anything that could help you.

Also feel free to drop me a message or:

  1. Connect and reach me on LinkedIn
  2. Subscribe to my YouTube channel for ML related content ❤
  3. Follow me on Medium or GitHub

And if you find the content I create useful consider becoming a Patreon!

Much love ❤

Written by

Software/ML engineer @ Microsoft. Founder @ The AI Epiphany.
