Actually build things. I’ll show you where to begin.
A lot of bullshit guides and articles tell you to do this course or that certificate, and that’s fine for foundational learning, but the true road to understanding at an elite level is building something cool and new. Building things and sharing them with the public forces you to bring structure, rigor, and integrity to your work, and lets you truly grasp the concepts at play.
The purpose of this article is to lay out the areas and resources you can start with to build novel ML projects, tools, and contributions. If you’d instead like to gain a mathematically rigorous, fundamental understanding of the math behind Machine Learning and Deep Learning, check out this post. People really resonated with it, and asked me to write a guide on how to get started building meaningful projects, which is what you’re reading now.
1. Gain Fluency with modern ML frameworks and tooling.
There are a variety of interesting libraries, frameworks, and deployment environments you can use to create your ML projects, and it’s vital that you have enough fluency with whatever set you choose to be able to focus on the building, not the syntax. A quick way to do this is to just jump into a popular framework, be it TensorFlow, PyTorch, or plain old scikit-learn. You’ll pick up more as you learn, but focus on getting up to speed with at least one technology.
Recommendation: A concise and straightforward way to get up to speed with modern ML tooling and architectures is to take both of the Fast.ai courses. You’ll get comfortable with powerful frameworks and build some interesting, fundamental models as part of the curriculum.
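To make the idea concrete, here’s the bare training loop that every one of those frameworks wraps for you, written as a toy plain-Python gradient-descent fit. The data, learning rate, and epoch count are all made up for illustration; a real framework does this with tensors and autograd.

```python
# A toy version of the loop that PyTorch / TensorFlow / scikit-learn
# abstract away: fit y = w * x by gradient descent on mean squared error.
# Everything here (data, lr, epochs) is illustrative, not a real recipe.

def train_linear(xs, ys, lr=0.01, epochs=200):
    """Fit a single weight w by minimizing mean squared error."""
    w = 0.0
    n = len(xs)
    for _ in range(epochs):
        # Gradient of (1/n) * sum((w*x - y)^2) with respect to w.
        grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / n
        w -= lr * grad
    return w

# Data generated from y = 3x, so the loop should recover w close to 3.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [3.0, 6.0, 9.0, 12.0]
w = train_linear(xs, ys)
```

Once this loop feels obvious, the framework versions (an `nn.Module` with an optimizer in PyTorch, `model.fit` in Keras or scikit-learn) read as conveniences rather than magic.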
2. Look at Kaggle Competitions
Kaggle competitions are insanely fun. They let you build a model to compete in an interesting problem space. What’s really awesome is the community of other builders and ML engineers. You’ll see what the ML community is really like, how they approach their work, and what tools, architectures, and heuristics they use. On top of this, you might even win some real cash! (It’s hard, though; there are a lot of brilliant folks out there ;)
The great news is that if you’ve done the Fast.ai courses, you’ve already gotten a taste of Kaggle competitions and have the intuition and the toolset to do well! Have confidence, pick a competition, and start hacking! As of right now, there are interesting competitions ranging from disease diagnosis to NLP.
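If it helps to see the shape of a first pass at a competition, here’s a hedged plain-Python sketch of the two things worth doing before any modeling: hold out a validation set and score a dead-simple baseline, so every later model has a number to beat. The dataset, split fraction, and baseline below are all invented for illustration; in a real competition you’d load the provided `train.csv`.

```python
import random

# Sketch of a first Kaggle workflow: validation split + trivial baseline.
# The synthetic "dataset" is (id, label) pairs with ~70% zeros.

def train_val_split(rows, val_frac=0.2, seed=0):
    """Shuffle a copy of the rows and split off a validation set."""
    rows = rows[:]
    random.Random(seed).shuffle(rows)
    cut = int(len(rows) * (1 - val_frac))
    return rows[:cut], rows[cut:]

def majority_baseline(train_labels):
    # Predict the most common training label for every example.
    return max(set(train_labels), key=train_labels.count)

def accuracy(preds, labels):
    return sum(p == y for p, y in zip(preds, labels)) / len(labels)

# Synthetic binary labels: ids 0..99, 70% labeled 0.
data = [(i, 0 if i % 10 < 7 else 1) for i in range(100)]
train, val = train_val_split(data)
guess = majority_baseline([y for _, y in train])
score = accuracy([guess] * len(val), [y for _, y in val])
```

The point of the baseline isn’t the number itself; it’s that a fancy model scoring below it tells you something is broken in your pipeline, not your architecture.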
3. Read and Implement Popular Papers
The best way to get experience with real ML research outside of joining a university or industry lab is to reimplement the architectures outlined in papers on your own. As you do this, you’ll realize that you’re slowly learning how to modify them, and even reason about them. You might even invent some cool new architecture and get famous ;) But seriously, reading papers through sites like Arxiv or Google Scholar is a must, and implementing them as you go will both drastically improve your credibility as an ML engineer and equip you to learn and reason about the topic in a way nothing else can. Word to the wise: new papers are published in ML every day, since it’s such a burgeoning, fast-growing field. Don’t get overwhelmed, stick to the basics, and read a lot of review papers to figure out what you like ;)
Recommendation: If you’re looking at Arxiv, use Arxiv Sanity. It’s a discovery tool built by a really brilliant guy who worked at OpenAI and now runs AI at Tesla. It’ll drastically improve your ability to search through the plethora of papers on Arxiv.
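As a concrete example of what “reimplement a paper” can look like at the smallest scale, here’s a plain-Python sketch of scaled dot-product attention from “Attention Is All You Need” (Vaswani et al., 2017). Real implementations use batched tensor ops on a GPU; the toy Q/K/V values below are purely illustrative, chosen so each query matches one key.

```python
import math

# Minimal reimplementation of scaled dot-product attention:
#   Attention(Q, K, V) = softmax(Q K^T / sqrt(d)) V
# written over plain lists so every step of the math is visible.

def softmax(row):
    m = max(row)                      # subtract max for numerical stability
    exps = [math.exp(v - m) for v in row]
    s = sum(exps)
    return [e / s for e in exps]

def matmul(a, b):
    # (n x d) @ (d x m) -> (n x m)
    return [[sum(x * y for x, y in zip(ra, cb)) for cb in zip(*b)] for ra in a]

def attention(Q, K, V):
    d = len(Q[0])
    # scores[i][j] = (Q_i . K_j) / sqrt(d)
    scores = [[sum(q * k for q, k in zip(qr, kr)) / math.sqrt(d) for kr in K]
              for qr in Q]
    weights = [softmax(row) for row in scores]   # each row sums to 1
    return matmul(weights, V), weights

# Toy inputs: query i lines up with key i, so output i should lean
# toward value row i.
Q = [[1.0, 0.0], [0.0, 1.0]]
K = [[1.0, 0.0], [0.0, 1.0]]
V = [[10.0, 0.0], [0.0, 10.0]]
out, w = attention(Q, K, V)
```

Writing even one block like this from the equations in a paper teaches you more about the architecture than any number of read-throughs, and porting it to PyTorch afterward is a natural next step.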
4. Optional — Requests for Research
An interesting alternate activity you could pursue is to complete one of OpenAI’s Requests for Research. The team over there has outlined certain topics they’d like people to look into that they haven’t been able to pursue themselves. Some of them are really difficult, and a lot have to do with Reinforcement Learning because of the nature of their recent work, but if you can really crack one, I’m pretty sure they’ll offer you a job. I would!
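Since so many of the requests lean on Reinforcement Learning, here is about the smallest RL sketch one can write: tabular Q-learning on a made-up 5-state chain where moving right eventually earns a reward. The environment and every hyperparameter below are invented for illustration and aren’t from any actual OpenAI request.

```python
import random

# Tabular Q-learning on a toy chain: states 0..4, reward for reaching 4.
N_STATES = 5
ACTIONS = [-1, +1]      # step left or step right

def step(state, action):
    nxt = min(max(state + action, 0), N_STATES - 1)
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    return nxt, reward, nxt == N_STATES - 1   # (next state, reward, done)

def q_learn(episodes=500, alpha=0.5, gamma=0.9, eps=0.1, seed=0):
    rng = random.Random(seed)
    Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
    for _ in range(episodes):
        s, done = 0, False
        while not done:
            if rng.random() < eps:
                a = rng.choice(ACTIONS)                          # explore
            else:
                a = max(ACTIONS, key=lambda b: (Q[(s, b)], b))   # exploit
            nxt, r, done = step(s, a)
            # Standard Q-learning update toward r + gamma * max_a' Q(s', a').
            best_next = max(Q[(nxt, b)] for b in ACTIONS)
            Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
            s = nxt
    return Q

Q = q_learn()
# Greedy policy: should prefer moving right in every non-terminal state.
policy = {s: max(ACTIONS, key=lambda b: (Q[(s, b)], b))
          for s in range(N_STATES - 1)}
```

Most of the RL-flavored requests are this same loop scaled up: a harder environment, a function approximator instead of a table, and much more careful exploration.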
5. Figure out what subfield in ML you love, and push it forward!
By this point, you’ve done a fair amount of building. You’ve gone through Fast.ai, learned a few ML frameworks, searched through Arxiv and reimplemented some things, and maybe even tackled a Request for Research. Awesome, awesome progress.
Once you get this far, it’s vital to keep going, but something you’ll realize is that there are countless subfields and topics in ML, each with so much to build and improve. So pick one! I don’t know what it’ll be, but I can give you a list of the fields I’m interested in :)
- Neuroscience Inspired Architectures
- Deep Reinforcement Learning
- Distributed and Federated ML
- ML for Finance
- ML for Healthcare
- Neural Turing Machines
- Neural Architecture Search
- ……way too many other things.
The point is, you’re in a great place. You should apply for jobs, internships, or research positions, or just keep building on the side, whatever your passion is! Blog about it, share it, and let’s collectively push the field of Machine Learning forward!
If you enjoyed this article, please let me know by clapping, sharing, or commenting! I’m working on some interesting stuff, including brain-inspired neural networks with adaptive topology, and I’ll be updating this publication as I go.