Interviews with Machine Learning Heroes

Meta Article with links to all the interviews with my Machine Learning Heroes: Practitioners, Researchers and Kagglers.

Sanyam Bhutani
Data Science Network (DSNet)
12 min read · Jul 17, 2019


During the past few months, I've spent time interviewing great practitioners, researchers, and Kagglers.

This post serves as an index, a meta-article with links to all of the interviews I've conducted. I'll keep updating it as new interviews happen.

Photo by Michael Browning on Unsplash

About the Series:

I have recently started making some progress on my self-taught machine learning journey. But to be honest, it wouldn't have been possible at all without the amazing online community and the great people who have helped me.

In this series of blog posts, I talk with people who have really inspired me and whom I look up to as role models.

The motivation behind this series is that you might notice some patterns, and hopefully you'll be able to learn from the amazing people I have had the chance of learning from.

Interviews in Chronological order (with selected best advice):

Interview with DL Practitioner: Dominic Monn

Don’t be afraid to get your name out and your hands dirty as early as possible — both in business and Machine Learning. Ship fast and get internships, be determined and the rest will come by itself.

Interview with Deep Learning Freelancer Tuatini Godard

Don't give up. Data science is a multidisciplinary field and it's hard to be good on every front. Choose a specialization at first and put all your energy into it. For instance, I chose deep learning for computer vision instead of, say, time series or NLP. Don't get fooled by the AI hype like I was when I started two years ago: there is no AI (or AGI, call it what you want). If you choose the computer vision path, you become a computer vision expert; for NLP, you become an NLP expert. Deep learning only helps to solve a part of the big picture; it's not the big picture itself, only a tool to add to your toolbelt.

As a freelancer or an "expert" (call it what you want), you will be asked to do more than just play with deep learning models most of the time. Learn about data preparation, software engineering skills, some DevOps and so on: basically the whole stack needed to create and ship a deep learning project to production, not just the fun part. But most importantly, learn to learn and to adapt. Keep an eye on the latest trends and always stay up to date on what is going on, as things move really fast in the field of data science.

Interview with Kaggle Kernels Expert: Aakash Nain

At some point in time, you would also feel low seeing the amount of research going on. It is okay if you feel that way. There are only two things that we should always remember:

1) Your success isn’t dependent on others and vice-versa.

2) Never be shy/afraid to ask even the most basic things. If you don’t ask, you are the one who is hindering your progress.

Interview with Twice Kaggle Grandmaster: Dr. Jean-Francois Puget (CPMP)

Read the write-ups of top teams after each competition ends. It is better if you entered the competition, as you will get more from the write-up, but reading all write-ups is useful IMHO. That's when you learn the most on Kaggle.

Interview with Kaggle Competitions Grandmaster: KazAnova (Rank #3): Dr. Marios Michailidis

Understand the problem and the metric we are tested on; this is key.

Create a reliable cross-validation process that best resembles the leaderboard, or the test set in general, as this will allow me to explore many different algorithms and approaches while knowing the impact they could yield.

Understand the importance of different algorithmic families, to see when and where to maximize the intensity (is it a linear or a non-linear type of problem?).

Try many different approaches/techniques on the given problem and attack it from all possible angles: algorithm selection, hyperparameter optimization, feature engineering, and missing-value treatment. I treat all these elements as hyperparameters of the final solution.
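The cross-validation workflow described above can be sketched as follows. This is a minimal illustration only: the synthetic data, the two models, and the AUC metric are assumptions for the example, not specifics from the interview. The key idea is that every approach is scored against the same fixed folds, so scores are comparable.

```python
# Minimal sketch: compare several approaches against one fixed CV scheme,
# so their scores are directly comparable (as a stand-in for the leaderboard).
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score

# Synthetic data standing in for a competition's training set.
X, y = make_classification(n_samples=500, n_features=20, random_state=42)

# One fixed, stratified split scheme shared by every candidate approach.
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=42)

for name, model in [
    ("logistic regression", LogisticRegression(max_iter=1000)),
    ("random forest", RandomForestClassifier(n_estimators=100, random_state=42)),
]:
    # Score with the competition metric (AUC here, as an example).
    scores = cross_val_score(model, X, y, cv=cv, scoring="roc_auc")
    print(f"{name}: mean AUC = {scores.mean():.3f} +/- {scores.std():.3f}")
```

Because the folds are fixed, any change (a new model, a new feature, a tuned hyperparameter) can be evaluated the same way, which is what makes the CV score a trustworthy proxy for the test set.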

Interview with Deep Learning and NLP Researcher: Sebastian Ruder

I've had the best experience writing a blog when I started out writing it for myself to understand a particular topic better. If you ever find yourself having to put in a lot of work to build intuition, or do a lot of research to grasp a subject, consider writing a post about it so you can accelerate everyone else's learning in the future. In research papers, there's usually not enough space to properly contextualize a work, highlight motivations and intuitions, and so on. Blog posts are a great way to make technical content more accessible and approachable.

The great thing about a blog is that it doesn’t need to be perfect. You can use it to improve your communication skills as well as obtain feedback on your ideas and things you might have missed.

Interview with Deep Learning Researcher and The GANfather: Dr. Ian Goodfellow

Start by learning the basics really well: programming, debugging, linear algebra, probability. Most advanced research projects require you to be excellent at the basics much more than they require you to know something extremely advanced. For example, today I am working on debugging a memory leak that is preventing me from running one of my experiments, and I am working on speeding up the unit tests for a software library so that we can try out more research ideas faster. When I was an undergrad and early PhD student I used to ask Andrew Ng for advice a lot and he always told me to work on thorough mastery of these basics. I thought that was really boring and had been hoping he’d tell me to learn about hyperreal numbers or something like that, but now several years in I think that advice was definitely correct.

Interview with OpenAI Fellow: Christine McLeavey Payne

In terms of specifics — one of my favorite pieces of advice was from FastAI’s Jeremy Howard. He told me to focus on making one really great project that showed off my skills. He says that so many people have many interesting but half-finished projects on github, and that it’s much better to have a single one that is really polished.

The other thing I’d add is that I’ve learned the most about deep learning when I’ve taught it to others. I became a mentor for Andrew Ng’s Coursera courses, and trying to answer other students’ questions pushed me to understand the material at a much deeper level. Whenever you learn a new topic, try explaining it to someone else (this’ll also be great preparation for job interviews!).

Interview with The Youngest Kaggle Grandmaster: Mikel Bober-Irizar (anokas)

If you're just starting out, it's useful to take a look at the playground competitions or previous competitions from the last few years. Kaggle Kernels are probably the best resource for learning, as people share tons of analysis and solutions to all the competitions. The most important thing is just to experiment for yourself and try to improve the score!

Interview with Deep Learning Researcher and Leader of OpenMined: Andrew Trask

The secret to getting into the deep learning community is high-quality blogging. Read 5 different blog posts about the same subject and then try to synthesize your own view. Don't just write something ok, either: take 3 or 4 full days on a post and try to make it as short and simple (yet complete) as possible. Re-write it multiple times. As you write and re-write and think about the topic you're trying to teach, you will come to understand it. Furthermore, other people will come to understand it as well (and they'll understand that you understand it, which helps you get a job).

Most folks want someone to hold their hand through the process, but the best way to learn is to write a blog post that holds someone else's hand, step by step (with toy code examples!). Also, when you do code examples, don't write some big object-oriented mess. The fewer lines, the better. Make it a script. Only use numpy. Teach what each line does. Rinse and repeat. When you feel comfortable enough, you'll then be able to do this with recently published papers, and that's when you really know you're making progress!
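As one possible illustration of the style described above (a short numpy-only script with every line taught), here is a hedged sketch, not from the interview itself: one-variable linear regression fit by gradient descent, where the data, learning rate, and step count are all choices made for the example.

```python
# Toy numpy-only script: fit y = w*x + b by gradient descent on mean squared error.
import numpy as np

x = np.array([0.0, 1.0, 2.0, 3.0])        # inputs
y = 2.0 * x + 1.0                         # targets from the true line y = 2x + 1
w, b = 0.0, 0.0                           # start the weight and bias at zero
for _ in range(2000):                     # repeat many small update steps
    pred = w * x + b                      # current predictions for every input
    grad_w = 2 * ((pred - y) * x).mean()  # gradient of the MSE loss w.r.t. w
    grad_b = 2 * (pred - y).mean()        # gradient of the MSE loss w.r.t. b
    w -= 0.1 * grad_w                     # step w against its gradient
    b -= 0.1 * grad_b                     # step b against its gradient
print(w, b)                               # w and b approach 2.0 and 1.0
```

It is a script, it only uses numpy, and each line carries a comment explaining what it does, which is exactly the kind of hand-holding a reader of a teaching post needs.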

Interview with The Creator of Keras, AI Researcher: François Chollet

I don’t think you should tie your dreams to external markers of status, like working for a specific big-name company, or making a specific amount of money, or attaining a specific job title. Figure out what you value in life, and then stay true to your values. You’ll never have to regret a single decision.

Interview with the Author of PyImageSearch and Computer Vision Practitioner: Dr. Adrian Rosebrock

One of the worst ways to start writing anything, whether technical or not, is to open a new document and expect that you’ll have words pouring out of you, magically filling up the page — it rarely works like that and it’s normally a recipe for frustration and failure.

Instead, I recommend outlining first. I personally outline in bullet points.

Start with your headers first — what are you going to cover? And in what order?

From there, go back and start filling in the sub-headers. What do you need to cover in each section?

Finally, I go back and include the details for each section. Sometimes I even write in complete sentences, but that’s a personal choice.

I always leave notes to myself such as “Insert an image here” or “Lines X-Y of some_script.py go here”. I try to keep myself focused on the actual writing process as much as I can. The actual images and code can be inserted later during typesetting.

Interview with Kaggle GrandMaster, Data Scientist: Dr. Bojan Tunguz

Don’t be afraid to fail and try to learn from your mistakes. Read the discussions in the forums, take a look at the best kernels, and try to improve upon them.

Interview with Deep Learning Researcher at fast.ai: Sylvain Gugger

It's not that the theory is useless; it's going to be a critical piece in helping a beginner forge their intuition. But starting with training an actual model and deploying it, before explaining what is behind it, makes a lot of sense.

Interview with the Chief Scientist at Salesforce: Dr. Richard Socher

Differentiate between unfounded fears and actual threats. AI bias is an imminent fear that needs to be addressed, whereas doomsday scenarios are a dangerous distraction. People need to start paying close attention to making sure that the data sets AI is trained on are diverse and free of bias, and that the teams building these systems also represent a diverse set of perspectives.

Interview with Data Scientist at Kaggle: Dr. Rachael Tatman

My universal advice is to not get a Ph.D. I even wrote a blog post about it a while ago. The blog's about linguistics specifically, but most of it applies to machine learning as well. I think that having a Ph.D. can be an advantage when you're looking for data science jobs, but unless you really want to 1) do research or 2) be a professor, there's really no benefit to getting a Ph.D. that you can't get more quickly doing something else.

I think that Kaggle, or other practical experience, will get you to the point where you can apply for jobs much more quickly. I probably wouldn’t recommend only doing Kaggle competitions, though. You’ll learn a lot about algorithms that way, but you won’t get as much practice with things like cleaning data or designing metrics. That’s part of the reason I suggest that people work on their own projects as well. That shows off your ability to come up with interesting questions, source and annotate data, clean your data and think about what users want.

Interview with Deep Learning freelance consultant and Blockchain dev: Mamy André-Ratsimbazafy

I think most recruiters and companies are not mature enough to evaluate candidates. Many are still building up their teams and don't have the in-house expertise to look for external signs of competence when recruiting. I'm much more concerned about the experience requirements. The rise of deep learning came in 2012 with AlexNet. Outside the US, I would guess that many data-science-focused master's programs were created around 2014, so most new hires with an actual data science degree would have at most about 3 years of experience. More experienced people would probably be self-taught.

Interview with Twice Kaggle GrandMaster and Data Scientist at H2O.ai: Sudalai Rajkumar

Read multiple good kernels and try to understand them in detail. Learn how they create insights from the data, the plots they use to portray the data, and the inferences they come up with. It is also a good idea to take up a new concept (like a new algorithm or a novel technique) and educate people about it. I personally do not like kernels that just blend the output of two or three other kernels to get a high score.

Interview with Radiologist, fast.ai fellow and Kaggle expert: Dr. Alexandre Cadrin-Chenevert

Participating in Kaggle competitions was of paramount importance in my learning curve of deep learning. Maturation of almost any learning process is based on the transformation of knowledge to skills, or concept to actions. Each competition is an opportunity to evolve in this direction. But, naturally, the tradeoff is the time you invest in these competitions. So my own perspective was to participate in a limited number of computer vision competitions selected to catch efficiently most of the potential benefit.

Interview with the Creator of DeOldify, fast.ai student: Jason Antic

Yes, you definitely have to know when to quit, and that’s quite the art. I say “No” to, and/or quit, a lot of things actually. Why? Because for everything you say “Yes” to, you’re saying “No” to many other things. Time (and sanity) is precious. As a result, especially lately, I’ve said “No” to quite a few opportunities that others find crazy to say “No” to.

So quitting for me is decided first on whether or not the path falls squarely in my realm of values, interests, and maintaining sanity. If you’re talking about an ambitious technological project, then you have to go one step further and evaluate whether or not it’s actually feasible. Often you simply can’t answer that right away, especially if you’re doing an ambitious project! So that’s why you experiment. If you discover a sound reason (and not just a rage-quit rationalization) as to why something won’t work, then quit, learn from it, and move on! But be careful on this point — a lot of problems are solved handily simply by shifting perspective a bit. For this reason, I’ll stick with a problem a bit longer than seems reasonable, because often my perspective is simply wrong. Often the solution comes when I walk away (shower thoughts, basically).

Interview with Kaggle Grandmaster, Head of Computer Vision at X5 Retail Group: Artur Kuzin

Kaggle allows you to very quickly develop a specific skill set. With the right approach, these skills can be converted into the necessary quality for work (https://habr.com/ru/company/ods/blog/439812/). Also, competitions allow you to try a lot of different tasks and greatly expand your knowledge. Finally, it is insanely fun if you are able to find such a friendly community like ODS.ai.

Interview with Senior Research Scientist at the United States Naval Research Laboratory: Dr. Leslie Smith

Well, if I am reading a paper and it leads to an idea, a big factor is if the authors made their code available. If so, I’ll download it and run it to replicate their experiments. Then I can quickly try out my own idea to see if it makes sense. Quick prototyping is important to me here. Also, their code provides a baseline.

Interview with Kaggle Grandmaster, Senior Computer Vision Engineer at Lyft: Dr. Vladimir I. Iglovikov

I really hope that more and more Kagglers will invest in writing. It can be blog posts, papers, or something else. More information flowing between people is better. Writing a blog post or tech report will help you structure your thoughts on the problem, it will help other people get a warm start on similar challenges, and, most likely, it will help you with career opportunities. That means you will work on more exciting problems and be better paid at the same time.

Interview with Kaggle Grandmaster, Data Scientist at Point API (NLP startup): Pavel Pleskov

So, my advice to everyone: don’t waste precious time and begin practicing as soon as possible — you will close the gaps in theory later on.

If you found these interviews interesting and would like to be a part of My Learning Path, you can find me on Twitter here.

If you're interested in reading about Deep Learning and Computer Vision news, you can check out my newsletter here.



Machine Learning Engineer and AI Content Creator at H2O.ai, Fast.ai Fellow, Kaggle x3 Expert (Ranked in Top 1%), Twitter: https://twitter.com/bhutanisanyam1