Interview with Senior Research Scientist at the United States Naval Research Laboratory: Dr. Leslie Smith

Part 23 of the series where I interview my heroes.

Sanyam Bhutani
Mar 12, 2019 · 8 min read

Index and about the series: “Interviews with ML Heroes”

Today, I’m super excited to be talking to Dr. Leslie Smith.

Dr. Leslie Smith

I’m sure Leslie needs no introduction to our friends from the fast.ai community. For our readers from outside of fast.ai:

Leslie is currently working as a Senior Research Scientist at the Naval Center for Applied Research in AI, United States Naval Research Laboratory.

His past research includes deep neural networks and reinforcement learning applied to robotics research. Prior to that, he worked in the Maritime Surveillance Section.

He has a background in Chemistry and did his Ph.D. in the quantum chemistry domain.

His research objectives are to perform innovative scientific research and algorithm development in the areas of computer vision, machine learning, robotics, and sparse representations.

About the Series:

I have very recently started making some progress with my Self-Taught Machine Learning Journey. But to be honest, it wouldn’t be possible at all without the amazing community online and the great people that have helped me.

In this Series of Blog Posts, I talk with people that have really inspired me and whom I look up to as my role-models.

The motivation behind doing this is that you might see some patterns and hopefully, you’d be able to learn from the amazing people that I have had the chance to learn from.

Hello Leslie, thank you for taking the time to do this.

I am honored that you thought to invite me for this interview.

You’re currently performing scientific research and algorithm development in the areas of computer vision, machine learning, robotics, and sparse representations.

You’ve been working at the Naval Research Laboratory for more than 15 years now.

Could you tell the readers about how you got started with Machine Learning? What got you interested in Machine/Deep Learning at first?

For the first several years that I worked at the Naval Research Laboratory, I was working in computer vision and related areas. I took notice when neural networks won the 2012 ImageNet Challenge by a wide margin, and of Google’s “Cats” paper, “Building High-level Features Using Large Scale Unsupervised Learning”. In 2013 I started trying a few things and found I was fascinated by the field. Over the next year, I shifted my focus to deep learning, and I think most all the other researchers in computer vision did the same in the following couple of years.

You had a background as a business founder before switching to research.

What made you pick research as a career path?

Actually, I picked research as a career when I was 10 years old. I was interested in science, and a family trip to the New York World’s Fair in 1964 was fascinating to me. I chose to focus on chemistry, but in university I found physics more interesting, so my Ph.D. is in Chemical Physics. Unfortunately, my postdoc at Princeton University didn’t meet my expectations, so I left to work in industry. After 8 years in industry, I thought I could create my own business. It took me a decade to realize running a business was not right for me. I joke that I couldn’t sell $20 bills for $5. After some soul searching, I realized that doing research is what I find most satisfying. That set me on the road to where I am. Now I really love what I do.

Can you tell us more about what you’re currently working on?

Are you secretly building our Robot AI Overlords? :D

Of course. Isn’t everyone! :-)

I am working on lots of ideas, which is part of the fun. Fortunately, I am creative and have many more ideas than I could possibly have time to work on. I joke that it is hard to get everything done in a 40 hour day! But I try. Some of the ideas that I am working on include training networks without weight decay (one less hyper-parameter), online batch selection via importance labeling of data, combining novelty detection with few-shot learning, dynamic data augmentation, automatic data labeling and label cleaning, and defending against adversarial examples. I can’t work on all of these in a single day, so each day I pick one topic to focus on.

You’ve been working as a researcher for quite a few years now. What has been your favorite project during these years?

It is always the next one. I get super excited by my ideas. It gives me the motivation to work hard at it.

For the readers who are curious about what a day in the life of a researcher looks like, can you give us an insight?

How much time do you spend on experimenting vs. exploring new ideas?

I don’t know about other researchers, but a majority of my time is spent on reading, experimenting, writing, email, and talking with people. Reading and experimenting are the catalysts for most of my ideas.

Could you tell us a bit about how you decide to start a new experiment? What kinds of problems or questions pique your interest?

Well, if I am reading a paper and it leads to an idea, a big factor is whether the authors made their code available. If so, I’ll download it and run it to replicate their experiments. Then I can quickly try out my own idea to see if it makes sense. Quick prototyping is important to me here. Also, their code provides a baseline.

Another important factor is my confidence in how likely the idea is to work. If I am confident, I am more motivated to try it than if not.

Once you’ve finally found an idea that you want to explore. What are your go-to techniques? How do you approach a question or idea when getting started?

Start simple. As I said, code on GitHub is a good start and provides a baseline. For example, when I started with few-shot learning, the code for prototypical networks and MAML was available as a start. Then I try tweaking everything, just to be sure my intuition about how I think it should work matches reality.

Part of research is knowing when to put an end to an experiment and when to continue experimenting.

For our readers who might get disheartened if their idea isn’t working at first, or who might have been obsessing over an idea for a long while even when things aren’t working, can you tell us how you decide between the two?

When an idea is not working out, I must understand why. Is it a mistake I made in the code? Or was I wrong in my thinking? If I was wrong, I learn from this and update my intuition. In this way, failed experiments are learning experiences, which are valuable.

For the readers and the beginners who are dreaming of doing research in this domain, what would be your best advice?

Learn from the best and learn from everything. I admire Yoshua Bengio and read most of his papers, and there are lots of them. Also, don’t be afraid to fail or to ask stupid questions; everything is a learning experience. On the other hand, don’t make the same mistake twice. I write everything down in my lab notebook and review it regularly. I will often take time to quietly think about items to see where my thoughts lead me.

Many people are of the opinion that making significant contributions to the field requires one to be a post-grad or have research experience.

For the readers who want to take up Machine Learning as a Career path, do you feel having research experience is a necessity?

I believe deep learning is transitioning from mostly research to mostly engineering. There is a world of new potential applications that are yet to be created. There are new factors that will become important, such as verification, reproducibility, explainability, integrity, etc. Real-world data is substantially different from research benchmarks like MNIST and ImageNet. Hence, research experience is not necessary to do many of these things.

Another opinion that is a “mental barrier” to many is that doing “Machine Learning” or Machine Learning Research requires you to have a cluster of GPU servers and expensive hardware in order to make significant contributions.

What are your thoughts on this opinion?

As I was just saying, data is more important than hardware. Google researchers have access to thousands of GPUs and TPUs but I don’t. It doesn’t prevent me from doing what I can. I’d say my message is “find your own niche”. Do something no one else thought to do.

Given the explosive growth rate of research, how do you stay up to date with the cutting edge?

I spend many, many hours every week keeping up to date, but that is just me. On Mondays and Wednesdays, I look through all the new papers on arXiv.org to find ones that might be relevant. This has become a ritual for me. For the ones that look relevant, I read the abstract and skim the paper. If it catches my interest, I print it. I spend most of my weekend reading these papers. This takes hours, but I stay up to date, and reading is a great source of intuition and new ideas.

Do you feel Machine Learning has been overhyped?

Of course it is. I like to paraphrase the quote “All Machine Learning is wrong but sometimes it is useful”. It has proven useful in computer vision, machine translation, and speech recognition, to name a few. I recommend finding new ways to make it useful.

I think the entire fast.ai community is grateful to you for your research on the 1cycle learning policy.

What are your thoughts about the fast.ai course and the community?

I hold Jeremy Howard in high regard. He saw the need for practical deep learning and generously provided a wonderful solution. The community that he created is vibrant and strong.
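For readers unfamiliar with the 1cycle policy mentioned above: the learning rate is ramped up from a low value to a maximum and then annealed back down over a single cycle spanning the whole training run. Here is a minimal, piecewise-linear sketch of that idea; the function name and parameters are illustrative only, and real implementations (such as fastai's) also cycle momentum and typically use cosine annealing:

```python
def one_cycle_lr(step, total_steps, max_lr, div_factor=25.0, pct_warmup=0.3):
    """Piecewise-linear sketch of a 1cycle learning-rate schedule.

    Ramps linearly from max_lr / div_factor up to max_lr over the first
    pct_warmup fraction of training, then anneals linearly back down.
    """
    base_lr = max_lr / div_factor
    warmup_steps = int(total_steps * pct_warmup)
    if step < warmup_steps:
        # Linear warmup phase: low -> max_lr.
        return base_lr + (max_lr - base_lr) * step / warmup_steps
    # Linear annealing phase: max_lr -> low over the remaining steps.
    frac = (step - warmup_steps) / (total_steps - warmup_steps)
    return max_lr - (max_lr - base_lr) * frac

# Example: the schedule over a 100-step run peaks at step 30.
schedule = [one_cycle_lr(s, total_steps=100, max_lr=0.1) for s in range(100)]
```

PyTorch ships a full version of this schedule as `torch.optim.lr_scheduler.OneCycleLR`, which is the practical choice over a hand-rolled sketch like this.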

Before we conclude, any tips for the beginners who are afraid to get started because of the idea that Deep Learning is an advanced field?

Beginnings are always the hardest part. As a scientist, I view life as a series of experiments. Make a deal with yourself to try it for a time that is long enough to know if it is right for you. If it is not, stop and go on to something else. Knowing it is an experiment and not a commitment makes it a lot easier to try things.

Thank you so much for doing this interview.

If you found this interesting and would like to be a part of My Learning Path, you can find me on Twitter here.

If you’re interested in reading about Deep Learning and Computer Vision news, you can check out my newsletter here.
