FSDL Bootcamp 2018 — Day 3

Gerard Simons
9 min read · Oct 10, 2018

--

Hello world, and welcome to my third and final post describing my experiences attending the Full Stack Deep Learning Bootcamp at UC Berkeley in the summer of 2018. Find a quick overview of the whole series here, and the previous blog post here.

I will describe each lecture / lab session and include a small takeaways section that describes the most important things I learned from said lecture or lab.

Lecture 9 — Research Directions

All this very practical full stack stuff is great and truly is what makes this bootcamp what it is, but sometimes you just want to be a little more out there. This lecture gave us just that: a high-level overview of future research directions. It discussed some of the more exciting new fields, such as

  • Few-shot learning & Meta-learning
  • Reinforcement Learning
  • Imitation Learning
  • Domain Randomization
  • Architecture Search
  • Unsupervised Learning
  • Lifelong Learning

I highlighted a few that I will discuss in somewhat more detail here. Pieter Abbeel, who gave this lecture, I know mostly from his work on Reinforcement Learning. As we use a lot of these techniques at Captain AI, I was happy to see the topic make an appearance.

The lecture covered some of the basics of RL as well as some more advanced topics, such as Meta-RL (SNAIL and MAML). This is very interesting research, but I found it difficult to incorporate into an existing RL setup; I think these techniques may need some time to make their way into industry.

It also touched on imitation learning, which can be used to learn a task from a human expert. This can be interesting as it is usually much more data efficient than learning from scratch.

Another really cool topic that relates to using RL in a simulator is domain randomization (sometimes called domain confusion). The basic idea is that it is often extremely difficult to simulate something very accurately, but it is usually quite feasible to randomize your environment so that your AI becomes resilient to this kind of variance, in effect making the AI treat reality as simply another configuration of the environment's parameters.
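To make the idea concrete, here is a toy sketch (entirely illustrative, not from the lecture) of what randomizing a simulator's parameters per training episode might look like:

```python
import random

def sample_environment():
    # Each training episode gets a freshly perturbed "world", so the policy
    # cannot overfit to one exact simulation and has to become robust.
    return {
        "friction": random.uniform(0.5, 1.5),
        "mass": random.uniform(0.8, 1.2),
        "lighting": random.uniform(0.3, 1.0),
        "sensor_noise": random.uniform(0.0, 0.05),
    }

for episode in range(3):
    params = sample_environment()
    print(f"episode {episode}: {params}")
    # train_policy(make_simulator(**params))  # hypothetical training step
```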

Architecture search dealt with having an AI come up with the network architectures themselves. Some pretty interesting and strange networks have been found this way, and it seems a logical step for AI to take, shifting the emphasis of ML even further towards data collection and preparation rather than the development of models. The unsupervised learning topic discussed some interesting generative adversarial network (GAN) research. Lifelong learning was about how to enable deployed models to adapt to a changing world (data distribution shift).

Another nice ending to the lecture was on how to digest research. Given the staggering number of ML publications these days, it's hard to allocate your own mental resources (perhaps an AI could decide?) to all this new material. There was a rundown of the entire process, but here's a short summary: don't read the entire paper. Go through the abstract, figures and plots first. Try to find videos on the work (I personally recommend Two Minute Papers) and only drill deeper into the paper when you think it will be valuable to you.

Takeaways: Loved the RL recap, even though I knew most of the material already. Anyone interested in RL will really like this talk. In general it was cool to see the latest research across the board, and the how-to-read-a-paper section was a really nice bonus that I will be sure to apply more often. These kinds of talks are always good for getting you even more excited about the future.

Lab 6 & 7 — Deployment

I suppose this one goes all the way on top of the fine stack we have so carefully constructed over the past days: deployment. As machine learning matures, developing methods for productionizing (what a sweet tech word) ML experiments becomes crucial.

In these consecutive labs we implemented some tests, similar to those you would see in traditional software development. These tests run inference on sample data and check that the model still meets certain accuracy and performance thresholds.
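A minimal sketch of what such a test could look like, assuming a hypothetical predictor class and accuracy threshold (this is not the lab's actual code):

```python
import unittest

class TestCharacterPredictor(unittest.TestCase):
    def test_accuracy_above_threshold(self):
        # predictor = CharacterPredictor()        # hypothetical trained-model wrapper
        # dataset = EmnistLinesDataset()          # hypothetical held-out test data
        # accuracy = predictor.evaluate(dataset)  # run inference over the test set
        accuracy = 0.95                           # placeholder so the sketch runs as-is
        self.assertGreater(accuracy, 0.8)         # fail the build if accuracy regresses

if __name__ == "__main__":
    unittest.main()
```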

We then learned how to connect our GitHub repo to CircleCI, a continuous integration platform. This platform runs the tests at every commit, ensuring new commits do not break previous results. Since you'd be using a version control system like Git, you could always revert if some change broke the integration cycle. If your data were similarly versioned, you could do the same there. It's important to realise this distinction from traditional software: the issue could also lie with your data!

The next step was using the awesome Docker tool to containerize your classifier. Usually you would wrap your classifier in a simple web app. The labs use Flask here, which is a light-weight Python web framework; its built-in server is not really meant for production, but for a simple exercise like this it's fine. The web app is then packaged into a Docker image, from which containers can be run. Using a service such as AWS Lambda (or, for example, Google Cloud Functions on Google Cloud Platform) we could then run several of these containers. This is what is called a serverless architecture, and it's very popular these days, as it's easy to scale up and down with your traffic or business needs.
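As a rough sketch, wrapping a classifier in Flask might look something like this (the predictor class and route are assumptions for illustration, not the lab's exact code):

```python
from flask import Flask, request, jsonify

app = Flask(__name__)
# In a real app the model is loaded once at startup, not on every request:
# predictor = LinePredictor()  # hypothetical wrapper around the trained model

@app.route("/v1/predict", methods=["POST"])
def predict():
    payload = request.get_json()
    image = payload["image"]                 # e.g. a base64-encoded image
    # pred, conf = predictor.predict(image)  # run inference on the input
    pred, conf = "dummy prediction", 1.0     # placeholder so the sketch runs
    return jsonify({"prediction": pred, "confidence": conf})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8000)  # Flask's dev server: fine for a lab, not for production
```

The Docker image would then, roughly speaking, copy this app and its dependencies in and expose the port, and the serverless platform takes care of spinning containers up and down.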

Find the final solution here.

Lecture 11 — Jobs

Jobs, jobs, jobs. Luckily I already have a job I quite like. And we might have one for you too (contact me if you like RL / ML / coding!). This lecture helps aspiring ML professionals find a decent job. Considering the bootcamp was located close to Silicon Valley, I couldn’t imagine a better place to find a job. People are scarce, demand is huge. Still, tech interviews are known to be brutal at times. This talk could help you prepare for any such interview.

First, it laid out the different roles there are (DevOps, Data and ML engineers, ML Researcher, and Data Scientist) and how different roles make up a good team. It was interesting to note that there is no clear consensus yet on what makes for a perfect ML team. Most do believe that there should be a mix of software engineering and ML skills. There is, however, some dispute about ML researchers, who are often hard to integrate into conventional tech teams. Also, some think data engineering should be its own team, whereas others feel you should sprinkle those people among existing teams. I found this very interesting myself, as we are looking at building our very first ML team at Captain AI.

After some helpful charts for deciding which role is right for you, there were some useful pointers on nailing the interview. These ranged from problem setup (what data to look for, which model to use) to algorithm knowledge (what is an LSTM?), ML theory and debugging methods.

Takeaways: Great overview of the different ML roles and what to expect when you want to apply for some jobs.

Guest Lecture 3 — Yangqing Jia

TensorFlow, Keras, PyTorch, Keras inside of TensorFlow. What's with all the frameworks, and does it even matter? Yangqing was the perfect guy to make sense of all the different frameworks and tools, as he is the creator of the original Caffe. He did a lot of work on TensorFlow at Google, helped develop ONNX (the Open Neural Network eXchange format), and is now at Facebook working on PyTorch, which has been getting a lot of attention as a nice, easy-to-use alternative to the dominance of TensorFlow.

He explained the differences quite well between the different frameworks in terms of developer efficiency (ease of debugging, simplicity, intuitiveness and interactive development) and infrastructure efficiency (implementation, scalability, model definitions, cross platform capabilities). Choosing the right framework means choosing one that finds the right balance between the two for your specific project.

Frameworks and tools of the trade

He went on to compare the major differences between computation-graph frameworks like TensorFlow, which are not always intuitive to understand, design and maintain, but are easy to optimize and serialize for production, and imperative toolkits with no separate execution engine, which are often easier to design and maintain but more difficult to optimize and deploy across multiple platforms.
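To illustrate the contrast, here is a tiny side-by-side sketch, assuming PyTorch and TensorFlow 1.x (the graph-based API in use at the time of the bootcamp):

```python
import torch
import tensorflow as tf

# Imperative (PyTorch): operations execute immediately, so intermediate
# values can be inspected with plain Python, which makes debugging easy.
x = torch.tensor([1.0, 2.0, 3.0])
y = x * 2 + 1
print(y)  # tensor([3., 5., 7.]) is available right away

# Graph-based (TensorFlow 1.x): you first declare a symbolic graph, then
# execute it in a session. Harder to introspect, but the whole graph can be
# optimized and serialized for production.
a = tf.placeholder(tf.float32, shape=[3])
b = a * 2 + 1
with tf.Session() as sess:
    print(sess.run(b, feed_dict={a: [1.0, 2.0, 3.0]}))  # [3. 5. 7.]
```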

He then showed the decisions made at Facebook regarding framework choices. They chose a PyTorch → ONNX → Caffe2 combination: using PyTorch to easily design and experiment with neural network models, then, when an experiment was deemed successful, exporting the model to the ONNX format and importing it into Caffe2, which has greater infrastructure efficiency.
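The export step in that pipeline could look roughly like this (the model and filename are made up for illustration; it only shows the PyTorch-to-ONNX half of the pipeline):

```python
import torch
import torch.nn as nn

# A stand-in model; in practice this is the network you experimented with in PyTorch.
model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))
model.eval()

dummy_input = torch.randn(1, 4)  # example input that defines the graph's shape
torch.onnx.export(model, dummy_input, "model.onnx")  # serialized, framework-neutral graph
# The resulting .onnx file can then be loaded by a runtime such as Caffe2 for serving.
```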

At the end of the talk there were a few more tips across the whole stack. Please don't use AlexNet anymore: it was revolutionary at the time, but there are much better models now. Unification helps everybody, referring to the attention people should give to formats such as ONNX, which allow everyone to freely exchange models. Another good one was to invest in experiment management, as it is very important but very hard to quantify and track without good tools.

Takeaways: Great overview of the different frameworks. Refreshing to see a different take on it from someone who really helped build many of the existing tools, rather than someone who tells you to use TensorFlow or PyTorch for fairly arbitrary reasons. It's interesting to see that the ONNX format is really taking off and is actually used by Facebook. It's great that it's apparently very much possible to do experimentation in a different framework than the one you use in production, although I guess you would need double the knowledge too.

Guest Lecture 4 — Lukas Biewald

The final lecture of the bootcamp. A few people had already left, and the rest were looking a little exhausted, me included. Thankfully this lecture would turn out to be great for getting you back in your seat, and occasionally falling off it entirely from laughing too hard.

Lukas Biewald, co-founder of CrowdFlower (now known as Figure Eight, a huge data-labeling company), also created Weights & Biases, the experiment management system we had been using throughout the later labs.

Not very technical, but definitely interesting. The talk ranged over a variety of topics, from Kaggle to autonomous cars and why they aren't here yet. It also discussed data drift, which had come up before during the bootcamp (it seems to be a really big problem in business!), where models degrade because they were trained on data that no longer matches what they see in production.
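As a toy illustration of the data drift problem (my own example, not from the talk), one simple check is to compare a feature's distribution in the training data against recent production traffic:

```python
import numpy as np
from scipy.stats import ks_2samp

train_feature = np.random.normal(0.0, 1.0, size=5000)  # stand-in for the training data
prod_feature = np.random.normal(0.4, 1.1, size=5000)   # stand-in for recent production data

stat, p_value = ks_2samp(train_feature, prod_feature)  # two-sample Kolmogorov-Smirnov test
if p_value < 0.01:
    print(f"Distribution shift detected (KS statistic {stat:.3f}); consider retraining.")
```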

Another part of the talk was about the role of the human in the loop, which relates closely to his work at Figure Eight: how you can most effectively use human expert knowledge and make sure your data is clean.

The ugly truth about the sexiest job in the world?

Takeaways: It’s hard to summarize this talk as it is filled with cool examples, thoughts and learnings. Definitely worth checking out this guy and some of his talks if you enjoy getting a whole range of nuggets of knowledge while also being entertained.

Conclusion

Well, that was it. Those were three very intense but interesting days. After the last lecture we had a few more beers and everything drew to a close. Fortunately I had some time left for R&R and enjoying California! I hope this was useful for you, and that you get the chance to attend such a course yourself in the near future.

--


Gerard Simons

Data enthusiast, regular old computer scientist at heart. Publications in Computer Graphics and Data Visualisation.