This post was written by Aidan Abdulali.

In this post, we explore the deep connection between ordinary differential equations and residual networks, leading to a new deep learning component, the Neural ODE. We explain the math that unlocks the training of this component and illustrate some of the results. From a bird’s-eye perspective, one of the exciting parts of the Neural ODE architecture by Ricky T. Q. Chen, Yulia Rubanova, Jesse Bettencourt, and David Duvenaud is its connection to physics. ODEs are often used to describe the time derivatives of a physical system, referred to as its dynamics. Knowing the dynamics lets us model how an environment changes over time, like a physics simulation, so we can take any starting condition and predict how it will evolve. With Neural ODEs, we don’t write down explicit ODEs to capture the dynamics; we learn them via machine learning. This sidesteps the difficulty of hand-modeling dynamics from hard-to-interpret data. Giving up interpretability has its costs, but in many situations it matters more to have a strong model of what will happen in the future than to oversimplify by modeling only the variables we know. …
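To make the ResNet–ODE connection concrete, here is a minimal sketch (our own illustration, not code from the paper; the fixed linear map `f` stands in for a learned dynamics network). A residual block computes h + f(h), which is exactly one Euler step of dh/dt = f(h) with step size 1:

```python
import numpy as np

# Stand-in for learned dynamics: in a Neural ODE, f would be a small
# neural network f(h, t). A fixed rotation is used here for illustration.
W = np.array([[0.0, -1.0],
              [1.0,  0.0]])

def f(h):
    return W @ h

def residual_block(h):
    # ResNet update: h_{t+1} = h_t + f(h_t)
    return h + f(h)

def euler_step(h, dt):
    # Euler step of dh/dt = f(h); with dt = 1.0 it matches the residual block
    return h + dt * f(h)

h0 = np.array([1.0, 0.0])
print(residual_block(h0))      # [1. 1.]
print(euler_step(h0, dt=1.0))  # identical: [1. 1.]
print(euler_step(h0, dt=0.1))  # a finer step along the same dynamics
```

Shrinking the step size (and letting an adaptive ODE solver choose the steps) is what takes us from a stack of discrete residual blocks to a continuous-depth model.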


26 Dec 2017 | Shannon Shih and Pravin Ravishanker

Trees are great. They provide food, air, shade, and all the other good stuff we enjoy in life. Decision trees, however, are even cooler. True to their name, decision trees allow us to figure out what to do with all the great data we have in life.

Like it or not, you have been working with decision trees your entire life. When you say, “If it’s raining, I will bring an umbrella,” you’ve just constructed a simple decision tree.

[Figure: a simple decision tree for deciding whether to bring an umbrella]

It’s a pretty small tree, and it doesn’t account for all situations. Likewise, this simplistic decision-making process won’t work very well in the real world. What if it’s windy? “If it’s raining and it isn’t too windy,” you’ll say, “I will bring an umbrella. …
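To see the structure, here is the umbrella tree written out as code (a sketch of our own; the function name is hypothetical and the branches are just the example from the text). Each if is an internal node asking a question, and each return is a leaf holding a decision:

```python
def bring_umbrella(raining: bool, windy: bool) -> bool:
    # Each `if` is an internal node of the tree; each `return` is a leaf.
    if raining:
        if windy:
            return False  # raining but too windy for an umbrella
        return True       # raining and calm: bring the umbrella
    return False          # not raining: no umbrella needed

print(bring_umbrella(raining=True, windy=False))  # True
print(bring_umbrella(raining=True, windy=True))   # False
```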


13 Jul 2017 | Daniel Geng and Shannon Shih

Here’s a riddle:

[Image: the riddle, featuring Doge]

So what does this have to do with machine learning? Well, it turns out that machine learning algorithms are not that much different from our friend Doge: they often run the risk of over-extrapolating or over-interpolating from the data that they are trained on.

There is a very delicate balancing act when machine learning algorithms try to predict things. On the one hand, we want our algorithm to model the training data very closely; otherwise, we’ll miss relevant features and interesting trends. …
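As a quick illustration of that balancing act (our own sketch, not from the post), fitting polynomials of increasing degree to noisy data drives the training error toward zero even as the model starts chasing noise:

```python
import numpy as np

# Noisy samples of a sine curve
rng = np.random.default_rng(0)
x = np.linspace(0, 1, 15)
y = np.sin(2 * np.pi * x) + rng.normal(scale=0.2, size=x.shape)

for degree in (1, 3, 10):
    coeffs = np.polyfit(x, y, degree)                     # least-squares fit
    train_mse = np.mean((np.polyval(coeffs, x) - y) ** 2)
    print(f"degree {degree:2d}: training MSE = {train_mse:.4f}")

# Degree 1 is too rigid to capture the sine shape (underfitting); degree 10
# tracks the training points almost perfectly but wiggles wildly between
# them (overfitting). Training error alone can't tell these apart.
```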

About

Machine Learning @ Berkeley

A student-run organization at UC Berkeley working on ML applications in industry and academic research, and on making ML education more accessible to all
