Shifting Gears for AI Saturdays Week 5 in Lancaster

One of the great opportunities in bringing an entirely new program to a community is that, in the first year, you have a lot of flexibility to shift gears when things aren’t working at full efficiency. In our fifth week, we decided to shift into a different gear to move faster over the terrain ahead.

For our intro level, our white and yellow belts, the Google Machine Learning Crash Course was good at the beginning for discussing basic concepts. However, TensorFlow’s setup for a Linear Regressor was simply too much: over 180 lines of code, and the people just beginning Python through our DataCamp program couldn’t keep up. By the time we got into splitting up test and validation sets and describing input functions, well, let’s just say it was like drinking from a firehose.

So, we took the opportunity to make some adjustments.

First, in preparation for the GMCC Validation programming exercise, I added a scikit-learn version of the Linear Regressor to the planned exercises. I duplicated the TensorFlow data import and modeling as closely as I could and showcased how to do the same thing in roughly 50 lines of code instead (a sketch of the approach follows below). People breathed a sigh of relief, and it was a good teaching moment on using the best tool for the job.
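
For reference, here's a minimal sketch of what the scikit-learn version looks like in spirit. The file name, column names, and split ratio are illustrative stand-ins, not the exact exercise code:

```python
import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

# Load the dataset (file and column names are hypothetical placeholders)
df = pd.read_csv("california_housing_train.csv")
X = df[["total_rooms"]]       # a single feature, as in a simple linear regressor
y = df["median_house_value"]  # the target to predict

# Split the data into training and validation sets
X_train, X_val, y_train, y_val = train_test_split(
    X, y, test_size=0.2, random_state=42
)

# Fit the linear regressor and evaluate on the held-out set
model = LinearRegression()
model.fit(X_train, y_train)
preds = model.predict(X_val)
print("Validation RMSE:", mean_squared_error(y_val, preds) ** 0.5)
```

The whole pipeline, from import to validation error, fits on one screen, which was the point.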

Also, we’re setting aside the rest of the GMCC and moving on to Fast.ai for three weeks. It’s probably a better idea to expose introductory-level students to a broad spectrum of options and then let them specialize in a particular track, much like we do in liberal arts colleges. So, we’re going into classification, CNNs, and computer vision next, before our final projects.

As for the red and black belts, according to Wes Roberts they had an awesome class covering six topics:

1. Reviewed the transition from linear regression to the perceptron to the MLP.

2. Covered six ways to mitigate overfitting, including dropout, early stopping, learning rate annealing, and regularization (a Keras sketch of several of these appears after this list).

3. Dove deeper into regularization and looked at the math of L1 (absolute deviation) and L2 (squared error); the penalties are written out just below the list.

4. Used the TensorFlow Playground to experiment with regularization and different activation functions. We talked a lot about what the visualizations mean.

5. Ran a quiz to see where everyone stood and to fill in any gaps.

6. Held a Q&A session where we dove into specific examples and connected multiple models together.
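
To make item 2 a bit more concrete, here's a hedged Keras sketch of how several of those mitigations (an L2 weight penalty, dropout, early stopping, and learning-rate annealing) plug into a small MLP. The layer sizes and hyperparameters are illustrative, not the class's actual code:

```python
import tensorflow as tf

# Small MLP with two mitigations baked into the architecture:
# an L2 weight penalty and a dropout layer.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(
        64, activation="relu", input_shape=(10,),
        kernel_regularizer=tf.keras.regularizers.l2(0.01),
    ),
    tf.keras.layers.Dropout(0.5),  # randomly zero half the activations during training
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")

callbacks = [
    # Early stopping: halt training when validation loss stops improving
    tf.keras.callbacks.EarlyStopping(
        monitor="val_loss", patience=5, restore_best_weights=True
    ),
    # Learning-rate annealing: shrink the learning rate when validation loss plateaus
    tf.keras.callbacks.ReduceLROnPlateau(
        monitor="val_loss", factor=0.5, patience=3
    ),
]

# model.fit(X_train, y_train, validation_data=(X_val, y_val),
#           epochs=100, callbacks=callbacks)
```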
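
And for item 3, the two penalties can be written as terms added to the training loss, with λ as the regularization strength (the symbol choice is mine, but the idea matches what the class covered):

```latex
J_{L1}(\mathbf{w}) = \mathrm{Loss}(\mathbf{w}) + \lambda \sum_i |w_i|
\qquad
J_{L2}(\mathbf{w}) = \mathrm{Loss}(\mathbf{w}) + \lambda \sum_i w_i^2
```

L1 tends to push individual weights exactly to zero, while L2 shrinks them smoothly, a difference you can watch play out in the Playground experiments from item 4.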


So, it looks like we have quite a setup heading into the second half of the program. We’re looking forward to taking the next steps and moving deeper into AI!