Udacity Students on Computer Vision, Tiny Neural Networks, and Careers

Finding the right parameters for your Computer Vision algorithm

maunesh

In this post maunesh discusses the challenges of tuning parameters in computer vision algorithms, specifically using the OpenCV library. maunesh built a GUI for parameter tuning, to help him develop intuition for the effect of each parameter. He published the GUI to GitHub so other students can use it, too!

For the Canny edge detection algorithm to work well, we need to tune three main parameters: the kernel size of the Gaussian filter, and the upper and lower bounds for hysteresis thresholding. More info on this can be found here. Using a GUI tool, I am trying to determine the best values of these parameters for my input.

Behavioral Cloning For Self Driving Cars

Mojtaba Valipour

In this post, Mojtaba walks through the development of his behavioral cloning model in detail. I particularly like the graphs he built to visualize the data set and figure out which approaches would be most promising for data augmentation.

The first step in training a model on a specific dataset is always to visualize the dataset itself. There are many visualization techniques that can be used, but I chose the most straightforward option here.
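A straightforward version of such a visualization is a histogram of steering angles, which reveals the class imbalance (lots of near-zero angles) that motivates augmentation. This is a hedged sketch, not Mojtaba's code: the synthetic angles below stand in for a real driving log.

```python
# Sketch: histogram of steering angles from a behavioral-cloning dataset.
# Assumes numpy and matplotlib; the angle data here is synthetic.
import numpy as np
import matplotlib
matplotlib.use("Agg")  # render to file, no display needed
import matplotlib.pyplot as plt

def plot_steering_histogram(angles, bins=25, path="steering_hist.png"):
    """Plot the distribution of steering angles; a tall spike near zero
    suggests the model will be biased toward driving straight."""
    fig, ax = plt.subplots()
    ax.hist(angles, bins=bins)
    ax.set_xlabel("steering angle")
    ax.set_ylabel("frames")
    fig.savefig(path)
    plt.close(fig)
    return path

rng = np.random.default_rng(0)
# Most simulator frames are near-straight driving, so cluster angles near 0.
angles = np.clip(rng.normal(0.0, 0.2, size=1000), -1.0, 1.0)
out = plot_steering_histogram(angles)
```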

Building a lane detection system using Python 3 and OpenCV

Galen Ballew

Galen explains his image processing pipeline for the first project of the program — Finding Lane Lines — really clearly. In particular, he has an admirably practical explanation of Hough space.

Pixels are considered points in XY space
hough_lines() transforms these points into lines inside of Hough space
Wherever these lines intersect, there is a point of intersection in Hough space
Each point of intersection corresponds to a line in XY space

What kind of background do you need to get into Machine Learning?

Chase Schwalbach

This is a great post for anybody interested in learning about self-driving cars, but concerned they might not be up to the challenge.

I’ll put the summary right up top — if I can do it, you can too. I wanted to share this post to show some of the work I’m doing with Udacity’s Self-Driving Car Nanodegree, and I also want to share some of my back story to show you that if I can do it, there’s nothing stopping you. The only thing that got me to this point is consistent, sustained effort.

Self-driving car in a simulator with a tiny neural network

Mengxi Wu

Mengxi wasn’t satisfied with merely training a convolutional neural network that successfully learns end-to-end driving in the Udacity simulator. He systematically removed layers from his network and pre-processed the images until he was able to drive the simulated car with a tiny network of only 63 parameters!

I tried grayscale converted directly from RGB, but the car had problems at the first turn after the bridge. In that turn, a large portion of the road has no curb, and the car went straight through that opening into the dirt. This behavior seems related to the fact that the road is almost indistinguishable from the dirt in grayscale. I then looked into other color spaces, and found that the road and the dirt can be separated more clearly in the S channel of the HSV color space.