ChiPy Mentorship pt 2

Hi all! Thank you for coming back to read part two of my ChiPy mentorship blog! You can check out my last post here.

Computer Vision + Machine Learning + Control Systems = ???

You might think that as a junior, my biggest problems in high school would be grades, the SAT, or even finding the right college. But my problem goes much deeper than that. I have been tasked with the age-old question: how do I implement Machine Learning in my Computer Vision application?

Meeting with Nick over the last couple of weeks has been great, and we discussed a multitude of ways that I could enhance my current system:

  1. Use ML to find targets in the images
  2. Use ML to create an algorithm that converts 2D camera frames into 3D waypoints the robot will travel along
  3. Use ML to control the robot's exact movements in a control loop

After extensive research and discussion, I opted for option #2 for a few reasons. Option 1 was not really essential to my team, since using pre-programmed ranges and OpenCV was already very efficient for finding the targets. A solution for option 3 already exists in Pathfinder. Also, option 2 seemed the coolest, to be honest… :)

First Attempts:

So to start the project, I decided to look into how machine learning actually works and to experiment with some code. The sklearn library and Jupyter Notebook made setting up my environment quite easy. My project requires a Regressor as opposed to a Classifier because I am looking for continuous output, not discrete classes. The main reason is that the robot's movements can be in a variety of directions, and it may even rotate while moving, so continuous output accounts for this.

You can check out my code below in these two Jupyter Notebooks. Basically, I auto-generate a set of (almost) random data and then use the SGD and neural-network regressors to fit it. The resulting model should be able to "accurately" calculate a projected direction the robot needs to travel based on a couple of dimensions from an image. As the data is all fabricated, the results did not come out perfectly, and I hope to address this by collecting actual data.
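For readers who want the gist without opening the notebooks, here is a minimal sketch of that experiment. The feature names, coefficients, and noise level are all placeholders I made up for illustration, not my actual fabricated dataset:

```python
import numpy as np
from sklearn.linear_model import SGDRegressor
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Fabricate training data: two image measurements (say, a target's pixel
# width and x-offset) mapped to a continuous heading value, plus noise.
rng = np.random.default_rng(0)
X = rng.uniform(0, 320, size=(1000, 2))
y = 0.05 * X[:, 0] - 0.02 * X[:, 1] + rng.normal(0, 0.5, size=1000)

# Both SGD and neural nets are sensitive to feature scale, so standardize.
sgd = make_pipeline(StandardScaler(), SGDRegressor(max_iter=1000))
nn = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0),
)

sgd.fit(X, y)
nn.fit(X, y)

# Each model now predicts a continuous heading for unseen measurements,
# which is exactly why a Regressor fits this problem and a Classifier doesn't.
print(sgd.predict([[160.0, 100.0]]))
print(nn.predict([[160.0, 100.0]]))
```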

Data Collection:

After weeks of unsuccessfully trying to get proper data from my team's robot due to time constraints and other conflicts, I decided to try to recreate my robot's environment virtually. I realized that if I got this section working, I could get thousands and thousands of data points without lifting a finger… (or a 110 lb robot)

Currently, I have a program that creates a 3D world in which the rectangles (the targets) are positioned. This world is then projected onto a 2D screen where the camera is modeled. By moving these rectangles around in the 3D world, I can see where they would appear on a 2D screen. This models exactly how the robot would move around during competition in order to reach these targets.
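The core of that projection is a pinhole camera model. Here is a tiny sketch of the idea; the focal length and image-center values are placeholder intrinsics, not my camera's real calibration:

```python
import numpy as np

def project_point(point_3d, focal_px=500.0, cx=160.0, cy=120.0):
    """Project a 3D point (x, y, z) in camera coordinates onto a 2D image
    plane with a simple pinhole model. focal_px, cx, and cy are made-up
    intrinsics for illustration only."""
    x, y, z = point_3d
    if z <= 0:
        raise ValueError("point is behind the camera")
    # Perspective divide: farther points land closer to the image center.
    u = focal_px * x / z + cx
    v = focal_px * y / z + cy
    return np.array([u, v])

# A target 2 m ahead and 0.5 m to the right lands right of image center.
print(project_point((0.5, 0.0, 2.0)))  # [285. 120.]
```

Moving a rectangle's corners through this function is what turns the 3D world into the 2D camera view.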

Next Steps

  1. Finish creating 3D environment
  2. Map 3D world points to 2D camera view
  3. Train a Regression model that will take any 2D points and turn them into 3D waypoints that the robot can follow
  4. Integrate with current robot code
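Steps 2 and 3 above can be sketched end to end in miniature: project synthetic 3D points to 2D, then train a regressor to invert that mapping. Everything here is an assumption for illustration, including the placeholder intrinsics and the idea of using apparent target size as a third feature to resolve depth:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Generate 3D points in front of a virtual camera (units: meters).
rng = np.random.default_rng(1)
points_3d = np.column_stack([
    rng.uniform(-2, 2, 5000),   # x
    rng.uniform(-1, 1, 5000),   # y
    rng.uniform(1, 6, 5000),    # z, always in front of the camera
])

# Pinhole projection with placeholder intrinsics.
f, cx, cy = 500.0, 160.0, 120.0
u = f * points_3d[:, 0] / points_3d[:, 2] + cx
v = f * points_3d[:, 1] / points_3d[:, 2] + cy

# A lone (u, v) pair is ambiguous in depth, so add apparent target size
# (proportional to 1/z) -- something vision code can measure from the
# rectangle's pixel width.
size = 1.0 / points_3d[:, 2]
X = np.column_stack([u, v, size])

# Train a regressor to recover the 3D point from the 2D features.
model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=3000, random_state=0),
)
model.fit(X, points_3d)

# This feature vector corresponds to a point at roughly (0.5, 0.0, 2.0).
pred = model.predict([[285.0, 120.0, 0.5]])
print(pred)
```

On real collected data the inputs would come from the camera feed instead of a synthetic projection, but the shape of the pipeline stays the same.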

Hope you liked this blog post! Check out my GitHub, where all the code is stored (and eventually documented :') )
