M2M Day 184: How to hack your way through someone else’s code when you have no clue what you’re doing.

Max Deutsch
May 4, 2017


This post is part of Month to Master, a 12-month accelerated learning project. For May, my goal is to build the software part of a self-driving car.

Yesterday, I introduced the primary method I use for learning new technical skills, which I call the V-Method. Using this method, I start my studies with a highly specific example (one that closely simulates my desired end result), and use it as an entry point to learn the relevant underlying concepts in a tangible, organized, and hierarchical way.

Therefore, today, my goal was to get some code running on my computer that I may be able to use for my self-driving car.

Finding some code

After Googling around a little bit, I found a project on GitHub that suited my needs well. The code takes an input image of the road and attempts to identify where the lane lines are.

So, from this…

To this…

My goal for today was to try to replicate this result with the code running on my own computer.

Getting set up

Before I could run any code, I needed to make sure my computer was set up with the appropriate software libraries. In particular, I needed to install the numpy, matplotlib, and OpenCV libraries for Python.
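A quick way to confirm the setup worked (my own sanity check, not something from the original project) is to try importing each library and printing its version:

```python
import importlib

def check_libraries(names=("numpy", "matplotlib", "cv2")):
    """Return a dict mapping each library name to its version string,
    or None if the library isn't installed."""
    versions = {}
    for name in names:
        try:
            module = importlib.import_module(name)
            versions[name] = getattr(module, "__version__", "unknown")
        except ImportError:
            versions[name] = None
    return versions

if __name__ == "__main__":
    for lib, version in check_libraries().items():
        print(f"{lib}: {version if version else 'NOT INSTALLED'}")
```

Note that OpenCV installs under the module name `cv2`, not `opencv`.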

After getting oriented in Terminal (the command line on Mac) and finding some instructions online, I ran into my first error…

Rather than trying to figure out exactly what this error means or how to fix it myself, I used the most effective debugging technique I know: I copied and pasted the entire error message into Google.

I clicked on the third link and found this answer:

After running these few commands (by copying and pasting them into Terminal and pressing Enter), everything seemed to work properly.

I was officially all set up (at least for now).

Running the code

Now that I was set up, it was time to run the code. After using Google again to augment my limited Terminal knowledge, I got the code to run, and nothing seemed to break.

I got this output…

Cool! So, these numbers are essentially the mathematical representation of the two lane lines.
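The post’s screenshot doesn’t reproduce here, but a common way these pipelines represent a lane line mathematically is as a slope and intercept fit through detected edge points. A tiny illustration (the coordinates are made up):

```python
import numpy as np

# Hypothetical points sampled along one detected lane line.
xs = np.array([120, 180, 240, 300])
ys = np.array([540, 480, 420, 360])

# Fit a degree-1 polynomial: y = slope * x + intercept.
slope, intercept = np.polyfit(xs, ys, deg=1)
print(slope, intercept)  # -1.0 660.0
```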

So far, so good. But, where are the visuals?

In the GitHub project I was trying to replicate, the code also produced these nice plots…

As well as the image with the red overlays…

Sadly, my code wasn’t outputting either of these, nor was it saving any images to my local directory.

So, once again, I turned back to Google, and searched “save image python”, in hopes of figuring out how to save an image of the output.

Google nicely told me to use the function cv2.imwrite(), so I did, and it worked. And by “worked”, I mean… I was able to save a grayscale image of the photo with the lane lines visualized in white.

And here’s another…

And one more…

Now, what?

This is a good start.

Basically, once I can identify the lane lines effectively, I can use this information to teach my self-driving car how to steer (staying within the lane lines). Also, since a video is just a sequence of many photos, processing video should work in the same way (as long as I can figure out how to break a video apart into photos in real time for processing).

Tomorrow, since the code is more or less working, I will try to go through the project line-by-line and start uncovering how it actually works.

Until then, the lesson is this: If you are willing to accept that you often don’t have all the answers, but are willing to Google around and experiment a little bit, you can make progress anyway.

Sure, I don’t have a strong conceptual understanding yet, but I now have a functional example that I can use as my starting point.

Read the next post. Read the previous post.

Max Deutsch is an obsessive learner, product builder, guinea pig for Month to Master, and founder at Openmind.

If you want to follow along with Max’s year-long accelerated learning project, make sure to follow this Medium account.
