Measuring the quality of NYC Bike Lanes through street imagery.

Over the past few months, I have been working with ARGO on SQUID-Bike as part of my capstone project, which aims to measure citywide bike lane quality using street imagery collected with the OpenStreetCam (OSC) app. In an earlier post, I explained how we collect street imagery data with OSC.

In this post, I present a few image processing and computer vision techniques that my team and I have been working on.

We have also experimented with Microsoft’s Custom Vision product to quickly produce classifications simply by labelling bike lane imagery.


The Python programming language has a number of libraries that implement powerful computer vision and image processing techniques — OpenCV and SciPy are the more widely known ones, and they are the two I use in this post.

Prof. Greg Dobler is the image processing expert at NYU CUSP, and we consulted with him while building these algorithms.

We are trying to measure bike lane quality by combining four approaches, borrowing from NACTO’s “Urban Street Design Guide”:

  • Ride Quality as measured using the accelerometer data from OpenStreetCam, the app that collects our data.
  • Color of the bike lane — green lanes are higher in quality than unpainted bike lanes because they are more visible to bike riders.
  • Symbols and pavement markings — well-marked bike lanes are higher in quality than poorly marked or unmarked bike lanes.
  • Visible street defects, such as cracks and potholes. (the fewer, the better!)

The following focuses on measuring lane markings and visible defects. The scoring system is naive: a first iteration of a process similar to those used in many computer vision applications.
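As a rough illustration of how the four indicators could eventually be combined, here is a minimal sketch of a composite score. The equal weights and the 0–100 normalization of each input are purely illustrative assumptions on my part, not values from the project:

```python
def bike_lane_quality(ride_quality, color_score, marking_score, defect_score,
                      weights=(0.25, 0.25, 0.25, 0.25)):
    """Naive composite quality score on a 0-100 scale.

    Assumes each input is already normalized to 0-100; defect_score is
    inverted so that fewer defects yields a higher contribution.
    """
    w1, w2, w3, w4 = weights
    return (w1 * ride_quality + w2 * color_score
            + w3 * marking_score + w4 * (100 - defect_score))
```

For example, a lane with perfect ride quality and color, half of its markings intact, and a 10% defect score would come out at `bike_lane_quality(100, 100, 50, 10)`, i.e. 85.0.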

Detecting Lane Markings

  1. Reading the original image using SciPy
import scipy.ndimage as nd
# note: nd.imread was removed in SciPy 1.2;
# imageio.imread is a drop-in replacement on newer versions
img = nd.imread(photoName)

2. Cropping the image to its lower half (to remove non-bike-lane features) and clipping the bottom 10% of the picture as well (to remove my bike tire)

# number of rows and columns
nrow, ncol = img.shape[:2]
# slicing from half the picture down to 90% of its height removes
# the street ahead and also the bike tire
img = img[nrow//2:(nrow - nrow//10), :, :]

3. Filtering the cropped image to only white pixels using a color threshold.

# separate the 3 color channels
red, grn, blu = img.transpose(2, 0, 1)
# create and apply a threshold to keep only white pixels
thrs = 200
wind = (red > thrs) & (grn > thrs) & (blu > thrs)

4. Blurring the cropped + filtered image using a Gaussian blur with a standard deviation (sigma) of 40 pixels

# apply a Gaussian filter to the white-pixel mask
gf = nd.filters.gaussian_filter
blurPhoto = gf(1.0 * wind, 40)

5. Binarizing the cropped + filtered + blurred image, i.e. setting pixels brighter than a threshold to 1 and all other pixels to 0.

# keep pixels of the blurred image above a threshold;
# the threshold is the gray level that
# separates the white from the black
threshold = 0.16
wreg = blurPhoto > threshold

The final lane-marking score of this image (after cropping, filtering, blurring, and binarizing the original image) is the percentage of pixels that are white, i.e. lane markings.

In this case, 13.2% of the picture is filled by lane markings.
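For reference, the five steps above can be collected into a single function. This is a sketch that takes an already-loaded RGB array rather than a file name; the parameter defaults are the values used in the steps above:

```python
import scipy.ndimage as nd

def lane_marking_score(img, thrs=200, blur_px=40, binarize_thr=0.16):
    """Percentage of (cropped) pixels classified as lane markings."""
    # crop: from half the picture down to 90% of its height
    nrow = img.shape[0]
    img = img[nrow // 2 : nrow - nrow // 10, :, :]
    # filter: keep only white pixels across all three channels
    red, grn, blu = img.transpose(2, 0, 1)
    wind = (red > thrs) & (grn > thrs) & (blu > thrs)
    # blur the white-pixel mask with a Gaussian filter
    blurred = nd.gaussian_filter(1.0 * wind, blur_px)
    # binarize and score as the percentage of white pixels
    wreg = blurred > binarize_thr
    return 100.0 * wreg.mean()
```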


Detecting visible street defects

  1. Reading
import scipy.ndimage as nd
img = nd.imread(photoName)

2. Cropping

# number of rows and columns
nrow, ncol = img.shape[:2]
# slicing from half the picture down to 90% of its height removes
# the street ahead and also the bike tire
img = img[nrow//2:(nrow - nrow//10), :, :]

3. Filtering — in this case, a median filter is applied because it preserves edges!

md = nd.filters.median_filter
# median-filter the image with a 5-pixel window
md_blurPhoto = md(img, 5)
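To see why a median filter is the right choice here, compare it with a Gaussian filter on a toy signal (a made-up example, not project data): the median removes an isolated noise spike while keeping the step edge sharp, whereas the Gaussian smears both.

```python
import numpy as np
import scipy.ndimage as nd

# a 1-D step edge with one isolated "salt" noise spike at index 4
signal = np.array([0, 0, 0, 0, 255, 0, 0, 10, 10, 10, 10], dtype=float)

med = nd.median_filter(signal, size=3)     # spike removed, edge preserved
gau = nd.gaussian_filter(signal, sigma=1)  # spike and edge both smeared
```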

4. Converting the image from RGB to HSV and filtering it to preserve only darker (defect) pixels

import cv2
import numpy as np

# for 8-bit images, OpenCV's HSV ranges are H: 0-179, S and V: 0-255,
# so this mask keeps low-saturation, low-value (dark) pixels
lower = np.array([0, 10, 50])
upper = np.array([360, 100, 100])
# SciPy reads images as RGB, so convert RGB (not BGR) to HSV
hsv = cv2.cvtColor(md_blurPhoto, cv2.COLOR_RGB2HSV)
mask = cv2.inRange(hsv, lower, upper)
res = cv2.bitwise_and(hsv, hsv, mask=mask)

5. Edge detection using the Canny edge detector.

We also apply a 3x3 Gaussian filter over the image and then perform erosion/dilation operations on the remaining white pixels to remove noise.

edges_cv = cv2.Canny(res, 200, 400)
# blur the detected edges
blurred_edges = cv2.GaussianBlur(edges_cv, (3, 3), 0)
# only keep cracks that are near other cracks
# or that exceed a minimum size
bdilation = nd.morphology.binary_dilation
berosion = nd.morphology.binary_erosion
edges_2 = bdilation(berosion(blurred_edges, iterations=2), iterations=2)
defect_score = edges_2.mean()

The final defect score of 6.95% is the percentage of white pixels in the image that remain after the process above.
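The erosion/dilation pair in step 5 is a morphological “opening”: erosion deletes isolated white pixels, and the following dilation grows larger regions back. A minimal sketch on a made-up binary mask:

```python
import numpy as np
import scipy.ndimage as nd

mask = np.zeros((9, 9), dtype=bool)
mask[2:7, 2:7] = True   # a 5x5 connected "crack" region
mask[0, 8] = True       # an isolated noise pixel

# erosion removes the lone pixel; dilation restores the bulk
# of the large region
opened = nd.binary_dilation(nd.binary_erosion(mask))
```

The isolated pixel is gone while the interior of the large region survives, which is exactly the effect used to clean up the Canny output above.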

Again, these techniques are only a first iteration and are limited by variations in lighting and camera angle. Still, they have been adequate for measuring the quality of the bike lanes we have surveyed so far in NYC.

So far, the team has ridden close to 50 miles and collected 4,500 individual images of streets.

The next steps could include building a convolutional neural network (CNN) and running each of our (labelled) images through it, but that takes time and a little more expertise than I currently have. We would love to hear feedback, comments, or other ideas for doing this!

Thanks!
