Queen City Hackathon — Developing an Interactive Shiny Application to Benefit the City of Charlotte

Ryan Angi
Red Ventures Data Science & Engineering
6 min read · Jan 8, 2018

Charlotte is an amazing place to live and raise a family, but like every city it has its challenges: skyrocketing home costs, low economic mobility, rising crime rates, and increasing traffic incidents. The Queen City Hackathon was created to bring the impressive technical talent in this city together to use machine learning and data science to help solve some of these problems. Overwhelmingly successful, the Queen City Hackathon became the largest data science hackathon in the Southeast, with over 250 participants in its first year.

One of the central tenets of our culture and who we are here at Red Ventures is “leaving the woodpile higher than we found it.” When given the chance on a Friday afternoon to stay “a little late” and participate in a 24-hour hackathon, a few friends and I jumped on the opportunity to potentially create something really impactful for the city of Charlotte.

Data Gathering and Cleaning

The specific task of the hackathon was to “build an application using a data science model to improve the lives of Charlotteans.” As a resource, we were provided 7 datasets:

  • Charlotte Agenda News Articles
  • Charlotte Arrest Records
  • Charlotte Reddit Posts
  • Charlotte Traffic Incidents
  • Charlotte 311 Non-Emergency Requests
  • Health Claims Data
  • Speak Up Magazine Articles

With no clear solution in mind, our first step was to understand the available data through exploratory data analysis, which included plotting distributions of the variables to determine which data points were reliable and which were sparsely or unreliably coded. Once we had a good understanding of the datasets and the useful features we could incorporate into a solution, we became especially excited about the traffic incidents dataset: the data quality was high and it required less processing of unstructured text, leaving us more time to focus on modeling.
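
To make that concrete, here is a minimal sketch of the kind of check we ran in R. The file name and the severity column are stand-ins, not the dataset's actual schema.

```r
library(ggplot2)

# Illustrative EDA on the traffic incidents file; the file name and the
# `severity` column are hypothetical stand-ins for the provided data.
traffic <- read.csv("charlotte_traffic_incidents.csv", stringsAsFactors = FALSE)

# Share of missing or empty values per column: a quick reliability check
sapply(traffic, function(x) mean(is.na(x) | x == ""))

# Distribution of the raw severity score
ggplot(traffic, aes(x = severity)) +
  geom_histogram(bins = 30) +
  labs(title = "Distribution of accident severity",
       x = "Severity score", y = "Count")
```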

The traffic incident dataset included one row for each traffic incident with a “severity of accident” score that we converted to a binary variable (0 for less severe, 1 for highly severe). The dataset also included latitude and longitude coordinates, which allowed us to aggregate the data into roughly 1 km grid cells based on the location of each accident. From this we computed, for each grid cell, the share of accidents with each characteristic (e.g., 80% of the accidents in a cell occurred on coarse asphalt and 90% occurred with no stop sign or stop light present).
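
As a rough sketch of the gridding step, rounding coordinates to two decimal places gives cells of roughly 1 km at Charlotte's latitude. The column names and the severity cutoff below are assumptions, not the dataset's actual schema.

```r
library(dplyr)

severity_threshold <- 3  # hypothetical cutoff between less severe and highly severe

grid_features <- traffic %>%
  mutate(
    severe   = as.integer(severity >= severity_threshold),
    grid_lat = round(latitude, 2),   # ~1 km cells at Charlotte's latitude
    grid_lng = round(longitude, 2)
  ) %>%
  group_by(grid_lat, grid_lng) %>%
  summarise(
    n_accidents        = n(),
    pct_severe         = mean(severe),
    pct_coarse_asphalt = mean(surface == "coarse asphalt"),
    pct_no_signal      = mean(traffic_control == "none"),
    .groups = "drop"
  )
```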

The coordinates from the traffic dataset were then used to join to the coordinates from the 311 non-emergency reports. This extended our feature set from information solely about accidents within a grid cell to a more holistic picture of the roads in the area. The 311 dataset was mostly text, so we used NLP techniques to extract which reports contained information about road conditions (e.g., frequent reports of potholes or traffic lights being out). Geospatial mapping via coordinate systems is extremely useful in data gathering and cleaning, because the coordinates can serve as a key to join separate datasets into a fuller picture of a physical area.
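
A sketch of that join, again with illustrative names (`reports`, `description`) and a crude keyword filter standing in for the fuller NLP step:

```r
library(dplyr)

road_reports <- reports %>%
  mutate(
    grid_lat     = round(latitude, 2),
    grid_lng     = round(longitude, 2),
    # Simple keyword flag standing in for the fuller text processing
    road_related = grepl("pothole|signal|traffic light|street light",
                         tolower(description))
  ) %>%
  group_by(grid_lat, grid_lng) %>%
  summarise(n_road_reports = sum(road_related), .groups = "drop")

# The grid key built from the coordinates lets us join the two datasets
model_data <- grid_features %>%
  left_join(road_reports, by = c("grid_lat", "grid_lng")) %>%
  mutate(n_road_reports = coalesce(n_road_reports, 0L))
```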

In the exploratory process, one variable we became very interested in predicting was “severity of traffic accidents”. The important part to us was not only to predict which areas in Charlotte have a high likelihood of having a severe accident, but also to build an interpretable model to explain the factors that went into the prediction. City officials could then use these explanations to make the decision to pave a road, add a light, or increase police patrols in the area which could reduce the likelihood of a severe accident.

XGBoost Modeling

Once we completed our data gathering and cleaning, we split the data into training and validation sets and began iterating on models and tuning hyperparameters. After trying several different model types, we found that an XGBoost model fit the data best.
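
The fit itself looked roughly like the following. The feature list, label definition, and hyperparameters shown here are placeholders rather than the values we settled on after tuning.

```r
library(xgboost)

set.seed(42)
feature_cols <- setdiff(names(model_data), c("pct_severe", "grid_lat", "grid_lng"))
X <- as.matrix(model_data[, feature_cols])
y <- as.integer(model_data$pct_severe > 0.5)  # hypothetical grid-level label

# Simple train/validation split
train_idx <- sample(nrow(X), size = floor(0.8 * nrow(X)))
dtrain <- xgb.DMatrix(X[train_idx, ], label = y[train_idx])
dvalid <- xgb.DMatrix(X[-train_idx, ], label = y[-train_idx])

model <- xgb.train(
  params = list(objective = "binary:logistic", eval_metric = "auc",
                eta = 0.1, max_depth = 4),
  data = dtrain,
  nrounds = 200,
  watchlist = list(train = dtrain, valid = dvalid),
  early_stopping_rounds = 20,
  verbose = 0
)
```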

To give a quick, high-level overview of how XGBoost works, let's start with gradient boosted machines (GBMs). GBMs are similar to random forests, but instead of building many independent trees and averaging the results, GBMs build each subsequent tree on the residuals of the previous trees. XGBoost follows the same idea of boosting trees on residuals, but it adds regularization to prevent overfitting and improve performance on data it has not seen during training.

The XGBoost algorithm is fantastic at prediction and at finding interactions between features. Historically, however, it has been quite a black box when it comes to understanding the impact of the variables behind each prediction. To make our model interpretable, we leveraged an R package called xgboostExplainer. This package is similar to Local Interpretable Model-Agnostic Explanations (LIME), but built specifically for graphical understanding of XGBoost predictions. It is much more insightful than a feature importance plot because it shows the relative positive and negative effects of the features on each individual prediction.
This was useful for the hackathon project because we already know that repaving every road in Charlotte and adding lights throughout the city would generally prevent severe accidents. What we don't know is which specific streets and neighborhoods would benefit most from better lighting or more police patrols to deter speeding. Understanding this helps policymakers target each area with actionable items, which may differ from the actions needed to reduce accident severity in another neighborhood.
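
The xgboostExplainer workflow looked roughly like this. The function names come from the package's README, but treat the exact arguments as an approximation rather than a verbatim record of our code.

```r
library(xgboostExplainer)

# Build a lookup of per-feature log-odds contributions from the trained trees
explainer <- buildExplainer(model, dtrain, type = "binary")

# Break every validation prediction down into per-feature contributions
breakdown <- explainPredictions(model, explainer, dvalid)

# Waterfall plot for a single grid cell (row 1 of the validation set here)
showWaterfall(model, explainer, dvalid, X[-train_idx, ], 1, type = "binary")
```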

Shiny

The shiny R package is an amazing tool that lets a data scientist create a front-end web experience without relying on a front-end developer to write the JavaScript and CSS. Several functions in the package are wrappers around CSS and JavaScript, making it easy to integrate a UI with your R code, data frames, and models. Shiny also gives you the option (but does not require you) to write your own .css or .js files to define class styles and further customize the application. This is perfect for a hackathon, where one is constrained by time.
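
A minimal skeleton of the kind of app we built; the IDs and layout here are illustrative, not the app's actual code.

```r
library(shiny)
library(leaflet)

ui <- fluidPage(
  titlePanel("Charlotte severe-accident risk"),
  fluidRow(
    column(8, leafletOutput("map", height = 600)),
    column(4, plotOutput("explanation"))
  )
)

server <- function(input, output, session) {
  output$map <- renderLeaflet({
    leaflet() %>% addTiles()   # markers are layered on in the next snippet
  })
  output$explanation <- renderPlot({
    plot.new()                 # replaced by the explainer waterfall on click
  })
}

shinyApp(ui, server)
```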

We opted to use the leaflet package in R to produce the background map and plotted our latitude and longitude points on top. We then scored each point on our grid with the XGBoost model, which produced a prediction between 0 and 1 for the likelihood of a severe accident occurring in that 1 km area. A gradient color palette marks a less severe prediction with a green dot and a more severe prediction with a red dot. Reactive values and observe functions inside shiny let a user select a data point, and the xgboostExplainer plot on the right-hand side of the application updates with an explanation of what drove the prediction at that point.
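
Inside the server function, the map layer and click handling looked roughly like this. It assumes `model_data$pred` holds the XGBoost score for each grid cell and reuses the objects from the earlier sketches; treat it as an outline rather than the exact app code.

```r
# Green-to-red gradient over the 0-1 prediction range
pal   <- colorNumeric(palette = c("green", "red"), domain = c(0, 1))
X_all <- as.matrix(model_data[, feature_cols])
d_all <- xgb.DMatrix(X_all)

output$map <- renderLeaflet({
  leaflet(model_data) %>%
    addTiles() %>%
    addCircleMarkers(
      lng = ~grid_lng, lat = ~grid_lat,
      color = ~pal(pred), radius = 6, stroke = FALSE, fillOpacity = 0.8,
      layerId = ~paste(grid_lat, grid_lng)  # lets us recover the clicked cell
    )
})

# React to marker clicks and redraw the explainer plot for that grid cell
clicked_row <- reactiveVal(NULL)

observeEvent(input$map_marker_click, {
  click <- input$map_marker_click
  clicked_row(which(paste(model_data$grid_lat, model_data$grid_lng) == click$id))
})

output$explanation <- renderPlot({
  req(clicked_row())
  showWaterfall(model, explainer, d_all, X_all, clicked_row(), type = "binary")
})
```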

Shiny dashboard: colors show predicted severity of accidents

With the variety and quality of data science tools like xgboostExplainer and Shiny, a data scientist can do a lot with limited time. In one night, our team was able to clean and understand the data, build a machine learning model, and host an interactive application that turns model insights into policy suggestions. Using XGBoost and Shiny, we were able to quickly build an accurate model and expose its insights to others.

References

https://shiny.rstudio.com/
https://rstudio.github.io/leaflet/
https://medium.com/applied-data-science/new-r-package-the-xgboost-explainer-51dd7d1aa211
https://github.com/AppliedDataSciencePartners/xgboostExplainer
https://github.com/marcotcr/lime
http://queencityhackathon.com/
