Basic Analytics in Plenario: Amendment 64

A look at crime in Denver, Colorado

Colorado’s now-famous Amendment 64 (recreational marijuana legalization) was the subject of national scrutiny for much of 2012, having been proposed in February and approved by voters in November. Proponents of the amendment claimed approval would allow Colorado’s police forces to focus on violent crime instead of petty acts like marijuana possession. Opponents argued that legalization would increase illegal drug use, stretching law enforcement even thinner than it already was.

Amendment 64 went into effect on January 1st, 2014; here’s the first Colorado citizen to exercise his new privilege (from imgur):

Sean Azzariti makes the first legal recreational marijuana purchase in the United States.

Sean certainly doesn’t look too scared about an uptick in crime. With a bit of analytics, we’ll see whether his euphoria should be dampened. We can use the Plenario API to answer this question: was Amendment 64 effective in lowering drug crime?

Let’s look at the state of drug crime before Amendment 64. Consult the Denver PD list of offense codes (that’s a download link) for our filter values: codes from 3500 to 3600 cover the vast majority of drug-related crimes (possession, manufacture, sale, etc.). With that knowledge, let’s call Plenario:

http://plenar.io/v1/api/timeseries/?obs_date__le=2014-01-01&dataset_name=crime_csv&obs_date__ge=2012-01-01&agg=week&crime_csv__filter={"op":"and","val":[{"op":"ge","col":"offense_code","val":"3500"},{"op":"le","col":"offense_code","val":"3600"}]}
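If you’d rather not hand-edit that long query string, here’s one way to assemble it in Python. To be clear, `build_url` and its defaults are my own naming, not part of Plenario’s API, and the compatibility import just keeps the snippet runnable on both Python 2 and 3:

```python
import json

try:
    from urllib.parse import urlencode   # Python 3
except ImportError:
    from urllib import urlencode         # Python 2

BASE = "http://plenar.io/v1/api/timeseries/"

def build_url(start, end, lo="3500", hi="3600"):
    # Build the same "and" filter as the hand-written URL above:
    # 3500 <= offense_code <= 3600, aggregated by week.
    crime_filter = {
        "op": "and",
        "val": [
            {"op": "ge", "col": "offense_code", "val": lo},
            {"op": "le", "col": "offense_code", "val": hi},
        ],
    }
    params = {
        "dataset_name": "crime_csv",
        "obs_date__ge": start,
        "obs_date__le": end,
        "agg": "week",
        "crime_csv__filter": json.dumps(crime_filter),
    }
    return BASE + "?" + urlencode(params)

print(build_url("2012-01-01", "2014-01-01"))
```

The only difference from the URL above is that `urlencode` percent-escapes the quotes and braces in the JSON filter, which most HTTP clients prefer anyway.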

Now we’ve got the first half of our counts. Brilliant. Change the obs_date__ge and obs_date__le values to reflect a two-year shift (2014–16 instead of 2012–14) and call again. I’ll put the GET request here for uniformity’s sake:

http://plenar.io/v1/api/timeseries/?obs_date__le=2016-01-01&dataset_name=crime_csv&obs_date__ge=2014-01-01&agg=week&crime_csv__filter={"op":"and","val":[{"op":"ge","col":"offense_code","val":"3500"},{"op":"le","col":"offense_code","val":"3600"}]}

So now we have two JSON responses with our counts inside. The data is machine-readable and ready to be analyzed, but the ‘how’ may not be immediately apparent. A small snippet of Python contains our solution.


Our code is going to parse our JSON, leaving us with only the counts we want. If you haven’t installed Python yet (we’re using Python 2), do so now using Anaconda; a text editor will be helpful as well. Figure out where on your computer you’ll be working. I have a directory called plenario_api for everything relating to calls and the analysis I’m currently working on. Once you’re there, boot up your text editor and study this code, noting the comments (marked with ###):
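The original snippet isn’t embedded here, so here’s a minimal sketch of what parser.py could look like. The two URLs are the calls from above; the response schema (an "objects" list whose rows each carry a "count" field) is my assumption about Plenario’s timeseries format, so verify it against a live response before relying on it:

```python
### A sketch of parser.py -- not necessarily the author's exact code. ###
### Assumes Plenario nests weekly rows under an "objects" key, each   ###
### row carrying a "count" field.                                     ###
import json

try:
    from urllib.request import urlopen   # Python 3
except ImportError:
    from urllib2 import urlopen          # Python 2

from scipy.stats import ttest_ind

PRE_URL = 'http://plenar.io/v1/api/timeseries/?obs_date__le=2014-01-01&dataset_name=crime_csv&obs_date__ge=2012-01-01&agg=week&crime_csv__filter={"op":"and","val":[{"op":"ge","col":"offense_code","val":"3500"},{"op":"le","col":"offense_code","val":"3600"}]}'
POST_URL = 'http://plenar.io/v1/api/timeseries/?obs_date__le=2016-01-01&dataset_name=crime_csv&obs_date__ge=2014-01-01&agg=week&crime_csv__filter={"op":"and","val":[{"op":"ge","col":"offense_code","val":"3500"},{"op":"le","col":"offense_code","val":"3600"}]}'

def fetch(url):
    ### GET a Plenario URL and parse the JSON body. ###
    return json.loads(urlopen(url).read())

def extract_counts(response):
    ### Keep only the weekly counts; drop the dates and metadata. ###
    return [row["count"] for row in response["objects"]]

def main():
    pre = extract_counts(fetch(PRE_URL))
    post = extract_counts(fetch(POST_URL))
    ### The last line is the entire analysis: a two-sample t-test. ###
    return ttest_ind(pre, post)
```

Calling main() from IPython performs both requests and returns a (t-statistic, p-value) pair.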

This function will extract the weekly crime counts from Plenario’s response. The response includes more data, like the week each individual count belongs to; that would be handy if we wanted to make a timeseries graph, but we only want the counts so we can run a t-test on them. This parser goes a step further for us: its last line actually performs the extent of our analysis.

That program is located in a GitHub repository I made for this blog; you can clone that repository (instructions in a second, if needed) and it will deposit the script into a new folder called blogstuff. The program, fittingly, is named parser.py. Open up your terminal/command prompt and navigate to the folder of your choice. Then follow along below, using the comments as guidelines; “In” is a line of IPython input, “Out” displays output:

### The first two lines of code are optional; feel free to use  ###
### your text editor to make your own .py files in your folder. ###
git clone https://github.com/carhart/blogstuff.git
### Now, change your directory to blogstuff...                  ###
cd blogstuff
### Install an analytics package (scipy) using pip...           ###
pip install scipy
### And get IPython up and running.                             ###
ipython
### IPython will configure itself, then you're ready to work.   ###
### We only require one actual line of code within Python.      ###
In [1]: run parser.py
Out[1]: (array(-10.73), 1.09e-21)

In the “Out[1]:” line, we see our results: a large negative t-statistic and an incredibly small p-value. Thus, the pre- and post-amendment counts are very clearly different from each other at any reasonable significance level. You may notice that we ran a two-tailed test, which on its own technically can’t tell us whether crime increased or decreased. Luckily for us, scipy preserves the sign of the statistic: halving the two-tailed p-value gives the one-tailed p-value, and the sign tells us which tail. Given a sufficiently small p-value (less than 0.05, typically), a positive t-statistic lets us reject the null hypothesis for an upper-tailed (>) test, while a negative statistic lets us reject the null for a lower-tailed (<) test. Since we passed the pre-amendment counts in first, our negative statistic means the earlier weeks had the lower counts.
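To see that sign convention concretely, here’s a tiny synthetic example (the numbers are made up to illustrate the point, not Denver data): ttest_ind(a, b) comes out negative when a’s mean is below b’s, and flipping the argument order flips the sign.

```python
from scipy.stats import ttest_ind

# Made-up samples purely to illustrate the sign convention.
low  = [10, 11, 9, 10, 10, 11, 9, 10]    # mean 10
high = [15, 16, 14, 15, 15, 16, 14, 15]  # mean 15

t_ab, p = ttest_ind(low, high)   # first sample smaller -> t is negative
t_ba, _ = ttest_ind(high, low)   # order flipped -> same magnitude, positive

print(t_ab < 0, t_ba > 0)
```

So a negative statistic with a tiny p-value points at the first sample being the smaller one, which is exactly the situation our drug-crime counts produced.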

Therefore, it does not appear that Amendment 64 lowered drug crime through the beginning of 2016. Really harshes the mellow. Further econometric analysis of other factors at play in Colorado during this timespan would shine more light on the issue, but at a basic level, our results paint a bleak picture for the amendment’s supporters.


The same analysis can be done with different types of crime by altering the API calls in the .py files as needed. Use different offense codes for violent crime, or use an entirely different column; for example, you can filter the dataset with {"op":"eq","col":"offense_category_id","val":"public-disorder"} to find public-disorder data, which includes property damage offenses. With a couple of time filters, you’d be able to see whether property damage crime increased around the time of the Denver Broncos’ Super Bowl 50 win and subsequent championship parade (I’d assume so).
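As a sketch of that variation, here’s how the public-disorder filter could be dropped into a call. The date window (a month around the February 2016 parade) and the variable names are my own assumptions, not values from the original analysis:

```python
import json

try:
    from urllib.parse import urlencode   # Python 3
except ImportError:
    from urllib import urlencode         # Python 2

# Swap the offense_code range for an equality filter on a category column.
category_filter = {"op": "eq", "col": "offense_category_id", "val": "public-disorder"}

params = {
    "dataset_name": "crime_csv",
    "obs_date__ge": "2016-02-01",   # assumed window around the Feb 2016 parade
    "obs_date__le": "2016-03-01",
    "agg": "day",
    "crime_csv__filter": json.dumps(category_filter),
}

url = "http://plenar.io/v1/api/timeseries/?" + urlencode(params)
print(url)
```

Aggregating by day rather than week should make a short-lived spike around the parade easier to spot.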

The amount of pure analytics contained herein is minimal. It’s achieved in 64 characters using one function. The scipy library is absolutely vast, and data from Plenario (when properly parsed) can be used to obtain meaningful results from much of it. Find some interesting modules on your own and test them out — even experienced ‘program-alysts’ often discover new ways to solve problems. Enjoy yourself!


Helpful links for Python analytics: