My Python journey, Pt. 1: Didn’t win this race

Like many people all over the world, I want to become a data scientist. I joined Chicago Python Users Group (ChiPy) in fall of 2017 to up my Python game in pursuit of this goal.

Shortly after I started attending the ChiPy meetings, I heard about the ChiPy mentorship program, which pairs Python learners of all levels with mentors who are generally professional developers. Over the course of about four months, you meet weekly with your mentor to complete a Python project of your choice. You also write three blog posts (number one here!) and attend the ChiPy Project Nights to participate in special workshops on different Python topics.

I’m not an experienced developer, but I had already been working on a Python project of my own, so this mentorship program seemed like the perfect fit for me. I jumped at the chance to have a real developer help me complete the project; I applied and was accepted.

My project idea came from my favorite economics professor at the University of Washington. What we had in mind was something like FRED’s site, but for the state of Washington. FRED (Federal Reserve Economic Data) publishes all sorts of U.S. economic indicators in a very user-friendly, interactive format.

FRED has a great website that allows users to interactively view a large variety of public economic data.

I had two main data sources: the Bureau of Economic Analysis and the U.S. Census Bureau. Both have a large collection of data that is made available via Web API. I decided to tackle the BEA first.

My first accomplishment was this simple (but very cool to me) Python script that automatically made a data request to the BEA API, grabbed the data, and stored it in a file. This example is the GetParameters method of the API, which tells you what parameters are available for a specific dataset.
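The original script was shared as a screenshot, so here is a minimal sketch of what such a request might look like. (Assumptions: the BEA API's actual method name for this call is `GetParameterList`, the key and dataset name are placeholders, and the file path is made up.)

```python
import json
import urllib.request
from urllib.parse import urlencode

BEA_URL = "https://apps.bea.gov/api/data"


def build_request_url(api_key, dataset):
    """Build a GetParameterList request URL for a given BEA dataset."""
    params = {
        "UserID": api_key,
        "method": "GetParameterList",
        "datasetname": dataset,
        "ResultFormat": "JSON",
    }
    return f"{BEA_URL}?{urlencode(params)}"


def fetch_parameters(api_key, dataset, out_path):
    """Request the parameter list for a dataset and save the raw JSON to a file."""
    url = build_request_url(api_key, dataset)
    with urllib.request.urlopen(url) as resp:
        data = json.load(resp)
    with open(out_path, "w") as f:
        json.dump(data, f, indent=2)
    return data
```

For example, `fetch_parameters("MY_KEY", "NIPA", "nipa_params.json")` would ask the API which parameters the NIPA dataset accepts and write the response to disk.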

My only technical background is in SQL, so I decided to import all of the metadata about the BEA datasets into a local PostgreSQL database, just because it was easier for me to work with.

My PostgreSQL view of the parameters available for each BEA dataset.

Here is the code I used to migrate the JSON data into Postgres:
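The original migration code was also a screenshot, so below is a rough sketch of that step rather than the code itself. It assumes the `psycopg2` driver, a hypothetical `bea_parameters` table, and the nested `BEAAPI → Results → Parameter` shape that BEA parameter responses use.

```python
import json


def extract_parameter_rows(raw, dataset):
    """Flatten a BEA parameter-list JSON response into
    (dataset, name, data_type, description) tuples ready for insertion."""
    params = raw["BEAAPI"]["Results"]["Parameter"]
    return [
        (
            dataset,
            p.get("ParameterName"),
            p.get("ParameterDataType"),
            p.get("ParameterDescription"),
        )
        for p in params
    ]


def load_into_postgres(json_path, dataset, dsn):
    """Read a saved JSON file and bulk-insert its parameter rows into Postgres."""
    import psycopg2  # imported here so the parsing above runs without the driver

    with open(json_path) as f:
        rows = extract_parameter_rows(json.load(f), dataset)

    with psycopg2.connect(dsn) as conn, conn.cursor() as cur:
        cur.executemany(
            "INSERT INTO bea_parameters"
            " (dataset, name, data_type, description)"
            " VALUES (%s, %s, %s, %s)",
            rows,
        )
```

Splitting the flattening out of the database call keeps the fiddly JSON-unpacking testable on its own, without a running Postgres instance.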

This was about as far as I got before getting stuck on the web-development portion. Then, when I checked the BEA site at the beginning of the mentorship, I found they had released an interactive data feature.

Mapping is one of the features that the BEA recently released.

This was exactly what I was planning on creating, except that the BEA developers made it look much nicer than I probably would have.

Jay, my mentor, said this: “Write your blog post about the fact that the BEA beat you to it. In real life, this happens all the time.”

So now I’m in the market for a new project, but I think it was a lesson worth learning: if you have a good idea, there’s a solid chance other people are working on it, too.

Huge thanks to Ray, who runs the ChiPy mentorship, and Jay, my mentor, for the time and effort they give to the program.