Day 11: Cleaner data, .encode, and beyond!

Roo Harrigan · Published in Making Athena · Nov 6, 2015

>>> Brief summary

Today was the day I gave up my fight against the Wikipedia API and moved to a friendlier method of data aggregation. I found a nice open-source API called REST Countries on mashape.com, and in an afternoon I rewrote my seed.py and tweaked my routes to work with the new data. Because the API returned all the special characters intact, I was able to .encode('utf-8') them before storing them in my database, and they display quite nicely in my new and improved quiz:

[Screenshot of the quiz: sweet, sweet accent mark!]
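For posterity, here’s roughly what that seeding step can look like. The endpoint URL, field names, and Country model below are stand-ins rather than my actual seed.py; the important part is the .encode('utf-8') call, which (in Python 2, at least) turns each unicode string into UTF-8 bytes before the insert:

```python
import requests

from model import Country, db  # stand-in names for my model module

# Stand-in endpoint; the real field names depend on the API response.
REST_COUNTRIES_URL = "https://restcountries.com/v2/all"

def seed_countries():
    """Fetch country data and store it, UTF-8 encoding each text field."""
    for entry in requests.get(REST_COUNTRIES_URL).json():
        country = Country(
            name=entry["name"].encode("utf-8"),
            capital=entry.get("capital", "").encode("utf-8"),
            demonym=entry.get("demonym", "").encode("utf-8"),
        )
        db.session.add(country)
    db.session.commit()
```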

I now have country, continent, capital, primary language, and demonym information for the 193 member states of the U.N., plus Palestine, which means tomorrow I can make more kinds of quizzes!

I also created a new model class (and table) in my database that stores a quizevent every time someone takes a quiz, so I can start gathering user-level information and Athena-level data as well. If I can put together a quick graph of that info tomorrow, then MVP will be complete!
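A rough sketch of what that model might look like, with illustrative column names rather than necessarily the ones I ended up with:

```python
from flask_sqlalchemy import SQLAlchemy

db = SQLAlchemy()

class QuizEvent(db.Model):
    """One row per quiz taken: raw material for user- and Athena-level stats."""
    __tablename__ = "quizevents"

    quizevent_id = db.Column(db.Integer, primary_key=True, autoincrement=True)
    user_id = db.Column(db.Integer, db.ForeignKey("users.user_id"))
    quiz_type = db.Column(db.String(50))   # e.g. "capitals", "demonyms"
    score = db.Column(db.Integer)
    taken_at = db.Column(db.DateTime, server_default=db.func.now())
```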

>>> Where I struggled

Creating a relationship between two fields in separate tables in my PostgreSQL database. The SQLAlchemy documentation is hard to read and the examples are not intuitive: I can write the pure SQL I need to update the quizzes_taken column for each user, but I can’t figure out how to get it into my Python.
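Writing it down for tomorrow-me: two routes that should work, assuming the User and QuizEvent names sketched above. SQLAlchemy’s relationship() declares the link between the tables, and session.execute() will run the pure SQL I already know how to write:

```python
from sqlalchemy import text

# Option 1: declare the relationship once and let SQLAlchemy manage it.
# (Assumes the db object and QuizEvent class sketched above.)
class User(db.Model):
    __tablename__ = "users"

    user_id = db.Column(db.Integer, primary_key=True)
    quizzes_taken = db.Column(db.Integer, default=0)

    # backref gives each QuizEvent row a .user attribute for free
    quizevents = db.relationship("QuizEvent", backref="user")

# Option 2: run the pure SQL from Python via the session.
def increment_quizzes_taken(user_id):
    db.session.execute(
        text("UPDATE users SET quizzes_taken = quizzes_taken + 1 "
             "WHERE user_id = :uid"),
        {"uid": user_id},
    )
    db.session.commit()
```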

Also, I realized that this is the type of thing I might want to do at the database level (every time a new quizevent gets added, I want to increment that user’s quizzes-taken count by 1), but I couldn’t get a foothold on where to start learning about that. Back to the PostgreSQL docs tomorrow.
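From what I can tell, the database-level version of this is a trigger. A sketch, again with assumed table and column names, run once from Python (or pasted straight into psql):

```python
from model import db  # assumed, as above

# A PL/pgSQL function plus a trigger that fires it after every insert
# into quizevents; table and column names are assumptions.
TRIGGER_DDL = """
CREATE OR REPLACE FUNCTION bump_quizzes_taken() RETURNS trigger AS $$
BEGIN
    UPDATE users
       SET quizzes_taken = quizzes_taken + 1
     WHERE user_id = NEW.user_id;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER quizevent_added
AFTER INSERT ON quizevents
FOR EACH ROW EXECUTE PROCEDURE bump_quizzes_taken();
"""

db.session.execute(TRIGGER_DDL)  # one-time setup, e.g. at the end of seed.py
db.session.commit()
```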

>>> Thoughtful takeaway

I spent the last 30 minutes beating myself up about what I didn’t accomplish today and about giving up on Wikipedia, but upon reflection while writing this post, I realized that today was a momentous day!

To all my Hackbrighters — don’t forget to stop and look around.

[Photo: what I’m wearing right now.]
