Visualization Process Blog

The software we were given to experiment with was Tableau. As a senior in high school, I had some exposure to the program through a friend who worked with it. I was on my high school's Robotics team, and in particular, I worked on the data team. Because our team was one of the biggest around, a friend and I had to collect large amounts of data about other robots and quantify it all using Tableau. That experience was somewhat helpful for me, and definitely contributed to my sort of "try it till it breaks" approach.

When I was originally given the data set, I began by examining all of the different parameters. A major step for me was discovering the different ways I could input and arrange the data. The quick visualization panel was a major breakthrough, as it showed me what types of data were necessary to create different visualizations. Once I got comfortable with that panel, I started to really enjoy the software, and began to toss as many different types of data into the program as possible. This let me open up and start comparing everything to everything. I came up with interesting ideas, such as comparing the geographical locations of bike thefts to the geographical locations of all the bike racks, but it became difficult when I struggled to combine two different data sets. I encountered some error messages that I didn't fully understand, but of course, with software this powerful it is difficult to draw the line between ease of use and the amount of information open on the screen. In general, the process of creating maps went very well for me, as I found geographic data the easiest type to work with.

In general, my experience with this program went really well. Kerem's studio session exposed me to the ways Tableau can be utilized whilst giving me enough free space to explore new parts of the software on my own. A question that it raised for me was how people go about testing which presentation of information is the most effective. After seeing the demo of the program, I was instantly curious about what it would be like to create visualizations with big data.

Wildcard: What was one visualization that you wish you could have put together but were unable to, and who would the user group be?

One of the visualizations that I wish I could have done pertained (once more) to combining data sets. If I had been able to combine data sets, I would have created a visualization comparing the color of the bike racks to the geographical locations of incidents involving the homeless. This visualization, whilst seeming a tad outlandish and goofy, would be interesting to see. Part of my interest in this project was all the outlandish data combinations one could make to attempt to point at strange observations.

A critical part of maintaining any form of civilization or group of people is the ability to keep its members informed and on the same page about all sorts of critical metrics. These can vary from GDP to the abundance or shortage of drinking water, and all have an impact on the way people in society form opinions. When the only ways to keep people informed are television news, newspapers, and online articles, the need for effective data visualizations is higher than ever. Especially now that we have the technology to quantify and summarize big data, the use of that software is absolutely essential to keeping the population informed. As people take in the news, they tend to ignore the fact that different sources might attempt to distort the data. By providing powerful visualization software such as this to the public, we give people the ability to create and examine data in whichever ways they see fit, and this is ultimately the best way to go.

Final visualizations can be viewed HERE