<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:cc="http://cyber.law.harvard.edu/rss/creativeCommonsRssModule.html">
    <channel>
        <title><![CDATA[Stories by Sanchit Kumar on Medium]]></title>
        <description><![CDATA[Stories by Sanchit Kumar on Medium]]></description>
        <link>https://medium.com/@sanchit_kumar?source=rss-49f9005ee6d5------2</link>
        <image>
            <url>https://cdn-images-1.medium.com/fit/c/150/150/1*0KvakQwiNH_aBm8eVBj_MA.jpeg</url>
            <title>Stories by Sanchit Kumar on Medium</title>
            <link>https://medium.com/@sanchit_kumar?source=rss-49f9005ee6d5------2</link>
        </image>
        <generator>Medium</generator>
        <lastBuildDate>Fri, 15 May 2026 16:11:11 GMT</lastBuildDate>
        <atom:link href="https://medium.com/@sanchit_kumar/feed" rel="self" type="application/rss+xml"/>
        <webMaster><![CDATA[yourfriends@medium.com]]></webMaster>
        <atom:link href="http://medium.superfeedr.com" rel="hub"/>
        <item>
            <title><![CDATA[My Experience with Gephi — Visualizing Game of Thrones: A Song of Ice and Fire]]></title>
            <link>https://medium.com/@sanchit_kumar/my-experience-with-gephi-visualizing-game-of-thrones-a-song-of-ice-and-fire-61587eefba9d?source=rss-49f9005ee6d5------2</link>
            <guid isPermaLink="false">https://medium.com/p/61587eefba9d</guid>
            <category><![CDATA[data-visualization]]></category>
            <category><![CDATA[game-of-thrones]]></category>
            <category><![CDATA[gephi]]></category>
            <category><![CDATA[network]]></category>
            <category><![CDATA[data-analysis]]></category>
            <dc:creator><![CDATA[Sanchit Kumar]]></dc:creator>
            <pubDate>Wed, 07 Nov 2018 17:07:48 GMT</pubDate>
            <atom:updated>2018-11-07T17:07:48.248Z</atom:updated>
<content:encoded><![CDATA[<h3>My Experience with Gephi — Visualizing Game of Thrones: A Song of Ice and Fire</h3><p>Using data visualization techniques and the Gephi tool to visually analyze Game of Thrones characters and their interactions</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*cV72-6kLj-W9aa6t.jpg" /></figure><h3>Introduction</h3><h4>The Idea</h4><p>It is a well-known fact that George R.R. Martin’s series of novels and Game of Thrones, its television counterpart, have captivated fans all over the world. With a plethora of characters and plots, the story is highly intricate and incredibly alluring. As a huge fan myself, having participated in many discussions with friends predicting what’s going to happen next, I saw an exciting opportunity to perform a Game of Thrones character network analysis using Gephi to figure out which main protagonists emerge and how they interact with others throughout the series of books.</p><h4>The Inspiration</h4><p>I began my research by searching for articles on network data visualizations, and one of the best sources I could find was <a href="http://www.martingrandjean.ch"><strong>Martin Grandjean’s blog</strong></a>. Martin Grandjean is a Researcher in History and Digital Humanities at the University of Lausanne, and his blog consists of numerous articles on data analysis and visualization. In one of his posts, ‘<a href="http://www.martingrandjean.ch/complex-network-visualisation-interdisciplinarity/"><strong>Complex network visualization for the history of interdisciplinarity: Mapping research funding in Switzerland</strong></a>’, Martin analyzed various interdisciplinary Swiss National Science Foundation projects and their funding in Swiss Francs. The result was spectacular and really helped shape my own visualization. 
I found Martin’s use of a dark background, which creates excellent visual contrast and boosts the readability of the network, highly interesting. The legend added to the visualization also made everything easy to understand. Furthermore, labeling only the significant nodes created a clutter-free visualization, making it more usable.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*OoYkT_OkHFPFdp_O.png" /><figcaption><em>Martin Grandjean — Mapping research funding in Switzerland</em></figcaption></figure><p>While researching Game of Thrones network data analysis, I came across <a href="https://networkofthrones.wordpress.com/the-novels/a-song-of-ice-and-fire/"><strong>Network of Thrones — A Song of Math and Westeros</strong></a>, a blog by <a href="https://www.macalester.edu/~abeverid/index.html"><strong>Andrew Beveridge</strong></a> (Associate Professor at Macalester College). His blog features a remarkably detailed network analysis of Game of Thrones characters, covering a season-wise analysis of the television series as well as a novel-wise analysis of each of George R. R. Martin’s books. This was a very interesting way to highlight the differences between the novels and the television series. The use of different colors for nodes and edges was also a great way to signify clusters.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*w6YTxyvMsa_L-RCA.png" /><figcaption><em>Network of Thrones — Andrew Beveridge</em></figcaption></figure><p>Yet another amazing article I came across was <a href="http://www.martingrandjean.ch/connected-world-air-traffic-network/"><strong>Connected World: Untangling the Air Traffic Network</strong></a>, again by <a href="http://www.martingrandjean.ch"><strong>Martin Grandjean</strong></a>, in which he maps the connections between all airports around the world in a spectacular network visualization. 
What I found most interesting was that his visualization closely resembled a geographically accurate representation of the dataset. This showed how clusters in a network graph can mirror the geographical locations of the nodes; as Martin Grandjean puts it — “<em>A network, in its very essence, is already a map</em>”.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/759/0*EswWtMfcgk2jBg3x.gif" /><figcaption><em>Connected World: Untangling the Air Traffic Network — Martin Grandjean</em></figcaption></figure><h3>Materials</h3><h4>Tools -</h4><p><a href="https://gephi.org/"><strong>Gephi</strong></a> (Windows version 0.9.2) — Open-source software for network visualization and analysis. I used it to create and analyze my network data visualization.</p><p><a href="https://products.office.com/en-us/excel"><strong>Microsoft Excel</strong></a> — Spreadsheet software by Microsoft, part of the MS Office suite. I used it to view and understand the dataset CSV files before plugging them into Gephi.</p><h4>Dataset -</h4><p>The dataset I used was taken from <a href="https://github.com/mathbeveridge/asoiaf"><strong>Andrew Beveridge’s GitHub — A Song of Ice and Fire Page</strong></a>.</p><p>The dataset was created by connecting two characters whenever their names (or nicknames) appeared within 15 words of one another in the series of “A Song of Ice and Fire” novels (volumes 1 through 5). The edge weights correspond to the number of interactions between these characters. 
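As an aside, that construction rule can be sketched in a few lines of Python. This is only an illustration of the idea, not Andrew Beveridge’s actual pipeline; the tokenization, the toy sentence, and the character names below are made up for the example.

```python
from collections import Counter
from itertools import combinations

def cooccurrence_edges(words, characters, window=15):
    """Weight an undirected edge by how often two character names
    appear within `window` words of one another."""
    # Word positions at which a character name occurs
    mentions = [(i, w) for i, w in enumerate(words) if w in characters]
    weights = Counter()
    for (i, a), (j, b) in combinations(mentions, 2):
        if a != b and j - i <= window:
            # Sort the pair so (a, b) and (b, a) count as the same edge
            weights[tuple(sorted((a, b)))] += 1
    return weights

# Toy sentence; the real input would be the full tokenized novels
words = "Tyrion spoke to Jon about the Wall while Jon watched".split()
edges = cooccurrence_edges(words, {"Tyrion", "Jon"})
```

Here both mentions of Jon fall within 15 words of Tyrion, so the Jon–Tyrion edge gets a weight of 2, matching the “count of interactions” semantics of the dataset.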
The details of the dataset are -</p><ul><li>Nodes — 796 (Characters in “A Song of Ice and Fire”)</li><li>Edges — 2823 (Undirected interactions between these characters)</li></ul><h3>Process</h3><p><strong><em>Feeding the dataset into Gephi and running some statistics</em></strong></p><p>The first step in my process was understanding the two CSV files, the node table and the edge table, which I downloaded from Andrew Beveridge’s GitHub page. Once the dataset made sense to me, I imported the node and edge tables into Gephi. By default, the tool generated a visualization, but it did not make much sense, as all the nodes were in one cluttered mess. Therefore, to improve usability, I needed to make some layout and aesthetic improvements.</p><p><strong><em>Improving the layout of the visualization</em></strong></p><p>To improve the visualization, I ran the <a href="https://github.com/gephi/gephi/wiki/Force-Atlas-2"><strong>Force Atlas 2 algorithm</strong></a> multiple times, testing various combinations of its parameters. I found that checking the ‘Dissuade Hubs’ setting allowed the node clusters to spread out, making the whole network appear cleaner. I also set the scaling factor to 100.0 and checked the ‘Prevent Overlap’ setting to ensure that everything was nicely spread out, with enough room to accommodate labels and no overlapping nodes. Furthermore, I set the ‘Gravity’ to 100.0 to pull nodes that had drifted very far apart back toward the center of the spatialization. Although Force Atlas 2 made the layout a lot better, there was still plenty of room for improvement, as the visualization remained quite difficult to understand. 
To remedy this, I ran the <a href="https://www.packtpub.com/mapt/book/big_data_and_business_intelligence/9781783987405/3/ch03lvl1sec35/using-the-expansion-layout-algorithm"><strong>Expansion algorithm</strong></a> several times, scaling by 2x each time, until the nodes were nicely spread out without making the whole visualization too large. Finally, I also ran the <a href="https://www.packtpub.com/mapt/book/big_data_and_business_intelligence/9781783987405/3/ch03lvl1sec39/using-the-label-adjust-layout-algorithm"><strong>Label Adjust algorithm</strong></a> to make sure there was enough room for each node’s label, without any overlap.</p><p><strong><em>Playing around with colors and weights</em></strong></p><p>Next, I started thinking about how the use of colors and weights could improve the visualization. I applied a color partition on the nodes by ‘modularity class’ and selected an appropriate palette. This gave the same color to groups of nodes that were more densely connected to each other than to the rest of the network. I also ranked node size by ‘degree’, so that nodes with more connections appeared larger. I set the labels to grow in proportion to node size as well, so that more prominent nodes were easier to read. Inspired by Martin Grandjean’s Connected World: Untangling the Air Traffic Network, I used a black background for my visualization to create good contrast. This way, the nodes and edges were clearer and more visually appealing. I gave the labels a standard color of white, which is easy to read against the node and edge colors as well as the black background. Finally, I set the color of each edge to be a mix of the colors of its node pair. This way, it was easy to tell just by looking at an edge whether it connected nodes belonging to the same cluster (based on modularity class) or to different ones. 
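The modularity-class partition that Gephi computes (it uses the Louvain method) can also be reproduced outside the tool. Below is a minimal sketch using networkx; the tiny graph and its edge weights are invented for illustration, and greedy modularity maximization stands in for Gephi’s Louvain implementation.

```python
import networkx as nx
from networkx.algorithms import community

# Toy undirected, weighted graph standing in for the character network
G = nx.Graph()
G.add_weighted_edges_from([
    ("Jon", "Sam", 5), ("Jon", "Tyrion", 3),
    ("Tyrion", "Cersei", 4), ("Cersei", "Jaime", 6),
])

# Modularity classes: greedy modularity maximization, similar in spirit
# to Gephi's 'modularity class' statistic
classes = community.greedy_modularity_communities(G, weight="weight")

# Headline statistics of the kind Gephi reports
avg_degree = sum(d for _, d in G.degree()) / G.number_of_nodes()
density = nx.density(G)
modularity = community.modularity(G, classes, weight="weight")
```

Running this on the real node and edge tables instead of the toy graph would yield figures directly comparable to Gephi’s statistics panel.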
The edge thickness was also set to vary with the strength of the connection retrieved from the dataset, which corresponds to the number of interactions between the characters in the books.</p><h3>Results &amp; Analysis</h3><p>The final visualization rendered in Gephi after the above-mentioned series of steps and tweaks is shown below. You may also download the <a href="https://drive.google.com/file/d/1_9kYTi7ww6ybB6kJ429dZPQvkGppOZSl/view?usp=sharing"><strong>full resolution visualization</strong></a> to zoom in and inspect the intricacies (Note — please allow it time to render, as it is a large file).</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*UD7yRMj88XIVztNS.png" /><figcaption>Gephi Visualization — Game of Thrones: A Song of Ice and Fire — Character Network</figcaption></figure><p>Some important statistics for my visualization were -</p><ul><li>Average Degree: 7.093</li><li>Average Weighted Degree: 81.982</li><li>Graph Density: 0.009</li><li>Network Diameter: 9</li><li>Modularity: 0.566</li></ul><p>By looking at the visualization, I was amazed to find that Tyrion has been the most prominent protagonist throughout the series, closely followed by Jon Snow. Although Jon Snow was quite predictable, I always had a strong feeling that Tyrion would be one of the victors (or should I say survivors) at the end of the story. After seeing these results, I am quite certain that Tyrion is going to do great things in the upcoming season.</p><p>Although I had also felt that Daenerys, the Khaleesi herself, might end up on the Iron Throne, her slightly less significant presence in the visualization seems to contradict my prediction. However, it is also true that Khaleesi had been on a different side of the map altogether for quite a significant period of time, reducing her interactions with the other characters deeply involved in King’s Landing and Winterfell. 
Her absence from Westeros, therefore, could be one of the reasons behind Daenerys’s slightly smaller node.</p><p>Another interesting conclusion that can be drawn from the visualization is that the locations of the nodes and how they cluster together have a strong connection with where the characters were geographically placed in the land of Westeros and its surrounding regions. For instance, Jon Snow, Samwell Tarly and others are all clustered together, representing the brothers of the Night’s Watch, whereas Khaleesi, Drogo, and Jorah Mormont form a cluster representing the Essos area, where their story runs in parallel. Also, the Lannisters cluster together, representing their presence in the King’s Landing region, and most of the Starks cluster around Winterfell.</p><p>Finally, it is amazing to see that Arya Stark is also quite significant in the visualization; her cluster comprises lesser-known characters from Braavos, yet she still has a strongly weighted connection with Sansa Stark, with whom, per the story, she keeps in touch.</p><h3>Reflection &amp; Future Direction</h3><p>Gephi is open-source software, and I really appreciate its ability to render network visualizations so quickly and effectively, free of cost! However, I still believe there is huge scope for improvement in the overall interface, which feels dated and is not easy to use. Furthermore, the lack of an undo feature was a huge inconvenience. For instance, if I performed an undesirable action, I needed to reload the last save point and redo all the steps. This dampened my desire to explore the tool in depth, as experimenting with various actions was cumbersome.</p><p>For this project, I covered a combined dataset of volumes 1 through 5 of the series “A Song of Ice and Fire”. On his GitHub page, Andrew Beveridge has also uploaded datasets for each of the volumes individually. 
I believe it would be a great future project to perform a book-wise analysis of the character network and highlight the changes as the story progresses.</p><p>I would also like to find a dataset that covers the television series, Game of Thrones, so that I can map and analyze the network of characters on a per-season basis. Furthermore, it would also be very interesting to compare and contrast the novels and the television series and find out how the two differ.</p><p>Finally, a whole new dataset awaits! Game of Thrones season 8 is due to be released in 2019, and I can’t wait to grab the dataset once the entire season has aired. It would be fun to analyze the final season; hopefully, not too many characters will fall to the White Walkers, and I will still have a significant number of nodes to work with!</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=61587eefba9d" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Exploring Timeline JS — Noteworthy Infographic Inventions from the 19th Century]]></title>
            <link>https://medium.com/@sanchit_kumar/exploring-timeline-js-noteworthy-infographic-inventions-from-the-19th-century-2a896a5b26f5?source=rss-49f9005ee6d5------2</link>
            <guid isPermaLink="false">https://medium.com/p/2a896a5b26f5</guid>
            <category><![CDATA[charts]]></category>
            <category><![CDATA[information-visualization]]></category>
            <category><![CDATA[timeline]]></category>
            <category><![CDATA[infographics]]></category>
            <category><![CDATA[data-visualization]]></category>
            <dc:creator><![CDATA[Sanchit Kumar]]></dc:creator>
            <pubDate>Mon, 29 Oct 2018 14:56:50 GMT</pubDate>
            <atom:updated>2018-10-29T14:56:50.033Z</atom:updated>
<content:encoded><![CDATA[<h3>Exploring Timeline JS — Noteworthy Infographic Inventions from the 19th Century</h3><p>Using Timeline JS to take a look at noteworthy infographic inventions from the 19th century that we still heavily use today</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*5CMz-4QqRmEZVOoK.png" /></figure><h3>Introduction</h3><p>Infographics, a term created by clipping and compounding the words ‘Information’ and ‘Graphics’, are visual or graphic representations of information that allow clear and swift communication. They leverage the human visual system’s ability to identify patterns and trends.</p><p>Creating infographics is by no means a new technique; the earliest storytellers were the cave dwellers who lived on earth forty thousand years ago. They took up their brushes to create cave paintings, which told stories about events they thought noteworthy. This was, hence, an act of using graphics to present information so as to communicate with others, which is exactly what we define as ‘Infographics’.</p><p>There have been many iconic infographic inventions in the past that were so robust that they are still widely used today with minimal, or in some cases no, modifications at all. It is very intriguing to learn about the inventors’ creative genius that led to these incredible information visualizations. I, therefore, decided to create a timeline view of some of these spectacular inventions from the 19th century, which I found the most relatable due to my background in Engineering and Technology, having used many of them myself in the recent past. 
My goal is to showcase these fantastic time-tested inventions in chronological order, using a popular information visualization tool.</p><h3>Materials</h3><h4>Tools -</h4><ul><li><a href="https://timeline.knightlab.com/"><strong>TimelineJS</strong></a> — An open source tool developed by Northwestern University’s <a href="https://knightlab.northwestern.edu/"><strong>Knight Lab</strong></a> that allows users to create interactive timelines by simply filling in a spreadsheet.</li><li><a href="https://www.google.com/sheets/about/"><strong>Google</strong> <strong>Sheets</strong></a> — A web-based application that allows users to create, update and modify spreadsheets and share the data online. This tool also allows collaboration and is a part of Google’s G-Suite of web applications.</li><li><a href="https://products.office.com/en-us/onenote/digital-note-taking-app"><strong>OneNote</strong></a> — An application that allows users to create notebooks in which they can store, organize, and search resources and notes. It is a part of Microsoft’s Office 365 suite.</li></ul><h4>Dataset -</h4><ul><li><a href="http://infowetrust.com/scroll/"><strong>RJ Andrews, “History of Infographics”</strong></a></li></ul><h3>Methodology</h3><p>I began my study by analyzing <a href="https://infowetrust.com/scroll/"><strong>RJ Andrews’ “History of Infographics”</strong></a>, which was shared by my information visualization professor. It is a wonderful data source to find out about iconic inventions in the world of infographics in an interactive fashion. I started reading blurbs about each of the inventions on the website, while simultaneously making a list of the ones that really piqued my interest. 
Some of these gave me a huge nostalgia trip as well, as I had used them several years back when I was pursuing my Bachelor’s in Engineering — for instance, Mendeleev’s Periodic Table, which I used quite a lot to look up elements and their atomic numbers.</p><p>With a lot of Google searching, I found good articles about the visualizations I had selected to showcase, from which I drew the clearest definitions to put onto the timeline. This helped me learn more about these visualizations and their variations. For instance, the pie chart has a variant, known as the ‘multilevel pie chart’, that visualizes hierarchical data using concentric circles.</p><p>My next step was to choose appropriate sample images for these visualizations in order to effectively communicate their appearance and use. Yet again, the articles I had previously visited, along with Google image search, helped me find good sample images.</p><p>As I was collecting all the aforementioned data points, I was simultaneously entering them into my OneNote app in an organized manner. After this was complete, I transferred the data over to the TimelineJS spreadsheet [Fig 1] and followed the steps on their website to generate the timeline. 
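Besides the Google Sheets route, TimelineJS can also be driven by a JSON configuration, which is handy when the timeline data is generated programmatically. The sketch below builds such a config in Python; the two events are illustrative picks from the 19th century, and the exact field names follow the JSON format described in the TimelineJS documentation (an assumption worth verifying against the current docs).

```python
import json

# Illustrative events: (year, headline, description)
events = [
    (1801, "Pie Chart", "Introduced by William Playfair."),
    (1869, "Periodic Table", "Mendeleev arranges the elements."),
]

# Build a TimelineJS-style JSON config from the event tuples
timeline = {
    "title": {"text": {"headline": "19th-Century Infographic Inventions"}},
    "events": [
        {
            "start_date": {"year": str(year)},
            "text": {"headline": headline, "text": body},
        }
        for year, headline, body in events
    ],
}

config_json = json.dumps(timeline, indent=2)
```

The resulting JSON can be passed to the TimelineJS embed in place of a spreadsheet URL, sidestepping the manual data-entry step.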
I then made several changes to the look and feel, swapping the images used to improve readability and effectiveness.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*bQyxNo80tADkEQ2F.png" /><figcaption>[Fig 1: Google Sheets Template]</figcaption></figure><h3>Results</h3><p>Please access –</p><ul><li><a href="https://cdn.knightlab.com/libs/timeline3/latest/embed/index.html?source=1v4c0hXCJrvTwVjDK9B8xD-UE2cqQM2tcXVm9I35cGdM&amp;font=Default&amp;lang=en&amp;initial_zoom=2"><strong>TimelineJS Project — Noteworthy Infographic Inventions from the 19th Century</strong></a></li><li><a href="https://docs.google.com/spreadsheets/d/1v4c0hXCJrvTwVjDK9B8xD-UE2cqQM2tcXVm9I35cGdM/edit?usp=sharing"><strong>Google Sheets Template</strong></a></li></ul><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*dU348UFnQ4gP-UcEg89wog.png" /><figcaption>[Fig 2: Gallery of screenshots of my TimelineJS Project]</figcaption></figure><p>From this timeline [Fig 2], I hope to make viewers aware of how far back in time these amazing inventions were made in the field of Infographics. I want them to see how robust they were and how little, if anything, has changed about them since their invention. I hope my use of imagery helps them understand the visualizations at first glance and learn how the 19th century contributed to the world of infographics.</p><h3>Reflection &amp; Future Directions</h3><p>I always enjoy the feeling of learning a new tool or technique, and TimelineJS was no exception. It is truly a spectacular tool that even someone with no technical background can use. However, my technology background, with experience in HTML and JavaScript, made me notice the tool’s shortcomings. I constantly felt the lack of user interface customization options, which makes the bottom section of the timelines look almost the same for many users. 
As a result, anyone familiar with the tool can tell at first glance that TimelineJS was used, which brings down the uniqueness factor significantly.</p><p>Yet another pain point was that although most of my events happened to be spaced about a decade apart, one event was almost two decades from its neighbor, creating an awkward long jump on my timeline. I believe an option to show an axis break (or kink) across larger gaps, shortening them a bit, would make the timeline look more evenly spaced.</p><p>Finally, I invested a lot of time in selecting background images for my timeline. However, I later realized that the images, even though subtle, were huge sources of distraction and also made the text harder to read. I, therefore, decided to remove the background images and use a subtle background color instead. I will definitely avoid background images in the future.</p><h3>References</h3><ul><li><strong>RJ Andrews, History of Infographics</strong><br><a href="https://infowetrust.com/scroll/"><strong>https://infowetrust.com/scroll/</strong></a></li><li><strong>University of Leicester</strong><br><a href="https://www2.le.ac.uk/offices/ld/resources/numerical-data/pie-charts"><strong>https://www2.le.ac.uk/offices/ld/resources/numerical-data/pie-charts</strong></a></li><li><strong>Study.com</strong><br><a href="https://study.com/academy/lesson/what-is-a-stacked-bar-chart.html"><strong>https://study.com/academy/lesson/what-is-a-stacked-bar-chart.html</strong></a></li><li><strong>Wyzant.com</strong><br><a href="https://www.wyzant.com/resources/lessons/math/statistics_and_probability/averages/cumulative_frequency_percentiles_and_quartiles"><strong>https://www.wyzant.com/resources/lessons/math/statistics_and_probability/averages/cumulative_frequency_percentiles_and_quartiles</strong></a></li><li><strong>Berkeley</strong><br><a 
href="https://evolution.berkeley.edu/evolibrary/article/0_0_0/evotrees_primer_02"><strong>https://evolution.berkeley.edu/evolibrary/article/0_0_0/evotrees_primer_02</strong></a></li><li><strong>Chegg</strong><br><a href="https://www.chegg.com/homework-help/definitions/normal-curve-31"><strong>https://www.chegg.com/homework-help/definitions/normal-curve-31</strong></a></li><li><strong>Lucid Chart</strong><br><a href="https://www.lucidchart.com/pages/organizational-charts"><strong>https://www.lucidchart.com/pages/organizational-charts</strong></a><br><a href="https://www.lucidchart.com/pages/venn-diagram"><strong>https://www.lucidchart.com/pages/venn-diagram</strong></a></li><li><strong>Live Science</strong><br><a href="https://www.livescience.com/25300-periodic-table.html"><strong>https://www.livescience.com/25300-periodic-table.html</strong></a></li><li><strong>Population Education</strong><br><a href="https://populationeducation.org/what-population-pyramid/"><strong>https://populationeducation.org/what-population-pyramid/</strong></a></li><li><strong>Wikipedia — Poverty Map</strong><br><a href="https://en.wikipedia.org/wiki/Poverty_map"><strong>https://en.wikipedia.org/wiki/Poverty_map</strong></a></li></ul><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=2a896a5b26f5" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[The UX of the Smart Car]]></title>
            <link>https://medium.com/@sanchit_kumar/the-ux-of-the-smart-car-78910b0ccbc9?source=rss-49f9005ee6d5------2</link>
            <guid isPermaLink="false">https://medium.com/p/78910b0ccbc9</guid>
            <category><![CDATA[smart-cars]]></category>
            <category><![CDATA[smart]]></category>
            <category><![CDATA[iot]]></category>
            <category><![CDATA[ux]]></category>
            <category><![CDATA[cars]]></category>
            <dc:creator><![CDATA[Sanchit Kumar]]></dc:creator>
            <pubDate>Mon, 29 Oct 2018 14:40:14 GMT</pubDate>
            <atom:updated>2018-10-29T14:40:14.846Z</atom:updated>
<content:encoded><![CDATA[<p>Various challenges in smart cars and how UX Design helps</p><p>Remember the television show of the 80s called Knight Rider? If yes, I’m sure you are picturing the uber-cool black Pontiac Firebird with the blinking red lights. The show was a hit, mostly thanks to the futuristic feature-packed beast of a car that assisted its undercover-cop owner in solving crimes. The car could self-drive, follow voice commands, make decisions and so much more. Today, big players like Tesla Motors, Mercedes-Benz, and Audi are working hard to make cars smarter by the day, and we can see the Knight Rider car becoming a reality. Equipped with various sensors and complex electronic systems, cars can assist drivers with near self-driving capability. The responsibility of making decisions for safety and navigation is shifting from the driver to artificially intelligent systems. Easy integration of software such as Android Auto and Apple CarPlay with in-car systems allows adding the ‘Smart’ factor to almost any car. In this rapidly changing landscape, where consumers are being exposed to smarter vehicles, several challenges arise, many of which can be tackled by User Experience techniques.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*7hu7g-EfohJYiIlN.jpg" /><figcaption>Source: <a href="https://zero.eu/en/eventi/77408-back-to-the-90s,roma/">https://zero.eu/en/eventi/77408-back-to-the-90s,roma/</a></figcaption></figure><h3>UX Challenges in Smart Cars</h3><p><strong>Driver Distraction</strong></p><p>Infotainment systems in cars are extremely useful for viewing and controlling various settings and features as well as managing the entertainment systems. However, driver interaction with these systems can be a source of distraction. 
Wired connectivity of the infotainment unit with personal devices can also cause physical obstruction in some cases, not to mention the painful process of managing cables and hunting for hard-to-reach connection ports. These issues are not only frustrating for the driver but can also cause obstructions or distractions that could lead to accidents (Reuters, <a href="https://www.reuters.com/article/us-autos-displays-safety-insight/car-dashboards-that-act-like-smart-phones-raise-safety-issues-idUSKCN0PH0BO20150707">Car Dashboards that act like smartphones raise safety issues</a>, July 2015).</p><p><strong>Trust &amp; Confidence</strong></p><p>With advanced systems on board, cars these days can predict collision scenarios and detect pedestrians, among various other intelligent actions. When the car makes several of the decisions for you, it is quite possible that a lack of trust or confidence could creep in (Mashable, <a href="http://mashable.com/2017/10/10/you-will-want-to-ride-in-a-self-driving-car/">Self-Driving Cars Still Need to Earn the Public’s Trust</a>, Oct 2017). Questions like “What if the system doesn’t work?” and “What if there is a malfunction?” are quite common when users first experience smart cars.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*Ep7rOeon5PBvdQtR.jpg" /><figcaption>Credit: Tesla Motors</figcaption></figure><p><strong>Perceived Safety</strong></p><p>Drivers or passengers can develop high levels of perceived safety due to the powerful and promising capabilities of smart cars. 
While it is a good idea to make people feel safe, it can also lead to negative results, such as drivers relying completely on the onboard systems and not paying sufficient attention during dangerous situations (Auto-ui.org, <a href="https://www.auto-ui.org/12/adjunct-proceedings/t5-ux-design-vehicles-tscheligi.pdf">User Experience Design for Vehicles</a>, 2012).</p><p><strong>Driver Emotion</strong></p><p>The driver’s emotional state is an important issue for automotive safety. Emotions affect perception and action and are hence extremely relevant. Several driving behaviors are negatively affected by emotions, with stress, anger, and aggression linked to accidents. Frustration arising from poor usability of in-car interfaces and controls, or from difficulty navigating, can also contribute heavily to a negative emotional state (Auto-ui.org, <a href="https://www.auto-ui.org/12/adjunct-proceedings/t5-ux-design-vehicles-tscheligi.pdf">User Experience Design for Vehicles</a>, 2012).</p><h3>How UX Design is Making Smart Cars Better</h3><p><strong>User (Driver) Research</strong></p><p>There are many possible contexts for a driver, be it a slippery drive on the rainy roads of Portland or a foggy encounter on the crests and troughs of San Francisco. If these contexts are not kept in mind while designing for smart cars, there is a high chance the design will fail, and failure in a driving scenario could have terrible consequences. 
User research is hence essential: UX practitioners observe and interview people driving in real-world scenarios, uncovering their pain points and understanding their behavior (Key Lime Interactive, <a href="http://blog.keylimeinteractive.com/user-research-methods-for-automotive-user-experience">User Research Methods for Automotive User Experience</a>, Oct 2015).</p><p>Google, for example, conducts its research by simulating real-world scenarios and combining them with special eye-tracking sensors to understand how drivers interact with Android Auto (Android Central, <a href="https://www.androidcentral.com/android-auto-research-lab-drives-its-way-new-video">This is how Google Tests the Safety of Android Auto</a>, 2016).</p><p><strong>Augmented Reality (AR)</strong></p><p>Many companies are now working on technologies and techniques that augment reality for drivers, to prevent distractions and instill a sense of confidence. This is often achieved with overlay interfaces that either appear directly on the windscreen of the vehicle or sit on top of a real-time view of the car’s surroundings displayed on the infotainment screens. The interface elements are intentionally made translucent, allowing good visibility of the environment with minimal obstruction. UX designers make use of typography, design principles and user research to create and refine these overlay interfaces for clarity and understandability (CNET — Roadshow, <a href="https://www.cnet.com/roadshow/news/augmented-reality-in-the-car-steps-towards-production-at-ces-2017-harman-continental-visteon/">Augmented Reality in the Car Steps Towards Production at CES 2017</a>, Jan 2017).</p><p>Visualizations and graphical representations are heavily used to allow quicker recognition through peripheral vision, eliminating the need to fixate on the interface rather than on the road. 
These techniques also make it possible to give users timely feedback on any decisions made by the car’s artificial intelligence and to display upcoming decisions in advance. This builds trust with the user, as it eliminates the element of surprise when the car makes a decision.</p><p>An example of AR in vehicles is Harman’s LIVS concept showcased at CES 2017, which is capable of placing street signs over intersections, greatly helping drivers (Harman, <a href="http://news.harman.com/releases/harman-unveils-the-ultimate-in-car-experience-for-intelligent-intuitive-autonomous-driving-at-ces-2017">HARMAN Unveils the Ultimate In-Car Experience for Intelligent, Intuitive Autonomous Driving at CES 2017</a>, 2017).</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*HeioiDPsWq0dft6Y.jpg" /><figcaption>Credit: Mercedes Benz</figcaption></figure><p><strong>Voice Commands &amp; Gestures</strong></p><p>Gestures and voice commands are powerful ways to interact with smart cars. They reduce the hand-eye coordination effort and the reliance on haptic feedback involved in using touchscreen interfaces or the car’s other controls. With devices such as Google Home and Amazon Alexa in the market and assistants like Siri and Google Assistant on almost every smartphone out there, consumers are growing more comfortable with voice commands as time passes. Gestures are second nature to humans, and it is a great idea to leverage this trait for interaction with smart cars (Medium, <a href="https://medium.com/@eytand/in-car-ux-7c5474d2b638">Reimagining In-Car UX</a>, Sep 2014).</p><p>For instance, BMW allows drivers to use gestures to accept in-car phone calls and control the music volume. 
Several voice commands also exist for vital tasks that drivers frequently perform, such as navigation (Medium, <a href="https://medium.com/@pvermaer/5-car-ux-trends-43e3ca2471ca">5 Car UX Design Trends</a>, Aug 2015) (Carnation Software, <a href="http://www.carnationsoftware.com/carnation/images/BMW-03_00-Voice-Control-en.pdf">BMW Owner’s Manual</a>).</p><p>UX designers incorporate these interaction methods to reduce drivers’ cognitive load, allow a more natural and usable way to perform actions, and increase driver confidence. Maybe one day we can finally say goodbye to blindly and clumsily hunting for knobs in some corner of the car.</p><h3>Conclusion</h3><p>Many significant advances have been made at the intersection of the automotive and digital fields, and they will continue well into the foreseeable future. Consumers have also become more immersed in the digital landscape, and their needs and demands are advancing rapidly. With increasing competition and easier access to advanced technology, it is essential for manufacturers to ensure top-notch in-car UX to maintain their position in the market. We might still be a long way from a time when every vehicle is self-driven with the most advanced and intelligent systems on board, but it is quite intriguing to imagine the new ways humans will interact with these marvels and how UX professionals will shape the in-car experience.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=78910b0ccbc9" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Design Critique — IMDb’s Android App]]></title>
            <link>https://medium.com/@sanchit_kumar/design-critique-imdbs-android-app-92b8172d9027?source=rss-49f9005ee6d5------2</link>
            <guid isPermaLink="false">https://medium.com/p/92b8172d9027</guid>
            <category><![CDATA[design]]></category>
            <category><![CDATA[ux]]></category>
            <category><![CDATA[critique]]></category>
            <category><![CDATA[imdb]]></category>
            <category><![CDATA[usability]]></category>
            <dc:creator><![CDATA[Sanchit Kumar]]></dc:creator>
            <pubDate>Mon, 29 Oct 2018 14:27:33 GMT</pubDate>
            <atom:updated>2018-10-29T14:27:33.274Z</atom:updated>
<content:encoded><![CDATA[<h3>Design Critique — IMDb’s Android App</h3><p>How usable is IMDb’s Android app?</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/960/0*5A0BXl0_GcRshdZ4.png" /></figure><h3>Introduction</h3><p><strong>IMDb</strong> or <strong>Internet Movie Database</strong> is a popular online database of information related to movies, TV shows, video games, awards, and events. The database comprises information on production crew and cast, summaries, synopses, reviews and ratings, which millions of registered users consume daily. As per <a href="https://play.google.com/store/apps/details?id=com.imdb.mobile">Google Play Store</a>, there are over 100 million users of the app on the Android platform!</p><h3>Android App by IMDb: Well-designed?</h3><p>The IMDb Android app’s design language has evolved slowly and has recently adopted <a href="https://material.io">Material Design</a>. While some design decisions made across the app are backed by strong logic and can be applauded, a few flaws can also be uncovered in the design. Here are some of the <strong>key findings</strong>:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/24/0*e4LyDtKtEKAjddBV.png" /></figure><p><strong>Failed to scroll on the home screen:</strong> Encountered a gulf of execution</p><p>Upon launching the app, a scrollable newsfeed-like interface is presented. With a <strong>goal</strong> — ‘find a good movie’ — in mind, there is an immediate <strong>intent</strong> to explore the landing screen. The mind creates a <strong>plan</strong> to scroll for exploration. However, due to an obtrusive advert that blocks the lower ~10% of the screen (which happens to be the most usable area of the display), scrolling fails, as the swipe is registered on the advert’s overlay element, rather than on the app’s interface <strong>(Fig. A)</strong>. 
This creates a <strong>gulf of execution</strong> as the intended action cannot be executed.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/281/0*_af6V5f_ZBUnhflp.gif" /><figcaption>Fig. A: Ad overlay blocks interaction</figcaption></figure><blockquote><strong><em>Recommendation:</em></strong><em> Embedding the advert in the app’s main interface, rather than having it as a static overlay, will eliminate the obstruction and bridge the </em><strong><em>gulf of execution</em></strong><em>, thereby improving the design </em><strong><em>(Fig. B)</em></strong><em>. An alternate solution would be to substantially reduce the height of the advert to decrease the probability of the user’s thumb landing on it.</em></blockquote><figure><img alt="" src="https://cdn-images-1.medium.com/max/960/0*eHGHKG9KaTaOgLde.png" /><figcaption>Fig. B: Ad embedded in the app’s interface no longer blocks the interaction</figcaption></figure><figure><img alt="" src="https://cdn-images-1.medium.com/max/24/0*i1M3Ua5eZnvxdWuG.png" /></figure><p><strong>Swiping between tabs is disabled:</strong> A logical constraint</p><p>With increasing importance being given to design, several Android apps have greatly improved their overall quality and conformance to the <a href="https://material.io/guidelines/">Material Design guidelines</a>. The easy-to-understand navigation of the IMDb app makes it an excellent example of this. Labels such as ‘Home’, ‘Movies’, ‘TV’ etc. are straightforward and map very well to users’ potential goals.</p><p><a href="https://material.io/guidelines/components/tabs.html">Navigation tabs</a> are seen in several popular apps such as Messenger, Twitter etc., and users are well versed in the interaction patterns associated with them (for instance, switching between tabs can be achieved by swiping left or right in the tab container). However, when it comes to the IMDb app, tabs cannot be swiped across <strong>(Fig. C)</strong>. 
But why?</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/281/0*3Jmcp18VAeJW5BgS.gif" /><figcaption>Fig. C: Tab swipe is blocked to eliminate error due to horizontal scrolling lists</figcaption></figure><p>It is important to note that there are several horizontal-scroll lists in the app (e.g. ‘Most popular movies’ in <strong>Fig. C</strong>) that encourage exploration by swiping left or right to reveal more list items. Having the same gesture for swiping between tabs can cause a conflict and make the interface error-prone (users might want to scroll through the list, but might end up switching tabs or vice-versa). The designers of the app have, therefore, intelligently made use of Donald Norman’s principle of <strong>constraints</strong> by disabling ‘swipe to switch tabs’, so that users do not fall prey to making these errors.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/24/0*6MDpN-3PG7fmihU2.png" /></figure><p><strong>Unable to find articles about awards or events:</strong> Poor discoverability</p><p>As Donald Norman states, “<em>Two of the most important characteristics of good design are discoverability and understanding</em>”. It is therefore very important to ensure that the features or functions the user desires are <strong>discoverable</strong>. The IMDb app, while mostly working well in this regard, does seem to fail in a particular scenario. With the <strong>goal</strong> of ‘reading articles about awards or events’ in mind, tapping on the ‘Awards &amp; Events’ tab seems logical, but the landing screen only displays videos at the top with no indication of any other information. On revisit, however, articles suddenly appear below the video-container <strong>(Fig. D)</strong>. So, what went wrong?</p><p>There was a lack of <strong>feedback</strong> to indicate that the app was loading list items below the video. 
There were also no <strong>signifiers</strong> to indicate that a list of articles exists (such as a heading label or a container). These factors led to <strong>poor discoverability</strong> of the feature, resulting in failure to achieve the goal of finding articles about events or awards.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/281/0*-SqqpCsIjqB9SIxo.gif" /><figcaption>Fig. D: Poor discoverability of articles about Awards and Events</figcaption></figure><blockquote><strong><em>Recommendation:</em></strong><em> Improving </em><strong><em>discoverability</em></strong><em> can be achieved by fixing the root causes of the problem: improving </em><strong><em>feedback</em></strong><em> and providing strong </em><strong><em>signifiers</em></strong><em>. A loading / in-progress spinner can be added for feedback, and a container along with a heading label (such as ‘Articles’) can be included below the video-container to indicate the upcoming content. </em><strong><em>(Fig. E)</em></strong></blockquote><figure><img alt="" src="https://cdn-images-1.medium.com/max/960/0*TzFq1zU7hFuv8_3W.png" /><figcaption>Fig. E: Improved discoverability by adding signifiers (title ‘Articles’ or displaying a container) and feedback (loading spinner)</figcaption></figure><h3>Conclusion</h3><p>In light of the above, it is clear that even though IMDb is the world’s most popular database of movies, TV shows and more, and its Android app is well-designed, robust and effective, a few design flaws still plague the interface. Addressing these flaws would greatly enhance the experience for users.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/300/0*B2NSkB0RI1bMqXC6.png" /></figure><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=92b8172d9027" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[My Experience with Tableau — Visualizing Defacement Crime in New York City (2014 to 2018)]]></title>
            <link>https://medium.com/@sanchit_kumar/my-experience-with-tableau-visualizing-defacement-crime-in-new-york-city-2014-to-2018-59ebda3faedf?source=rss-49f9005ee6d5------2</link>
            <guid isPermaLink="false">https://medium.com/p/59ebda3faedf</guid>
            <category><![CDATA[tableau]]></category>
            <category><![CDATA[new-york-city]]></category>
            <category><![CDATA[data-visualization]]></category>
            <category><![CDATA[data-analysis]]></category>
            <category><![CDATA[information-visualization]]></category>
            <dc:creator><![CDATA[Sanchit Kumar]]></dc:creator>
            <pubDate>Mon, 29 Oct 2018 14:19:59 GMT</pubDate>
            <atom:updated>2018-10-30T19:38:25.472Z</atom:updated>
<content:encoded><![CDATA[<h3>My Experience with Tableau — Visualizing Defacement Crime in New York City (2014 to 2018)</h3><p>Using Tableau, a popular Data Visualization tool, to analyze defacement crime in NYC through information visualization techniques</p><h3>Introduction</h3><h4>The Idea</h4><p>When I first came to New York from India in 2017, I noticed several graffitied buildings throughout the city. I had never seen anything quite like that before, and I had several questions racing through my mind — “Who would do this?”, “Why would they want to deface someone’s property?”, “How do the police deal with this situation?”.</p><p>A year later, while searching for datasets, I came across the ‘Encroachments and Defacements’ dataset hosted on ‘NYC OpenData’. I was immediately interested in visualizing various aspects of defacement crimes in New York City over time and hence selected this dataset.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*rjHIgGA8L0Dw89ix.jpg" /><figcaption><em>Source: </em><a href="https://unsplash.com/photos/HxwdZ7vLwaY"><em>https://unsplash.com/photos/HxwdZ7vLwaY</em></a></figcaption></figure><h4>The Inspiration</h4><p>Since my dataset consisted of crime reports, I began searching for crime data visualization examples. I found a great <a href="https://www.nytimes.com/interactive/projects/crime/homicides/map"><strong>New York Times article</strong></a> (works only on Internet Explorer), which displayed visualizations of homicides in New York City between the years 2003 and 2011. A great feature for users was that the visualization displayed the crime data in the form of a timeline, where the year could be selected to see the crimes committed in that timeframe. Users could also see the data for all years at once. Another thing I found really interesting about this visualization was that I could see the crime data for each of New York City’s boroughs. 
They used different shades of blue to represent the various boroughs, but one thing I didn’t like was that the shades were too similar to differentiate easily; I had to look very carefully to make out the difference.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*966swX3PY8KE875M.png" /><figcaption><em>Source: </em><a href="https://www.nytimes.com/interactive/projects/crime/homicides/map"><em>https://www.nytimes.com/interactive/projects/crime/homicides/map</em></a></figcaption></figure><p>Another example that drove my design decisions was an <a href="https://maps.nyc.gov/crime/"><strong>NYC.gov</strong></a> website that displayed various crime maps. It had many filters based on ‘Crime Type’ and ‘Date Range’, which is great for users who want to isolate the data. The markers on the map varied in size based on the number of crimes in a neighborhood, and they helped effectively describe the count visually for users.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*kRqK0K-_Ya_1kOR_.png" /><figcaption><em>Source: </em><a href="https://maps.nyc.gov/crime/"><em>https://maps.nyc.gov/crime/</em></a></figcaption></figure><p>Finally, I also found a series of visualizations from <a href="https://www.statista.com/statistics/191219/reported-violent-crime-rate-in-the-usa-since-1990/"><strong>Statista.com</strong></a> very interesting; they made use of bar and line charts to depict the timeline of various kinds of crimes in the US. I thought it was an effective approach for highlighting the rise or fall of crimes from year to year. 
This would also enable users to compare data side by side quickly and easily.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*nCLBMt5X7eyDSvu_.png" /><figcaption><em>Source: </em><a href="https://www.statista.com/statistics/191219/reported-violent-crime-rate-in-the-usa-since-1990/"><em>https://www.statista.com/statistics/191219/reported-violent-crime-rate-in-the-usa-since-1990/</em></a></figcaption></figure><h3>Materials</h3><h4>Tools –</h4><p><a href="https://products.office.com/en-us/excel"><strong>Microsoft Excel</strong></a> — A spreadsheet application by Microsoft, part of the MS Office suite.</p><p><a href="http://openrefine.org/"><strong>OpenRefine</strong></a> — A tool for working with messy data, which helps in cleaning it up quickly and effectively.</p><p><a href="https://public.tableau.com/en-us/s/"><strong>Tableau Public</strong></a> — A free-to-use data visualization tool with many visualization techniques built in.</p><h4>Dataset –</h4><p>The dataset I used for my lab is ‘<a href="https://data.cityofnewyork.us/Transportation/Encroachments-and-Defacements/kyvb-rbwd"><strong>Encroachments and Defacements</strong></a>’, which I found while browsing through datasets on <a href="https://opendata.cityofnewyork.us/"><strong>NYC OpenData</strong></a>.</p><h3>Process</h3><h4>Cleaning up the data</h4><p>After I had selected the dataset, I started off by cleaning up the data to make it usable and ready for analysis. I used the OpenRefine tool to upload the CSV dataset (obtained from NYC OpenData) and made improvements to its raw form. Many of the dates were in different formats, and hence I had to bring uniformity to them. The datatype also needed to be transformed from ‘text’ to ‘date’. I also had to add a facet for getting the data only for ‘Defacement’ crimes, as the set also included data for ‘Abandonment’, ‘Encroachment’ and ‘ATM’, which I didn’t include in the scope of this project. 
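</p><p><em>As an aside, the cleanup steps described here (normalizing mixed date formats, converting text to dates, and faceting down to ‘Defacement’ records) could equally be sketched in a few lines of pandas. This is only an illustration with hypothetical column names and sample rows, not part of the original OpenRefine workflow:</em></p>

```python
import pandas as pd

# Hypothetical raw rows: mixed date formats and several complaint types
raw = pd.DataFrame({
    "Complaint Type": ["Defacement", "Encroachment", "Defacement"],
    "Created Date":   ["03/01/2014", "2014-06-10", "Jan 5, 2015"],
})

# Bring the mixed date formats to a uniform datetime type
# (parsing element by element handles inconsistent formats)
raw["Created Date"] = raw["Created Date"].apply(pd.to_datetime)

# Keep only 'Defacement' records (the facet applied in OpenRefine)
defacement = raw[raw["Complaint Type"] == "Defacement"].copy()
```

<p>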
I also transformed many of the number cells’ datatype from ‘text’ to ‘number’.</p><h4>Transferring the data into Tableau</h4><p>After the data was clean and ready for analysis, I exported the data from OpenRefine as an Excel file for importing into Tableau. However, I encountered an error in the OpenRefine console and therefore had to export a CSV instead. The workaround was to open the refined CSV in Microsoft Excel and use its export feature to convert it to Excel format. I was then finally able to import the data into Tableau.</p><h4>Analyzing the data + Creating Visualizations</h4><p>After loading the data into Tableau, I first created a visualization of the number of reported defacement crimes per year, for each borough. I chose a line chart since it is great for showcasing time series data, and the difference in the number of reported crimes between boroughs was immediately evident. However, I noticed that the line chart also had values for the year 2018 (since the data for 2018 was present in the dataset). But since 2018 is the current year and still in progress, the data for this year isn’t complete. It hence did not make sense to include this year, and I added an exclusion for it.</p><p>It was also important to understand how much time the police took to resolve these reported issues. To see this for each borough and each year, I created a calculated field: the difference between the initial crime report date and the closing date columns. This difference gives the resolution time, i.e. the number of days between when the crime was reported and when the case was closed. After creating this calculated field, I applied the average function to these values for each year per borough and plotted them against year on the x-axis. Yet again, I selected the line chart, as it is very effective for visualizing and comparing time series data. 
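</p><p><em>Conceptually, this calculated field is just a date difference averaged per borough and year. As a rough sketch (hypothetical column names and sample rows, not the original Tableau workbook), the same computation in pandas would be:</em></p>

```python
import pandas as pd

# Hypothetical sample of the cleaned dataset (column names are assumed)
df = pd.DataFrame({
    "Created Date": pd.to_datetime(["2014-03-01", "2014-06-10", "2015-01-05"]),
    "Closed Date":  pd.to_datetime(["2014-03-11", "2014-06-20", "2015-02-04"]),
    "Borough":      ["BROOKLYN", "BRONX", "BROOKLYN"],
})

# Calculated field: resolution time in days (closing date - report date)
df["Resolution Days"] = (df["Closed Date"] - df["Created Date"]).dt.days

# AVG aggregation per year per borough, as plotted on the line chart
avg = df.groupby([df["Created Date"].dt.year, "Borough"])["Resolution Days"].mean()
```

<p>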
Unlike in the previous visualization, I did not exclude the year 2018 here, as an average value will not vary significantly over the rest of the year.</p><p>Finally, I was most excited about the geo-data that was available in the dataset, and I proceeded with creating a bubble map visualization of the defacement crimes. I used the latitude and longitude coordinates to plot the points on the map for each crime report. I color-coded the points by borough using an accessible color palette, to make the borough boundaries easy to distinguish for all users. The points also grow larger with the number of crime reports in the same neighborhood over the years, which helps users understand the magnitude of crime in a region. I added filters for the year so that users could isolate the data for each year as well.</p><h3>Results</h3><h4>Please access my Tableau project — <a href="https://public.tableau.com/views/DefacementCrimeinNYC/DefacementCrimeinNYC2014-2018?:embed=y&amp;:display_count=yes&amp;publish=yes">Defacement Crime in NYC (Year 2014 to 2018)</a></h4><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*6h0DfxwldDQmx4kc.png" /><figcaption><em>Dashboard — Defacement Crime in New York City</em></figcaption></figure><p>From this visualization, I hope to make viewers aware of how defacement crime has changed over the years, and how efficiently the police are taking care of these problems. I also hope to help users understand how the crime rate and resolution time differ from borough to borough.</p><h4>Visualization 1: Number of Defacement Crimes Reported in NYC</h4><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*qPOIgDkiD-eB1k6K.png" /><figcaption><em>Number of Defacement Crimes Reported in NYC (2014 to 2017)</em></figcaption></figure><p>It is interesting to see how the Bronx and Brooklyn have steadily increasing defacement crime rates since 2014. 
</p><p>The highest rate is observed in Queens in 2016, which is significantly higher than that of all other boroughs in that year, but it steeply drops below that of Brooklyn in 2017.</p><p>Manhattan also peaks above the Bronx in 2016 but remains on the lower side overall, following a trend similar to Queens’, with a peak in 2016 and a steep drop in 2017.</p><p>Brooklyn emerges as the top defacement target in 2017, closely followed by Queens. Next in line in 2017 are the Bronx, Manhattan and Staten Island. Staten Island has the lowest defacement rate and has maintained it quite steadily between 2014 and 2017.</p><h4>Visualization 2: Average Resolution Time for Defacement Crimes in NYC</h4><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*C517Iv3WtI7e4MW0.png" /><figcaption><em>Average Resolution Time for Defacement Crimes in NYC (2014 to 2018)</em></figcaption></figure><p>The police have been making a solid effort in lowering the resolution time for defacement crimes in New York City. The best improvement is observed in the Bronx, whose average resolution time dropped to less than half of its 2014 value.</p><p>The Bronx and Brooklyn saw their best year in terms of resolution time in 2016, while Manhattan and Staten Island are currently (2018) in their best year.</p><p>Brooklyn has seen regular ups and downs in its average resolution time between 2014 and 2018; however, it is currently extremely close to its best year, 2016.</p><p>In 2018, Brooklyn, Queens and the Bronx even out on their resolution times, with Manhattan better off by around 10 days, and Staten Island emerging the best, around 10 days lower than Manhattan.</p><h4>Visualization 3: Defacement Crime Map of NYC</h4><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*u8G_SO6YARoTubev.png" /><figcaption><em>Defacement Crime Map of NYC (2014 to 2018)</em></figcaption></figure><p>It is very interesting to see how the crimes are spread across the 
city and its boroughs. The neighborhoods worst affected over the years can easily be identified by the size of their markers. For instance, West 15th Street in Brooklyn is the worst affected overall, with 3360 defacement reports over the years. 91 Avenue in Queens is next among the worst-affected neighborhoods with 1350 crime reports, still significantly lower than Brooklyn’s.</p><p>Close to JFK Airport, streets 113 through 130 in Queens seem quite badly affected, with several markers spread across the area.</p><p>Manhattan’s Upper East Side seems to be its worst-affected part, with a dense scatter of defacement crime markers.</p><p>The Bronx and Brooklyn are overall quite evenly affected throughout. Staten Island emerges as the cleanest borough, with sparsely scattered markers.</p><h3>Reflection &amp; Future Directions</h3><p>Learning Tableau has been a great journey so far, and I was amazed at the ability of the tool to create beautiful visualizations out of raw data. I also loved the fact that there is a color-blind-friendly palette to make visualizations more accessible.</p><p>I also learned that data can be messy, and the cleanup process is not always a one-time activity. The need for further cleanup can crop up at any time during the analysis and visualization phases as well. For instance, I noticed bad data for the year 2013 during the visualization phase. While it was easy to exclude it from the line charts, this could not be done in the case of the map visualization. I had opted to show the year filter to users, and Tableau did not allow me to hide the year ‘2013’ from the list, even though it let me exclude the data from the visualization. This was confusing, as there was no data for 2013 to be visualized, but the filter still showed the year 2013. 
I therefore had to clean up the Excel sheet (removing the bad data for 2013) and refresh the dataset in Tableau.</p><p>While searching for crime data on the NYC OpenData website, I found several datasets for various other crimes, and I feel it would be great to explore those in future projects. I would also like to look for other sources with data for the entire United States; it would be extremely interesting to see how other states compare to New York.</p><h3>References</h3><ul><li>Stephen Few, “Effectively Communicating Numbers: Selecting the Best Means and Manner of Display”</li><li><a href="https://www.data-to-viz.com/graph/line.html"><strong>From Data to Viz — Line Charts</strong></a></li><li><a href="https://www.data-to-viz.com/graph/bubblemap.html"><strong>From Data to Viz — Bubble Map</strong></a></li></ul><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=59ebda3faedf" width="1" height="1" alt="">]]></content:encoded>
        </item>
    </channel>
</rss>