<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:cc="http://cyber.law.harvard.edu/rss/creativeCommonsRssModule.html">
    <channel>
        <title><![CDATA[Stories by Dominikus Baur on Medium]]></title>
        <description><![CDATA[Stories by Dominikus Baur on Medium]]></description>
        <link>https://medium.com/@dominikus?source=rss-5faacc2a4dd3------2</link>
        <image>
            <url>https://cdn-images-1.medium.com/fit/c/150/150/0*vWfH5yT9ne1F3u7w.png</url>
            <title>Stories by Dominikus Baur on Medium</title>
            <link>https://medium.com/@dominikus?source=rss-5faacc2a4dd3------2</link>
        </image>
        <generator>Medium</generator>
        <lastBuildDate>Sat, 16 May 2026 17:12:21 GMT</lastBuildDate>
        <atom:link href="https://medium.com/@dominikus/feed" rel="self" type="application/rss+xml"/>
        <webMaster><![CDATA[yourfriends@medium.com]]></webMaster>
        <atom:link href="http://medium.superfeedr.com" rel="hub"/>
        <item>
            <title><![CDATA[2Q17: How we built a dataviz of Google search interest in the German election]]></title>
            <link>https://medium.com/@dominikus/2q17-how-we-built-a-dataviz-of-google-search-interest-in-the-german-election-27475839566b?source=rss-5faacc2a4dd3------2</link>
            <guid isPermaLink="false">https://medium.com/p/27475839566b</guid>
            <category><![CDATA[data-visualization]]></category>
            <category><![CDATA[react]]></category>
            <category><![CDATA[web-development]]></category>
            <category><![CDATA[javascript]]></category>
            <category><![CDATA[programming]]></category>
            <dc:creator><![CDATA[Dominikus Baur]]></dc:creator>
            <pubDate>Tue, 07 Nov 2017 14:51:00 GMT</pubDate>
            <atom:updated>2017-11-07T14:51:00.722Z</atom:updated>
<content:encoded><![CDATA[<p>Wahl 2Q17 was a joint effort by data visualization freelancers <a href="https://medium.com/u/f50f8c4bbcbd">Moritz Stefaner</a>, <a href="https://medium.com/u/5faacc2a4dd3">Dominikus Baur</a> and <a href="https://medium.com/u/b63f2804be4c">Christian Laesser</a>, with the Google News Lab (<a href="https://medium.com/u/f0bf4c0acabf">Isa Sonnenfeld</a>, Jörg Pfeiffer and <a href="https://medium.com/u/e093dae6814e">Simon Rogers</a>) and project advisor <a href="https://medium.com/u/11fdafb23a7e">Alberto Cairo</a>.</p><blockquote><em>Our goal was to visualize the Google search interest around the German elections at the end of September 2017.</em></blockquote><p>Over the course of the project, we launched a lot of different smaller and bigger visualizations: from daily to yearly views of the top searched terms for the candidates on our project site <a href="http://2q17.de"><strong>2Q17.de</strong></a>, to embedded widgets on external sites, to <a href="http://www.2q17.de/tv-debate-fuenfkampf.html"><strong>special interfaces for live events and debates</strong></a>, to images and movies for social media.</p><p>→ Find a good overview of all the products we produced <a href="http://truth-and-beauty.net/projects/wahl-2q17">here</a>.</p><p>After previously discussing our <a href="https://medium.com/@laesser/behind-the-scenes-how-we-came-up-with-our-visualizations-of-google-search-interest-around-the-a864c3add0e9">design process in detail</a>, we now want to talk about what you can’t see in the finished project: all the various decisions that went into the actual implementation.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*1OSl6y7pqz5Va-r5.png" /></figure><p>2Q17’s main site is based on a relatively simple feed-forward mechanism: we get data from Google Trends, turn it into our own format and store it in the backend, from which the frontend can grab and display it.</p><p>The frontend itself is a 
static React + mobx-based website built with webpack.</p><p>Yet, the devil is — as always — in the details. So let’s dive right in!</p><h3>Backend</h3><p>The bigger the project, the simpler you want to keep its individual aspects. We’ve tried to stick to this premise when it came to the backend:</p><p>Starting out, we got daily snapshots of the data in its raw form directly from Google Trends (and sorry, there’s no public API available, for obvious reasons). Since this data still needed some treatment (see the “Text data needs gardening” section in the <a href="https://medium.com/@moritz_stefaner/d%C3%B6ner-charts-eye-roll-gifs-and-word-clouds-some-things-we-learned-visualizing-google-trends-2317ab8f2c1e">previous Medium post</a>), we had a dedicated Linux virtual machine set up on the <a href="https://cloud.google.com/compute/">Google Compute Engine</a> that ran various Python scripts daily to get the data into app-compatible form. The resulting files would land on <a href="https://cloud.google.com/storage/">Google Cloud Storage</a> and be directly accessed from the frontend, which itself was served from there.</p><p>One of the nice surprises was how well Google Cloud Storage performed as a super-simple web host. By <a href="https://cloud.google.com/storage/docs/hosting-static-website">tweaking the DNS settings</a>, you can very easily use it to host a static website. And the performance is great: even once tens of thousands of people hit the site after the <a href="http://www.spiegel.de/politik/deutschland/bundestagswahl-google-auswertung-zum-interesse-an-parteien-und-kandidaten-a-1169285.html">Spiegel Online timeline article</a> went live, there was no noticeable slowdown. We ended up hosting everything there, from the data to the site itself and the embeds.</p><p>Again, in the spirit of keeping things simple, we decided early on against using a database. 
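With the data living as plain files on static storage, reading it from the frontend boils down to an HTTP fetch plus a few lines of parsing, with no query layer at all. A minimal sketch of that idea (the URL and column names are invented for illustration; the project’s actual file schema isn’t shown in this article):

```javascript
// Sketch: reading a whole TSV snapshot from static storage — no database,
// just a fetch plus a little parsing. Column names here are hypothetical.
function parseTSV(text) {
  const [header, ...rows] = text.trim().split("\n");
  const fields = header.split("\t");
  return rows.map((row) => {
    const values = row.split("\t");
    return Object.fromEntries(fields.map((field, i) => [field, values[i]]));
  });
}

async function loadLatest(url) {
  const response = await fetch(url); // e.g. a "latest" file on Cloud Storage
  return parseTSV(await response.text());
}
```

Because the files are read whole and displayed as-is, this is all the “backend access” logic the client needs.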
While databases are useful for manipulating data or retrieving very specific aspects of a data set, we needed neither of these things: our backend scripts would create TSVs and the frontend would read them whole and display them.</p><p>Even when the complexity of the data grew — from daily snapshots, to weekly and monthly versions, and finally to four-hour-old “real-time” data sets — we were still fine.</p><p>There are two types of data files we’re working with: the “latest” versions, which show the latest daily or 4h data, and the “archive” versions, which contain data for every day, week and month in 2017 (up until the election).</p><p>Everything is stored as files on Cloud Storage and can be retrieved with a simple fetch operation. While the site loads the latest data automatically, changing the timeframe from days to something else triggers these on-demand requests. Similarly, fixed embeds with a specific date also load one of the archive data sets. Fortunately, these additional requests are pretty fast, so changing weeks or months only results in a very subtle delay.</p><h3>Code support during the design process</h3><p>When it comes to working with data, we’re always facing the (potential) break between ideas and data. Once you put the actual data into your designs, lofty visions might shrivel quickly. That’s also what makes datavis design closer to designing games than to designing static graphics: it’s about creating rules and seeing what pictures the data draws with them. And in the end, the data always wins.</p><p>That is also why code support and quick prototyping are critical during the initial designs — they create shorter feedback loops between ideas and the resulting data-driven graphics. But code can also work as a playground where it becomes easier and faster to explore new ideas.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/540/0*8s-zpaBApxwYqSPP." 
/></figure><p>Working with a much-maligned type of chart definitely increases the motivation to get it right. That’s why we set up a dedicated prototyping environment just to play around with various types of word clouds and their animations. While word clouds are usually found as static graphics (thanks, Wordle!), they’re arguably much more expressive in animated form. In our case, the frequent switching between sets of words (e.g., from one candidate to the next or one day to the other) just had to be supported with suitable animations. But that opened up various questions:</p><p>Where do the search terms come from? Where do they go? How can we differentiate between search terms that stay on the stage and the new or leaving ones?</p><p>Thanks to our custom prototyping environment, which let us play around with different layouts and animations in short pieces of code, we could relatively quickly explore different ideas and visual metaphors:</p><iframe src="https://cdn.embedly.com/widgets/media.html?src=https%3A%2F%2Fplayer.vimeo.com%2Fvideo%2F234825063&amp;dntp=1&amp;url=https%3A%2F%2Fvimeo.com%2F234825063&amp;image=https%3A%2F%2Fi.vimeocdn.com%2Fvideo%2F656493223_1280.jpg&amp;key=a19fcc184b9711e1b4764040d3dc5c07&amp;type=text%2Fhtml&amp;schema=vimeo" width="1920" height="1080" frameborder="0" scrolling="no"><a href="https://medium.com/media/534d8706e1974c66cdc5db3a7e06090c/href">https://medium.com/media/534d8706e1974c66cdc5db3a7e06090c/href</a></iframe><p>This environment took care of layout-independent tasks (mainly transitioning between terms and rendering them) and let us quickly churn out variations manipulating every aspect of the DOM we could think of.</p><p>After implementing various ideas, we settled on an internal rule set for those transitions. 
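The core of such a rule set can be sketched in a few lines: each switch between word sets diffs the old and new term lists into staying, entering and leaving groups, which then get their own animations. The function and group names below are our own invention for illustration, not the project’s actual code:

```javascript
// Sketch: classify terms when switching candidate or day.
// "staying" terms morph in place, "entering" ones fly in, "leaving" ones fly out.
function diffTerms(oldTerms, newTerms) {
  const oldSet = new Set(oldTerms);
  const newSet = new Set(newTerms);
  return {
    staying: newTerms.filter((t) => oldSet.has(t)),
    entering: newTerms.filter((t) => !oldSet.has(t)),
    leaving: oldTerms.filter((t) => !newSet.has(t)),
  };
}
```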
The final word cloud shows the decisions we made and how those transitions can support the visualization:</p><iframe src="https://cdn.embedly.com/widgets/media.html?src=https%3A%2F%2Fplayer.vimeo.com%2Fvideo%2F235001679&amp;dntp=1&amp;url=https%3A%2F%2Fplayer.vimeo.com%2Fvideo%2F235001679&amp;image=http%3A%2F%2Fi.vimeocdn.com%2Fvideo%2F656717802_1280.jpg&amp;key=a19fcc184b9711e1b4764040d3dc5c07&amp;type=text%2Fhtml&amp;schema=vimeo" width="1920" height="1080" frameborder="0" scrolling="no"><a href="https://medium.com/media/e5038742b06f8e5668119b1940617410/href">https://medium.com/media/e5038742b06f8e5668119b1940617410/href</a></iframe><h3>2Q17.de</h3><p>As in the last few projects we did, 2Q17’s frontend is built on a combination of <a href="https://reactjs.org/">React</a> + <a href="https://mobx.js.org/">mobx</a>.</p><p>One common problem for complex web applications is how to manage the internal state. Javascript is flexible enough to easily create spaghetti code of the highest order (it definitely feels easier to produce than the properly organized variety).</p><p>So, using a state management library can help keep the code from delving into pasta territory. While mobx is less well known than its ideological siblings <a href="http://redux.js.org">Redux</a> and <a href="https://facebook.github.io/flux/">Flux</a>, it is built on similar ideas: there’s a centralized state that controls how the application looks. Interaction, new data, etc. change the state, which is reflected in a re-rendered application.</p><p>Mobx is a lot less dogmatic when it comes to the actual implementation, though (that’s what <a href="https://github.com/mobxjs/mobx-state-tree">mobx-state-tree</a> is for), which gives it a nice, low-impact feel. It basically provides you with ways to create observable values, automatically computes other values based on them and re-renders React components when they’re affected by value changes. 
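That observe-and-react pattern can be illustrated in a few lines of plain Javascript. To be clear, this is a toy illustration of the idea, not the actual mobx API — mobx tracks which values each component actually reads and re-renders far more granularly:

```javascript
// Toy version of the observable-state idea (NOT the real mobx API):
// writing to the store notifies a subscriber, which would re-render components.
function createObservableStore(initial, onChange) {
  return new Proxy({ ...initial }, {
    set(target, key, value) {
      target[key] = value;
      onChange(target); // mobx automates this tracking and re-rendering
      return true;
    },
  });
}

// Usage: every write to uiState triggers the (hypothetical) render callback.
const rendered = [];
const uiState = createObservableStore({ candidate: "none" }, (state) =>
  rendered.push(state.candidate)
);
uiState.candidate = "merkel"; // rendered now contains "merkel"
```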
It also works great with React, which favors a “<a href="https://www.jstwister.com/post/react-stateless-functional-components-best-practices/">pure function</a>” approach anyway, where components themselves should have as little state as possible and just be dumb containers fed from a global state.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*7bWLRv0B1DtSzR3CljMqZA.png" /></figure><p>In the 2Q17 case, our overall architecture is based on two main mobx “pipelines”: one for DATA, one for STATE. The data pipeline keeps track of data loading and processing (deriving values from the TSV files), while the state pipeline manages changes through interaction: selecting a different day or candidate or clicking on one of the pulse bubbles.</p><p>Both pipelines are merged in the dataAPI class, which creates props for the React components based on info from both. For example, dataAPI’s “wordcloudTerms” function filters the wordcloud data from the dataStore according to the currently selected candidate and timeframe in uiState and spits out an easily digestible array as a prop for the React WordcloudComponent. If the values in uiState change due to clicks or taps, the wordcloudTerms function is re-evaluated and the WordcloudComponent is re-rendered with an updated array.</p><p>Mobx automatically keeps track of all variables that a given React component relies on and, through this dark magic, all changes in either of the pipelines automatically re-render affected components. So, all a component sees of a state change is that its props have changed. Sticking with the WordcloudComponent as an example, it is initially rendered with just an empty array for its “terms” prop, which causes it to stay blank. Once the data pipeline is done loading the relevant TSV files and the dataAPI with processing them into a WordCloud-compatible format, mobx triggers a re-render on the component with the new — now filled — “terms” prop. 
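As a rough sketch of what such a merge function might do — the real shapes of dataStore and uiState aren’t shown in this article, so every field name below is an assumption:

```javascript
// Sketch: a dataAPI-style function deriving WordcloudComponent props from
// both pipelines. All field names are illustrative assumptions.
function wordcloudTerms(dataStore, uiState) {
  return dataStore.terms
    .filter(
      (t) =>
        t.candidate === uiState.candidate && t.timeframe === uiState.timeframe
    )
    .map((t) => ({ text: t.text, weight: Number(t.volume) }));
}
```

Because such a function only reads from observable state and derives a plain array, re-evaluating it on every relevant change is cheap and safe.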
This results in the new terms floating into view. Further changes to the terms (e.g., by switching candidates) trigger more re-renders with new data, and so on.</p><p>This automated re-rendering really is the most convenient aspect of mobx, since developers no longer have to keep track of which components are affected by a certain state change. We even went so far as to make as many components as possible stateless (<a href="https://www.jstwister.com/post/react-stateless-functional-components-best-practices/">easier to debug, easier to re-use, </a>…) to take advantage of the centralized mobx state.</p><h3>Performance</h3><p>Facebook’s React framework is great for building complex web applications since it encourages modularity (Components) and enables a very convenient fire-and-forget rendering approach with its virtual DOM.</p><p>In reality, though, there are still some pitfalls when it comes to performance. While building a simple, non-animated website with React is very easy, having something that performs well even with hundreds of animated elements floating around is decidedly harder.</p><p>Without going too much into detail: since React is relatively naive when it comes to re-rendering components, <a href="https://reactjs.org/docs/react-component.html#shouldcomponentupdate">shouldComponentUpdate</a> is a must to tell it explicitly when a re-render is necessary. By defining this function in a component, you as a developer can decide when the component should re-render. This is crucial when hundreds of components would otherwise be re-rendered each frame even though nothing (visible) has actually changed.</p><p>Along those lines, minimizing the render functions by splitting everything up into tinier and tinier components might seem petty, but it does wonders for performance (especially when being very decisive about when to re-render them). 
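A typical shouldComponentUpdate boils down to a shallow comparison of the incoming props, roughly like this sketch (the comparison helper is our own, not something React provides):

```javascript
// Sketch: the shallow props check behind a typical shouldComponentUpdate.
function shallowEqual(a, b) {
  const keys = Object.keys(a);
  return (
    keys.length === Object.keys(b).length && keys.every((k) => a[k] === b[k])
  );
}

// Inside a React class component it would be used like this (illustrative):
// shouldComponentUpdate(nextProps) {
//   return !shallowEqual(this.props, nextProps); // skip unchanged renders
// }
```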
When you have a look at our word clouds, even single terms are actually two nested React components, to separate the position and scaling animations (which are cheap CSS transforms) from the more costly color and font animations:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*SfWmgqV7qZ-I6Abv.png" /><figcaption>Single tags are composed of two React containers: “Term” manages costly updates like opacity and color, while “TermContainer” acts as a hyperlink to the related Google query and performs the fast and frequent geometry updates.</figcaption></figure><p>While working on performance, you sometimes also learn something new about tried-and-true approaches:</p><p>Everything usually gets faster when work can be split into separate threads. Since this is only possible through <a href="https://developer.mozilla.org/en-US/docs/Web/API/Web_Workers_API/Using_web_workers">Web Workers</a> in Javascript, at some point I created a version of the word cloud that performed the <a href="https://github.com/d3/d3-force">d3-force</a> calculations in a web worker. But unfortunately, moving the data for hundreds of nodes between the background and foreground threads once per frame actually caused more overhead than performance gain (despite <a href="https://nolanlawson.com/2016/02/29/high-performance-web-worker-messages/">JSON.stringifying everything</a>). So I ended up dumping this approach, with hopes for <a href="https://developer.mozilla.org/de/docs/Web/JavaScript/Reference/Global_Objects/SharedArrayBuffer">SharedArrayBuffers</a> in future browsers.</p><p>Speaking of tried-and-true approaches: one common thing I like to do when trying to make things fast is switching from the DOM/SVG to either Canvas or even WebGL. 
With browsers becoming better and faster, however, this year it actually made sense to stick with the DOM:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/376/1*04ppBZz6HpD5j38e1vXXsg.gif" /><figcaption>Mobile word cloud animation (slowed down by 50%)</figcaption></figure><p>In the mobile word cloud, tags enter from the left and exit on the right side of the screen. And if you look closely, you can see that the tags initially even sit outside of their container, twisted a bit, thanks to some rotateY and perspective CSS magic. These effects would be very hard to recreate with Canvas or WebGL, so we decided against dropping the DOM.</p><p>What helped us with performance in this regard is that word clouds become cluttered and hard to read with less space. That’s why the number of terms (and thus the number of costly-to-render elements) shrinks along with the screen width. You can try it yourself (and feel like a web developer) by resizing the window — the smaller the window gets, the more terms leave the stage and the fewer new ones get introduced.</p><h3>Wrapping up</h3><p>If somebody had shown us the eventual extent of the 2Q17 project when we were starting out — who knows if we would have actually done it! Dozens of different data visualizations working across phones, tablets and desktops, varying timeframes with close to real-time data, and half a year of sometimes quite intense efforts led to a project that — looking back — became quite grand in its ambition.</p><p>Code was a helpful and sometimes frustrating companion along the way. By keeping things as simple as possible (skipping a complex backend, relying on React + mobx for a straightforward frontend architecture), we could postpone some of the inevitable complexity towards the end. 
Similarly, focusing optimization efforts on only those parts of the code that weren’t going to change or be discarded eased the workload and made us throw away less code.</p><p>And so, in the end, an abundance of discussions and experiments on both the <a href="https://medium.com/@laesser/behind-the-scenes-how-we-came-up-with-our-visualizations-of-google-search-interest-around-the-a864c3add0e9">design</a> and development sides let us explore our very varied data source in detail and learn almost more than we ever wanted to know about Germany’s interest in its political candidates.</p><p>This article is a joint production by <a href="https://medium.com/u/5faacc2a4dd3">Dominikus Baur</a>, <a href="https://medium.com/u/b63f2804be4c">Christian Laesser</a> and <a href="https://medium.com/u/f50f8c4bbcbd">Moritz Stefaner</a>.</p><p><em>If you enjoyed this write-up, also make sure to check out:</em></p><ul><li><em>Christian’s </em><a href="https://medium.com/@laesser/behind-the-scenes-how-we-came-up-with-our-visualizations-of-google-search-interest-around-the-a864c3add0e9"><em>behind-the-scenes article</em></a></li><li><em>Moritz’s </em><a href="https://medium.com/@moritz_stefaner/döner-charts-eye-roll-gifs-and-word-clouds-some-things-we-learned-visualizing-google-trends-2317ab8f2c1e"><em>reflections on lessons learned</em></a><em> and the portfolio page at </em><a href="http://truth-and-beauty.net/projects/wahl-2q17"><em>http://truth-and-beauty.net/projects/wahl-2q17</em></a></li><li><a href="http://2Q17.de"><em>http://2Q17.de</em></a><em> for an archived version of the site.</em></li></ul><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=27475839566b" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Big Data and the end of everyday cheating]]></title>
            <link>https://medium.com/@dominikus/big-data-and-the-end-of-everyday-cheating-124bfb2a5553?source=rss-5faacc2a4dd3------2</link>
            <guid isPermaLink="false">https://medium.com/p/124bfb2a5553</guid>
            <category><![CDATA[tech]]></category>
            <category><![CDATA[society]]></category>
            <category><![CDATA[future]]></category>
            <category><![CDATA[law]]></category>
            <category><![CDATA[big-data]]></category>
            <dc:creator><![CDATA[Dominikus Baur]]></dc:creator>
            <pubDate>Tue, 29 Aug 2017 19:53:28 GMT</pubDate>
            <atom:updated>2017-08-29T19:54:55.514Z</atom:updated>
<content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*WYfAIULglJBLE0a6DgWU4g.jpeg" /><figcaption>Photo by <a href="https://unsplash.com/photos/AU07BMLW1NA?utm_source=unsplash&amp;utm_medium=referral&amp;utm_content=creditCopyText">Adrian Williams</a> on <a href="https://unsplash.com/?utm_source=unsplash&amp;utm_medium=referral&amp;utm_content=creditCopyText">Unsplash</a></figcaption></figure><p>I’m currently thinking (and reading) a lot about the implications of a data-driven society. Data collection is gaining a foothold in our daily lives, through social media but even more so through the internet of things. All of a sudden, we’re split into two beings: our good old physical selves and outsourced digital versions in the cloud.</p><p>Pundits like to emphasize the benefits of big data and algorithms for all of us (I assume with economic motivations). We’ll get the <a href="http://archive.wired.com/science/discoveries/magazine/16-07/pb_theory">end of theory</a> and become <a href="https://www.themuse.com/advice/what-you-should-know-about-big-data-and-parenting-the-good-the-bad-and-the-ugly">better parents</a>, <a href="https://www.spontacts.com/">friends</a> and <a href="http://www.businessnewsdaily.com/7099-big-data-employee-engagement.html">employees</a>. These assurances usually leave the rest of us with a feeling of dread. The big data specter hangs above us all and we live in fear of old drinking photos showing up in job interviews or <a href="http://www.nytimes.com/2015/02/15/magazine/how-one-stupid-tweet-ruined-justine-saccos-life.html?_r=0">having our lives ruined by one stupid tweet</a>. 
Also, we’re facing a new data apartheid, with powerful data lords on the one side and us sheepish content generators on the other.</p><p>But what’s interesting for our daily lives is that big data probably means the end of everyday cheating.</p><p>In the data-less days of yore, we learned to get away with various little transgressions: crossing a red traffic light at night, lying to your boss about the tedious report or secretly having a smoke, hidden from your spouse and doctors. All small things without consequences. Till now.</p><p>Since everything will be sensorized, it will get substantially harder to create a favorable virtual identity. Various biosensors collect how little we actually move, location data shows how much time we spend at our favorite bar and our financial histories already make it hard to get a loan after past wrongdoings. And with smart everything (cities, cars, fridges) nothing will be secret any more. Never-tiring algorithms will catch every outlier in the data.</p><p>All of a sudden, everyday cheating will be futile. The city captured your car’s ID when you crossed that red light, your boss knows that you spent three hours on Facebook instead of doing the report and your wearable air quality sensor rats you out to your spouse and doctors.</p><p>On the plus side, these micro big brothers could also mean an end to privilege. All the charm and good looks in the world won’t help you land a job when the frightfully neutral algorithm deems you unworthy. Talking yourself out of a parking ticket becomes impossible with a machine. Knowing the right folks becomes useless when the data’s against you.</p><p>Along the same lines, I predict a rise in data tweaking: companies that specialize in adjusting your data to make it look more positive by hiding your wrongdoings and creating fictional noise to fool the algorithms. 
It’s the crypto wars of hackers against security companies all over again, just this time about data integrity.</p><p>But the end of everyday cheating has a much more important implication for our society’s integrity: we have to reevaluate laws and rules that at the moment only “work” because people can break them without consequence. Maybe that traffic light at that crossing really isn’t necessary, maybe sleeping in means more productivity, maybe New Yorkers are actually quite capable of jaywalking without killing themselves.</p><p>In the end, it might at least make us more honest about ourselves.</p><p><em>Originally published at </em><a href="https://do.minik.us/blog/big-data-end-of-cheating"><em>do.minik.us</em></a><em>.</em></p><p><em>If you like this article, please </em>❤/👏<em> or share it! For more like this, </em><a href="https://medium.com/@dominikus"><em>follow me on Medium</em></a><em> or </em><a href="https://twitter.com/dominikus"><em>follow me on Twitter</em></a><em> for general datavis ranting.</em></p><h4>About me</h4><figure><img alt="" src="https://cdn-images-1.medium.com/max/267/1*7uh7-db2H3z03v2ncWhNpQ.png" /><figcaption>Me, datavis-ing AF</figcaption></figure><p><em>I’m Dr. Dominikus Baur, an award-winning datavis designer and developer. You can find the </em><a href="https://do.minik.us/#projects"><em>projects I’m most proud of</em></a><em> and more on my website: </em><a href="https://do.minik.us"><em>https://do.minik.us</em></a><em>.</em></p><p><em>You have a fascinating project to work on? You want to turn these ideas into reality? </em><a href="mailto:do@minik.us"><em>Let me know</em></a><em>!</em></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=124bfb2a5553" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Silent Augmented Reality]]></title>
            <link>https://medium.com/hackernoon/silent-augmented-reality-f0f7614cab32?source=rss-5faacc2a4dd3------2</link>
            <guid isPermaLink="false">https://medium.com/p/f0f7614cab32</guid>
            <category><![CDATA[data-visualization]]></category>
            <category><![CDATA[interaction-design]]></category>
            <category><![CDATA[tech]]></category>
            <category><![CDATA[future]]></category>
            <category><![CDATA[augmented-reality]]></category>
            <dc:creator><![CDATA[Dominikus Baur]]></dc:creator>
            <pubDate>Wed, 23 Aug 2017 13:18:55 GMT</pubDate>
            <atom:updated>2019-04-29T07:24:28.357Z</atom:updated>
            <content:encoded><![CDATA[<p><em>(This is part 2. Read </em><a href="https://hackernoon.com/can-augmented-reality-solve-mobile-visualization-f06c008f8f84"><em>part 1 on augmented reality visualizations here</em></a><em>.)</em></p><p>Every time I read some article on Augmented Reality or see it pop up in a movie/TV show, it looks like this:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*KiLSnUSS-b7BTYjmHJWNWA.jpeg" /><figcaption>Still from Keiichi Matsuda’s <a href="http://hyper-reality.co/">HYPER-REALITY</a></figcaption></figure><p>Nothing screams <a href="https://hackernoon.com/tagged/future">future</a> like neon colors and flashy animations. And I don’t excuse myself for that: even in <a href="https://hackernoon.com/can-augmented-reality-solve-mobile-visualization-f06c008f8f84">my last article on how to bring data visualization to AR</a>, my examples were high in the neon department to make it obvious at first glance that This Is The Future<strong>™</strong>:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*7YGxtoI3RWn3hXNWYd9neA.jpeg" /></figure><p>It reminds me somewhat of ‘90s web design, back when blinking text and construction GIFs were all the rage.</p><p>Now, while this type of augmentation might be clearly visible and extremely flexible (pack all your data into whatever representation you want), it is also highly distracting:</p><ul><li><strong>Primary colors and animations are geared towards catching our attention</strong>. Actually focusing on tasks at hand might become hard in such an environment (imagine reading a scientific paper in a casino).</li><li><strong>Virtual objects overlap physical objects</strong>. 
Which is clearly a problem when the hidden physical object is a car speeding towards you, but can also be annoying in more benign situations, like searching for your car keys hiding behind a virtual bar chart.</li></ul><p>So what can we do about these issues?</p><p>The whole premise of this article and the <a href="https://hackernoon.com/can-augmented-reality-solve-mobile-visualization-f06c008f8f84">one before it</a> was that near-future AR setups couple see-through glasses with automatic object recognition. Which also means that these systems can shape your reality in any way they want.</p><p>They don’t have to display neon-blue blinking squares but can create whatever they want and overlay it on your reality.</p><p>How about using these powers of augmentation to create <strong>silent augmented reality</strong>? Augmentation that helps you, but becomes (mostly) seamless, blended into reality. No more constant overwhelm from animations and garish colors, but additional, useful, non-distracting information.</p><h3>Realistic virtual objects</h3><p>While it might seem required for cyberpunk cred, Augmented Reality doesn’t have to display anything neon.</p><p>Computer graphics have come far enough in the last decades to generate seemingly realistic objects and use them to represent data.</p><p>While we could, of course, just throw a neon-blue bar chart into the reader’s face, we could also take inspiration from their current surroundings and show something like this:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*8ZdaTLtOTf8qEZDw2roDLg.png" /><figcaption>The world’s most serene bar chart</figcaption></figure><p>Imagine those rocks being created by the AR engine and only visible through glasses, which means they’re just as unreal as any of the neon AR we’ve seen before. Still, they fit much better with their environment, creating none of the jarring reality breaks. 
And they’re just as good at their job of displaying data.</p><p>If the AR system is aware of a person’s surroundings, it can react with suitable virtual objects. There are massive libraries on the web filled with realistic 3D models of rocks, seashells, books, small animals, etc. — all suitable objects to be used for visualization. Combine those with some realistic-looking shadows et voilà. And yes, there are even 3D models of pies to be used for you-know-which-chart.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*ztAEwLTtoS5IoV-Soqzdhg.png" /><figcaption>Books chart or chart books? Photo by <a href="https://unsplash.com/photos/y0Fa1DEKOKs?utm_source=unsplash&amp;utm_medium=referral&amp;utm_content=creditCopyText">Samuel Zeller</a> on <a href="https://unsplash.com/?utm_source=unsplash&amp;utm_medium=referral&amp;utm_content=creditCopyText">Unsplash</a></figcaption></figure><p>To uphold the illusion of reality, these virtual objects should appear real at first glance, which makes shadows, occlusion, and perspective important. But other aspects of their behavior also have to be tightly controlled: your beach rock bar chart can’t just appear out of thin air. The virtual rocks might either drop from the sky or — maybe less irritatingly — slowly grow out of the ground. 
Physics engines for video games seem like a great fit for these problems.</p><p>If we loosen our rules a bit and allow displays that consist of seemingly real objects but are definitely nothing that you would see when you take your glasses off, other interesting things become possible.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*7ozcjrnF8nY30XgQHgxlZw.jpeg" /><figcaption>Mike Kelley: <a href="http://www.mpkelley.com/projects/">Airportraits</a></figcaption></figure><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*9z2LK8BEDC7qt_oIj54nbw.png" /><figcaption>Dennis Hlysnky: <a href="https://vimeo.com/84361066">1 2014 starlings 00011</a></figcaption></figure><p>Virtual planes demonstrate the busyness of an airport over the course of a day, just as virtual birds (in conjunction with their originators) show their flight paths.</p><p>While these examples were constructed through photo manipulation (either by directly stitching together photos of planes or combining video frames with a filter), one could imagine them happening in real time in augmented reality.</p><p>But we don’t have to stop at adding realistic virtual objects. AR can just as well be used to distort and manipulate the real world itself.</p><h3>Reality distortion</h3><p>Given that our Augmented Reality system can recognize real-world objects and create arbitrary graphics to overlay on top of them, something even wilder becomes possible:</p><p><strong>Augmented reality is not restricted to creating virtual objects situated in the real world. It can also manipulate real-world objects to make them (seemingly) shrink, grow or even disappear.</strong></p><p>Maybe you have an ugly dent in your car: since your AR glasses track where you’re looking, they could display a patch of perfectly shiny car paint over the dent, thus making it disappear. 
Similarly, objects such as the dirty laundry on your bedroom floor could be hidden by overlaying a texture of the clean floor on top of them.</p><p>And once these real-world objects are (apparently) gone, the system can generate a new representation of them — manipulated in size, shape, color etc. — and display it on top of the (hidden) object to encode data.</p><p>In case you want to declare me completely crazy now, watch this short demo by <a href="http://labs.laan.com/">Laan Labs</a>, running on an unmodified iPhone using the new ARKit:</p><iframe src="https://cdn.embedly.com/widgets/media.html?src=https%3A%2F%2Fwww.youtube.com%2Fembed%2F0yKJHKQU81I%3Ffeature%3Doembed&amp;url=http%3A%2F%2Fwww.youtube.com%2Fwatch%3Fv%3D0yKJHKQU81I&amp;image=https%3A%2F%2Fi.ytimg.com%2Fvi%2F0yKJHKQU81I%2Fhqdefault.jpg&amp;key=a19fcc184b9711e1b4764040d3dc5c07&amp;type=text%2Fhtml&amp;schema=youtube" width="854" height="480" frameborder="0" scrolling="no"><a href="https://medium.com/media/86ccfde30ad1fedb3041d056e08c8ef7/href">https://medium.com/media/86ccfde30ad1fedb3041d056e08c8ef7/href</a></iframe><p>And this is just the current state of this trick. Imagine what it will look like in ten years’ time.</p><p>This reality distortion requires object recognition as well as the ability to fill in occluded (and thus invisible) backgrounds — all things that even today’s image manipulation software is already able to do. 
Think Photoshop’s Content-Aware Fill on your nose:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*VVjRz6aMzJAm7Fg__qvgVA.png" /><figcaption>Photoshop’s Content-Aware Fill function (image by <a href="https://creativepro.com/review-adobe-photoshop-cs5/">Creative Pro</a>)</figcaption></figure><p>Once we have such a system capable of recognizing physical objects and manipulating our visual perception of them, we can create data visualizations without garish overlays or virtual objects: the actual world becomes our building material to map information to.</p><p>When we look at the basics of data visualization, there’s only a handful of so-called <em>visual variables</em> that let us encode information. They’re all based on our visual perception, on which attributes of objects we can distinguish, such as position, size, brightness, area, color, etc.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/630/1*oz71UmUxuGETCq38eh29tw.png" /><figcaption>Jacques Bertin: Visual Variables from ‘Sémiologie graphique’ (1967)</figcaption></figure><p>The concept of visual variables also makes it possible to take apart every type of chart into these basic building blocks: a scatterplot encodes data as horizontal and vertical position. We can encode an additional attribute for each item as size, thus turning it into a bubble chart, and so on.</p><p>The process of data visualization is, at its most basic, just mapping numbers from the data to these visual variables.</p><p>With the idea of visual variables and a powerful reality-distorting AR, we can create data visualizations from everyday objects:</p><h4>Color + Brightness + Texture</h4><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*_y1nY6FkLpJckDEQtcM2ZQ.png" /><figcaption>It’s ok, I’m sure it’s just sleeping.</figcaption></figure><p>These are some obvious ones — just recolor real-world objects to encode data. 
The plant on your window sill, which you’ve ignored for days, might already show hints of becoming brown, but AR datavis can really emphasize its sorry state, hopefully spurring you into action.</p><p>To improve its chances, the AR could also enhance its brightness, making it stand out more among all the other objects by your window.</p><p>Finally, AR could also change its texture, making it look more stripey or spiky, whatever might grab your attention — maybe even displaying snake-like animations slowly flowing along the plant as a last resort.</p><p>All these visual manipulations do not require getting rid of the actual object by hiding it behind some simulated background. It’s enough to create an overlay in the right size and shape on top of the actual object. The data we visualize (e.g., the dryness of the plant’s soil) determines the amount of color, brightness or texture.</p><p>To make the most of a data visualization, you’d probably also want to compare multiple objects — maybe see which of your plants actually needs water the most — something which should work well even with just digitally changing their color, brightness or texture.</p><h4>Size + Orientation + Shape</h4><p>These visual variables are where we’re moving from subtle augmentations to more drastic ones. By hiding the actual objects and distorting a virtual representation of them, AR can modify an object’s size, orientation or shape for data visualization.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*Qil-uFhD6Z7g0aCyjwmTow.jpeg" /></figure><p>Visualizing data in a supermarket doesn’t have to happen with a heat map as in the example above. 
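The mapping step described above can be made concrete with a toy sketch in JavaScript (the scale helper and the dryness-to-hue example are my own invention for illustration, not part of any real AR toolkit):

```javascript
// Map a data value from a domain onto the range of a visual variable.
// A minimal stand-in for something like d3-scale's scaleLinear.
function linearScale([d0, d1], [r0, r1]) {
  return (value) => r0 + ((value - d0) / (d1 - d0)) * (r1 - r0);
}

// Hypothetical example: soil dryness (0 = freshly watered, 1 = bone dry)
// drives the hue of the plant overlay from green (120) towards brown (30).
const drynessToHue = linearScale([0, 1], [120, 30]);

drynessToHue(0); // 120: healthy green
drynessToHue(0.5); // 75: yellowing
drynessToHue(1); // 30: alarming brown
```

Whatever renders the overlay would then apply that hue to the plant's silhouette; the point is only that the visualization itself boils down to such scale functions.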
The objects themselves could also be visually distorted to reflect data.</p><p>Maybe the neat rows of products become bar charts themselves, by distorting the height of a stack of butter.</p><p>They could also shrink in size to represent how far they had to travel to get there, borrowing a perspective metaphor (and making selecting local products much easier).</p><p>And in addition to that, they could also slightly rotate or even change their shape to encode information — be it nutritional or about their price.</p><h4>Position</h4><p>The most powerful and easiest-to-read visual variable — position — is the one which should probably be used the most sparingly in AR datavis.</p><p>All of the above reality distortions already make interaction with those (actual, physical) objects somewhat harder. It becomes difficult to take a product off the supermarket shelf if its size or orientation has been manipulated.</p><p>But what’s arguably worse is having these products floating around in space, or moving unpredictably across the shelf. What helps with readability in a datavis chart might not necessarily help with interacting with the real world.</p><p>Combine that with an additional array of the virtual realistic objects from above, and we’re deep in confusion country.</p><h3>Reality Hacking</h3><p>What all of these ideas have in common is that they’re messing with the basics of our perception. Things we’ve learned since our earliest days — that most of what we see is real, that objects can be grabbed and interacted with — no longer apply when half of the objects are virtually generated and the other half is thoroughly distorted.</p><p>These extended Augmented Reality techniques are reality hacking. Our own version of reality becomes not only machine readable but also <em>writable — </em>with all consequences. 
Which should lead designers and technologists to tread very, very carefully and ask themselves the right questions before every decision.</p><p>Being the <a href="https://hackernoon.com/tagged/technology">technology</a> optimist that I am, I hope that early reflections such as this one can push us and our technology to a good outcome. The more we reflect now on the implications of this technology, the fewer mistakes we’ll make once AR glasses with perfect object recognition and manipulation are on everyone’s noses.</p><p>So that our AR future may look more like this:</p><iframe src="https://cdn.embedly.com/widgets/media.html?src=https%3A%2F%2Fplayer.vimeo.com%2Fvideo%2F224876461%3Fapp_id%3D122963&amp;dntp=1&amp;display_name=Vimeo&amp;url=https%3A%2F%2Fvimeo.com%2F224876461&amp;image=https%3A%2F%2Fi.vimeocdn.com%2Fvideo%2F644027207_1280.jpg&amp;key=a19fcc184b9711e1b4764040d3dc5c07&amp;type=text%2Fhtml&amp;schema=vimeo" width="1920" height="901" frameborder="0" scrolling="no"><a href="https://medium.com/media/e26624195bb80bb8048e2e14bfca09b1/href">https://medium.com/media/e26624195bb80bb8048e2e14bfca09b1/href</a></iframe><p>than this:</p><iframe src="https://cdn.embedly.com/widgets/media.html?src=https%3A%2F%2Fplayer.vimeo.com%2Fvideo%2F8569187%3Fapp_id%3D122963&amp;dntp=1&amp;url=https%3A%2F%2Fvimeo.com%2F8569187&amp;image=https%3A%2F%2Fi.vimeocdn.com%2Fvideo%2F40434092_1280.jpg&amp;key=a19fcc184b9711e1b4764040d3dc5c07&amp;type=text%2Fhtml&amp;schema=vimeo" width="1280" height="720" frameborder="0" scrolling="no"><a href="https://medium.com/media/cd8a54e46ea666c5277610ba175681da/href">https://medium.com/media/cd8a54e46ea666c5277610ba175681da/href</a></iframe><p><em>If you like this article, please </em>❤/👏<em> or share it! 
For more like this, </em><a href="https://medium.com/@dominikus"><em>follow me on Medium</em></a><em> or </em><a href="https://twitter.com/dominikus"><em>follow me on Twitter</em></a><em> for general datavis ranting.</em></p><p><em>I’m deeply grateful to </em><a href="https://medium.com/u/c8d36315cca4"><em>Alice Thudt</em></a><em> for her brilliant suggestions and comments on the article.</em></p><h4>About me</h4><figure><img alt="" src="https://cdn-images-1.medium.com/max/267/1*7uh7-db2H3z03v2ncWhNpQ.png" /><figcaption>Me, datavis-ing AF</figcaption></figure><p><em>I’m Dr. Dominikus Baur, an award-winning datavis designer and developer. You can find the </em><a href="https://do.minik.us/#projects"><em>projects I’m most proud of</em></a><em> and more on my website: </em><a href="https://do.minik.us"><em>https://do.minik.us</em></a><em>.</em></p><p><em>You have a fascinating project to work on? You want to turn these ideas into reality? </em><a href="mailto:do@minik.us"><em>Let me know</em></a><em>!</em></p><iframe src="https://cdn.embedly.com/widgets/media.html?src=https%3A%2F%2Fupscri.be%2Fdde502%3Fas_embed%3Dtrue&amp;dntp=1&amp;url=https%3A%2F%2Fupscri.be%2Fhackernoon%2F&amp;key=a19fcc184b9711e1b4764040d3dc5c07&amp;type=text%2Fhtml&amp;schema=upscri" width="800" height="400" frameborder="0" scrolling="no"><a href="https://medium.com/media/3c851dac986ab6dbb2d1aaa91205a8eb/href">https://medium.com/media/3c851dac986ab6dbb2d1aaa91205a8eb/href</a></iframe><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=f0f7614cab32" width="1" height="1" alt=""><hr><p><a href="https://medium.com/hackernoon/silent-augmented-reality-f0f7614cab32">Silent Augmented Reality</a> was originally published in <a href="https://medium.com/hackernoon">HackerNoon.com</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Can Augmented Reality solve Mobile Visualization?]]></title>
            <link>https://medium.com/hackernoon/can-augmented-reality-solve-mobile-visualization-f06c008f8f84?source=rss-5faacc2a4dd3------2</link>
            <guid isPermaLink="false">https://medium.com/p/f06c008f8f84</guid>
            <category><![CDATA[data-visualization]]></category>
            <category><![CDATA[tech]]></category>
            <category><![CDATA[future]]></category>
            <category><![CDATA[interaction-design]]></category>
            <category><![CDATA[augmented-reality]]></category>
            <dc:creator><![CDATA[Dominikus Baur]]></dc:creator>
            <pubDate>Mon, 14 Aug 2017 22:03:45 GMT</pubDate>
            <atom:updated>2019-04-29T07:25:07.137Z</atom:updated>
            <content:encoded><![CDATA[<p><em>(This is part 1. Read </em><a href="https://hackernoon.com/silent-augmented-reality-f0f7614cab32"><em>part 2 on creating silent augmented reality here.</em></a><em>)</em></p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*zoZASLdqj-a0Yj9DgAs8xQ.jpeg" /><figcaption>Photo by <a href="https://flic.kr/p/8QRZrk">Samuel Huron</a></figcaption></figure><p>Data visualization on mobile devices has seemed promising since the time of the first iPhone: very capable portable computers! Innovative touch interaction! Highly localized content! Hundreds of visualizations for mobile devices exist, both as apps and as part of daily news content. But there’s one major problem that mobile visualizations haven’t been able to shake yet:</p><blockquote>There’s just never enough space.</blockquote><p>Mobile displays are necessarily small to be portable, and then there are also fingers in the way. Usually with data visualization, more screen space means better analysis: data can be shown at a higher resolution, uncovering smaller relationships and parts of the data. It also becomes possible to show multiple charts side by side with coordinated views, quickly flicking from one lens on the data to the other.</p><p>It’s the difference between having your chart on a sticky note or across two side-by-side posters.</p><p>I think that Augmented Reality (AR) could solve this problem.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/375/1*cK7YK3qfGeqig_srcTm8ng.gif" /><figcaption><a href="https://support.snapchat.com/en-US/a/lenses1">Snapchat Lenses</a></figcaption></figure><p>AR has mainly been a research direction for the last thirty-odd years, but is now slowly entering the tech mainstream. AR overlays reality with virtual information, which appears to be part of the environment (depending on your company affiliation, the principle could also be called Mixed Reality). 
Think: <a href="https://support.snapchat.com/en-US/a/lenses1">Snapchat Lenses</a> with face tracking, Microsoft’s <a href="https://youtu.be/xgakdcEzVwg?t=2m40s">HoloLens letting you play Minecraft</a> on your coffee table, or Apple’s recent <a href="https://developer.apple.com/arkit/">ARKit</a>.</p><p>When AR was initially proposed, headsets were usually bulky with low resolutions and low refresh rates. Hiccups between head movement and on-screen content made it hard to keep up the illusion of actual virtual elements in your physical world, maybe even leading to cyber sickness.</p><p>Smartphone production and improvements in <a href="https://hackernoon.com/tagged/technology">technology</a> leading to lower prices for better components have revolutionized both Virtual Reality (see Oculus) and Augmented Reality — since they’re both building on similar technologies (small, high-resolution portable displays, head tracking, etc).</p><p>So while our current AR is mostly based on the peephole metaphor (you’re looking through your phone to see face filters or Pokémon), <a href="https://hackernoon.com/tagged/future">future</a> AR — the one we’re interested in for this article — should work with a headset only — hopefully very non-intrusive glasses — leaving your hands free to interact with (augmented) reality.</p><p>So how could this future AR solve the lack of screen space for mobile visualizations?</p><p>By augmenting your reality, AR puts screens everywhere and nowhere at the same time. Screens become fully virtual — with all advantages. Lack of screen space no longer exists, since these virtual screens can potentially fill your whole field-of-view (and beyond).</p><p>Plus, just like other mobile devices, AR devices know where you are in the world (thanks to geolocation) but even more: where and what you are currently looking at! 
Combine that with automated object recognition and you have all kinds of fascinating applications for datavis opening up.</p><p>To be more specific, I can see three promising directions for these future Augmented Reality visualizations:</p><h3>Situated personal visualizations</h3><p>One of the promises of mobile visualizations has always been creating a personalized experience of data. Various apps make use of your location to, e.g., center a map on your current position (Google Maps), show restaurants around you (Yelp) or keep track of your running routes (Runkeeper).</p><p>AR visualization has the potential to become even more personal: knowing about your preferences and goals, it can display the right data at the right time. But what’s even more interesting is that everything can be <em>situated</em> in the right place:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*7YGxtoI3RWn3hXNWYd9neA.jpeg" /><figcaption>Photo by <a href="https://unsplash.com/@osmanrana">Osman Rana</a> (<a href="https://unsplash.com/photos/HOtPD7Z_74s">Unsplash</a>), augmented by the author</figcaption></figure><p>AR visualizations become part of your environment, augmenting real-world objects (and <a href="https://creators.vice.com/en_us/article/78epxb/artist-kickstarts-augmented-reality-video-series">people</a>) with relevant information, placed at just the right positions. Visualizations are no longer images displayed in little glowing boxes, but augmented textures on the world.</p><p>Imagine trying to make your way across Boston in a snow storm. Your app knows exactly where you want to go and can draw on relevant information from the internet. It also knows that you’re <em>looking</em> at a bus right now and helps you make decisions: should I get on the bus or take an Uber? Do I have to hurry? Where do I have to transfer? How long will it take? 
And will this snow ever end?</p><p>All this information is right where you need it and completely private — no one else can see what you see. It’s the ultimate expression of personal visualization.</p><p>I like to see this development as a form of empowerment, if we get it right (if not: see the <a href="https://vimeo.com/166807261">works</a> of <a href="http://km.cx/">Keiichi Matsuda</a> or various dystopian SciFi).</p><p>Similarly, AR vis can show data that’s relevant to you but maybe not to everybody.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*Qil-uFhD6Z7g0aCyjwmTow.jpeg" /><figcaption>Photo by <a href="https://flic.kr/p/9Mcxq">Flako</a>, augmented by the author</figcaption></figure><p>You might be wandering along the aisles of your supermarket, being bombarded by messages of abundance. Package design in recent years hasn’t necessarily developed towards making nutritional information more easily accessible. I value the hours I spend staring at small-print labels, the results of months of heated discussions between industry and administration.</p><p>While there are <a href="https://ndb.nal.usda.gov/ndb/">databases for nutritional information</a> freely available, typing product names into a search box, selecting the right one, checking the info and repeating it for every sparkly box on the shelf sounds … unappealing.</p><p>What if the machine could help you with that, take into account your nutritional preferences (and dietary restrictions to boot) and display a simple heat map, pointing you at just the right products? Once this filtering is done you can still go through the final candidates and make your own, much more informed, decision. And everyone gets their own custom heat map, since there’s more than enough space for that in virtuality.</p><p>What all these projects have in common is that they’re mainly interfaces for data access. 
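The filtering just described can be sketched in a few lines of JavaScript. Everything here (products, field names, thresholds) is made up for illustration; a real version would pull its numbers from a nutritional database like the one linked above:

```javascript
// Hypothetical user preferences and shelf contents, for illustration only.
const preferences = { maxSugarPer100g: 10, vegetarian: true };

const shelf = [
  { name: "Sparkly Cereal", sugarPer100g: 35, vegetarian: true },
  { name: "Plain Oats", sugarPer100g: 1, vegetarian: true },
  { name: "Gelatin Gummies", sugarPer100g: 45, vegetarian: false },
];

// Score a recognized product against the preferences; the resulting
// 0..1 score could directly drive the color of the heat map overlay.
function heatScore(product, prefs) {
  if (prefs.vegetarian && !product.vegetarian) return 0;
  const sugarPenalty = Math.min(product.sugarPer100g / prefs.maxSugarPer100g, 1);
  return 1 - sugarPenalty;
}

const overlay = shelf.map((p) => ({ name: p.name, score: heatScore(p, preferences) }));
// Plain Oats ends up glowing brightest; the gummies drop out entirely.
```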
The info is usually somewhere out there on the internet but only accessible via text search or obscure API calls. The beauty of AR with its camera-based object recognition is that the whole world becomes the interface to this data: just look at something to receive more information.</p><p>Arguably, the same way we’ve accessed information since the dawn of time.</p><h3>3D visualizations</h3><p>You heard me right, I’m also crossing that line here: I think that 3D visualizations could potentially be much more useful and even become mainstream with Augmented Reality.</p><p>3D visualizations are shunned in the datavis community, fueled by scores of bad 3D Excel charts and <a href="https://www.wired.com/2008/02/macworlds-iphon/">blatant marketing deception</a>. But if you look at the perceptual science behind them, they might actually not be that terrible (see <a href="https://eagereyes.org/blog/2016/3d-bar-charts-considered-not-that-harmful">Robert Kosara’s great discussion on his blog</a>). Sure, they suffer from occlusion and perspective distortions, but the additional spatial dimension might make up for that. Especially when we combine them with the most important feature of AR: situatedness in the real world.</p><p>Imagine having your awful, awful 3D bar chart situated on your coffee table, right in front of you. There’s occlusion (front bars occlude back bars) and distortion (back bars look relatively smaller).</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*w5IFCyoyIVw4MYa_syS0gA.jpeg" /><figcaption>A 3D visualization of me dragging boxes in SketchUp</figcaption></figure><p>But in AR, the main problem of 3D vis — the virtuality of it all — is less pronounced. The abstract bars become parts of your environment — it’s clear that the bars in the back look smaller than they actually are (compare them to the physical book next to them). Similarly, occlusion is easily solved by moving around the table, just as you would in the real world. 
Additional stereo cues (hard to show in a 2D photo) make the virtual bars seem more real than they are.</p><p>This follows the ideas of <a href="https://en.wikipedia.org/wiki/Embodied_cognition">Embodied Cognition</a>, a theory that postulates that our cognition is much more closely coupled to our bodily existence than might be apparent. With AR, you still have your own body available to you to explore data as part of your environment. This is in contrast to Virtual Reality, where you’re completely isolated from both your environment and your body — which can be highly disorienting.</p><p>Sporadic research here and there points at AR 3D vis being a promising direction (<a href="http://dl.acm.org/citation.cfm?id=1080411">Ware and Mitchell created a highly efficient 3D node chart in 2005</a>, and there’s a workshop series on <a href="http://immersiveanalytics.net/">Immersive Analytics</a>), but I figure that the proliferation of AR toolkits will lead to a lot more results in the near future.</p><p>And yes, there will be the trend of cramming the first AR apps to the brim with flashy 3D stuff. Just like the first new 3D movies insisted on always throwing virtual objects at the audience. But that will subside eventually and make 3D a viable approach in the data visualizer’s toolkit.</p><h3>Floating screens everywhere</h3><p>Finally, AR solves the Mobile Vis dilemma of never enough space by simply letting you create as many arbitrarily-sized and -shaped displays as you need. Free-floating around you, still 2D, but as big as you need them.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*FmodVbsR8NvMGPExiAypSw.jpeg" /><figcaption>Just some light network analysis for the holidays. 
Photo by <a href="https://unsplash.com/photos/IjBeth_piMY?utm_source=unsplash&amp;utm_medium=referral&amp;utm_content=creditCopyText">Aidan Meyer</a>, augmented by the author.</figcaption></figure><p>Since full AR devices have their physical screens right on your nose, they can create an infinite number of arbitrarily-sized virtual screens, situated in your environment. Head tracking lets you switch between them just as you would with physical screens: simply by turning your head.</p><p>These virtual screens are interesting when it comes to resolution — since they’re just simulations, their resolution can be as finely grained as you need it to be. The physical resolution of the AR headset always stays the same, but when there’s only part of a virtual screen visible (since you’re standing close to one), the full physical resolution is mapped to this part of the virtual screen. This works well, since our eyes are not capable of seeing everything infinitely sharply. We usually move around our environment, coming closer for in-depth inspection. Same with virtual screens — while they might be relatively low-res from afar, you can move as close as you like to see infinitely fine details.</p><p>There are some ideas in this direction — Isenberg et al. describe “<a href="https://petra.isenberg.cc/publications/papers/Isenberg_2013_HIV.pdf">Hybrid-Image Visualizations</a>” that show different types of visualizations at different viewing distances. Similarly, the static <a href="http://fatfonts.org/">Fat Fonts</a> encode multiple layers of values through symbols and brightness, showing different aspects for different viewing distances. And, of course, highly detailed paper-based visualizations inherently allow accessing the data at overview or detail levels.</p><p>These virtual 2D screens are also a great transitional technology, until all legacy applications have been mapped to an AR context (if that ever happens). 
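That resolution argument can be backed up with a little arithmetic (a toy model; the headset's pixels-per-degree value is invented): the closer you stand to a virtual screen, the more of the headset's fixed angular resolution gets spent on it.

```javascript
// Toy model of virtual screen resolution. All numbers are illustrative.
const HEADSET_PIXELS_PER_DEGREE = 20; // hypothetical angular resolution

// Angular width (in degrees) of a flat screen `widthMeters` across,
// viewed head-on from `distanceMeters` away.
function angularWidth(widthMeters, distanceMeters) {
  return (2 * Math.atan(widthMeters / 2 / distanceMeters) * 180) / Math.PI;
}

// Effective horizontal pixels the headset can dedicate to that screen.
function effectivePixels(widthMeters, distanceMeters) {
  return angularWidth(widthMeters, distanceMeters) * HEADSET_PIXELS_PER_DEGREE;
}

// The same 2 m wide virtual screen gains detail as you approach it:
effectivePixels(2, 4); // from across the room
effectivePixels(2, 0.5); // up close: several times as many pixels
```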
You can think of them as virtual monitors, same as your regular physical ones, just instantaneous, free and 100% eco-friendly.</p><p>And another advantage, especially when it comes to visualizing sensitive data — be it personal or business — in a public setting: they’re completely invisible to everyone else. The problem of “<a href="https://en.wikipedia.org/wiki/Shoulder_surfing_(computer_security)">shoulder surfing</a>” does not exist in AR, so no one can see what’s on your virtual screen, even though it might fill your whole field-of-view¹.</p><h3>Shut up and take my money</h3><p>So, how much longer until we can get to work on AR visualizations?</p><p>The release of tools like ARKit will make powerful AR apps much more common in the coming years. Another nice side effect for Apple: once display technology becomes mature (and especially thin) enough for them to release their own set of AR glasses (which <a href="https://www.theverge.com/2017/7/27/16049906/apple-augmented-reality-glasses-patent-application">their patents point at</a>), they’ll already have an App Store full of AR-compatible apps.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*0Z3fS-haZ2qY_UuDbIjJYw.png" /><figcaption>From an Apple patent on AR tech (source: <a href="https://www.theverge.com/2017/7/27/16049906/apple-augmented-reality-glasses-patent-application">The Verge</a>)</figcaption></figure><p>Microsoft’s <a href="http://www.microsoft.com/en-us/hololens">HoloLens</a> is already available, not very portable and dorky, but almost magical when you put it on. Google has <a href="https://vr.google.com/daydream/">Daydream</a> and <a href="https://developers.google.com/tango/">Tango</a>, which might make a powerful combination for Augmented Reality. 
We’ll see which of the tech giants will be the first to bring AR to the mainstream.</p><p>But in any case: smartphones are a transitional technology — we’re actually working towards a world without screens, where everything is a screen. And in this world, all the screen space you could ever want for your data visualizations is instantly available.</p><p><em>(This is part 1. Read </em><a href="https://hackernoon.com/silent-augmented-reality-f0f7614cab32"><em>part 2 on creating silent augmented reality here.</em></a><em>)</em></p><p><em>If you like this article, please </em>❤/👏<em> or share it! For more like this, </em><a href="https://medium.com/@dominikus"><em>follow me on Medium</em></a><em> or </em><a href="https://twitter.com/dominikus"><em>follow me on Twitter</em></a><em> for general datavis ranting.</em></p><h4>Notes</h4><p>¹ Admittedly, shoulder surfing <em>might</em> be possible for someone standing so awkwardly behind you that they can read what’s on the inside of your high-resolution glasses. All the spoils would be deserved.</p><h4>About me</h4><figure><img alt="" src="https://cdn-images-1.medium.com/max/267/1*7uh7-db2H3z03v2ncWhNpQ.png" /><figcaption>Me, datavis-ing AF</figcaption></figure><p><em>I’m Dr. Dominikus Baur, an award-winning datavis designer and developer. You can find the </em><a href="https://do.minik.us/#projects"><em>projects I’m most proud of</em></a><em> and more on my website: </em><a href="https://do.minik.us"><em>https://do.minik.us</em></a><em>.</em></p><p><em>You have a fascinating project to work on? You want to turn these ideas into reality? 
</em><a href="mailto:do@minik.us"><em>Let me know</em></a><em>!</em></p><iframe src="https://cdn.embedly.com/widgets/media.html?src=https%3A%2F%2Fupscri.be%2Fdde502%3Fas_embed%3Dtrue&amp;dntp=1&amp;url=https%3A%2F%2Fupscri.be%2Fhackernoon%2F&amp;key=a19fcc184b9711e1b4764040d3dc5c07&amp;type=text%2Fhtml&amp;schema=upscri" width="800" height="400" frameborder="0" scrolling="no"><a href="https://medium.com/media/3c851dac986ab6dbb2d1aaa91205a8eb/href">https://medium.com/media/3c851dac986ab6dbb2d1aaa91205a8eb/href</a></iframe><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=f06c008f8f84" width="1" height="1" alt=""><hr><p><a href="https://medium.com/hackernoon/can-augmented-reality-solve-mobile-visualization-f06c008f8f84">Can Augmented Reality solve Mobile Visualization?</a> was originally published in <a href="https://medium.com/hackernoon">HackerNoon.com</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[What Apple, Fiverr and Nigerian Scammers have in common]]></title>
            <link>https://medium.com/@dominikus/what-apple-fiverr-and-nigerian-scammers-have-in-common-befcd644be6d?source=rss-5faacc2a4dd3------2</link>
            <guid isPermaLink="false">https://medium.com/p/befcd644be6d</guid>
            <category><![CDATA[advertising]]></category>
            <category><![CDATA[marketing]]></category>
            <category><![CDATA[twitter]]></category>
            <category><![CDATA[apple]]></category>
            <category><![CDATA[technology]]></category>
            <dc:creator><![CDATA[Dominikus Baur]]></dc:creator>
            <pubDate>Mon, 12 Jun 2017 21:26:48 GMT</pubDate>
            <atom:updated>2017-06-12T21:29:48.374Z</atom:updated>
            <content:encoded><![CDATA[<p><em>Apple’s ad campaign for Planet of the Apps might be about more than outrage</em></p><p>A few days ago, Apple launched their first TV show “Planet of the Apps” to great fanfare. It’s more or less a spin-off of “Shark Tank” with teams of developers pitching their app ideas to an all-star jury with such tech icons as Jessica Alba and Gwyneth Paltrow. The winners receive funding from a VC firm, enabling them to live their dream of developing software.</p><p>While few people might have been aware of the show’s existence, this weekend’s Twitter outrage over a (by now pulled) ad for it might have changed that:</p><h3>Jason Fried on Twitter</h3><p>Pathetic... even Apple is promoting workaholism now. Check out this ad for their Planet Of The Apps show.</p><p>Reactions, at least in my Twitter feed, consistently fell on the spectrum between outrage and disgust.</p><p>Similarly, freelance marketplace Fiverr had some fun with ads on the New York Subway a few months back:</p><h3>it&#39;s B! Cavello ✊️ on Twitter</h3><p>The &quot;gig economy&quot; is literally killing us. Most depressing ad of the day goes to: @fiverr 🙃</p><p>Fiverr’s business model is like Uber for designers — instead of charging adequate and life-sustaining prices for your work, you give it away for literally a fiver. Consequently, “doers” are preferred.</p><p>In addition to the predictable Twitter outrage+disgust package, the ad also spawned an unofficial follow-up:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*lMjMSxn_hX9M1ijUB4uu0w.jpeg" /></figure><p>So, why would companies pay for ads like that? Is it just a case of there’s-no-such-thing-as-bad-publicity? Are they purposefully triggering Twitter for free exposure?</p><p>It could also be that we are just not the target audience.</p><p>A staple of the internet of the aughts was the <a href="https://en.wikipedia.org/wiki/Advance-fee_scam">419 or Nigerian scam mail</a>. 
Millions of super-rich uncles, presidents and princes had to perish in order to leave behind quadrillions of dollars for hopeful recipients. When reading one of these emails (they still trickle in from time to time) one might wonder who would fall for such an obvious scam. And that is exactly the point.</p><p>A <a href="https://www.microsoft.com/en-us/research/publication/why-do-nigerian-scammers-say-they-are-from-nigeria/">study by Cormac Herley from Microsoft</a> (and popularized by Levitt and Dubner in <a href="http://freakonomics.com/">Freakonomics</a>) discusses the basic economic problem that scammers are facing: while it’s cheap to send out millions of spam mails, you only have very limited resources for interacting with (and actually scamming) your victims. Every interaction via email or phone costs time, so by adding typos and coming up with ridiculous stories you automatically weed out people who would never fall for your scam in the first place. Only the most gullible (and greedy) remain and you can throw your emails at them, hoping for a big payout.</p><p>Herley explains that this seems like the most economically sound strategy to avoid “false positives” (that is, people who answer a scam email but on whom all the manpower spent convincing them to hand over their bank account will be wasted).</p><p>While Apple and Fiverr’s business models are obviously different from that of Nigerian scammers, their ad strategies for the above examples might work along similar lines.</p><p>Just like scammers, Planet of the Apps and Fiverr want to keep their rate of false positives as low as possible. 
They want to make sure that everyone they interact with and spend time and money on will perform as well as possible for their purpose.</p><p>App developers who in the middle of the show’s season decide that these weird little humans from their emails might be worth spending time with are to be shunned, just like designers who at some point realize that sleeping at least once in three days might be preferable to doing another website for far less than minimum wage.</p><p>By putting out ads like the ones above you make sure that such people won’t even think about applying for your service. People whose initial reaction is furiously hitting the Quote Tweet button for snark while mumbling something about work-life balance are not the targets. Not only do you avoid sorting them out later (at much higher cost), but they might even produce free, outrage-driven publicity in the process (this article not excluded).</p><p>This becomes even more effective by picking extreme, over-the-top slogans: just like Nigerian scammers wouldn’t tease you with $250 but always promise at least a googol of cash, these ads make sure that their targets are still excited about the offerings despite the most awful downsides:</p><p>It’s not that you might spend less time with friends as an app developer — you’re no longer able to take care of your children, the people who are probably most reliant on you.</p><p>It’s also not that you leave the office a little later from time to time as a designer — you completely forgo the needs of your body, burning out both mentally and physically.</p><p>And if these prospects don’t thrill you — congratulations, you just saved Apple and Fiverr a false positive.</p><p><em>If you like this article, please </em>❤<em> or share it! 
For more like this, </em><a href="https://medium.com/@dominikus"><em>follow me on Medium</em></a><em> or </em><a href="https://twitter.com/dominikus"><em>follow me on Twitter</em></a><em> for general ranting.</em></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=befcd644be6d" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Static visualizations do not exist]]></title>
            <link>https://blog.prototypr.io/static-visualizations-do-not-exist-b2b8de1ed224?source=rss-5faacc2a4dd3------2</link>
            <guid isPermaLink="false">https://medium.com/p/b2b8de1ed224</guid>
            <category><![CDATA[data-visualization]]></category>
            <category><![CDATA[data-science]]></category>
            <category><![CDATA[design]]></category>
            <category><![CDATA[infographics]]></category>
            <category><![CDATA[ux]]></category>
            <dc:creator><![CDATA[Dominikus Baur]]></dc:creator>
            <pubDate>Tue, 09 May 2017 14:16:01 GMT</pubDate>
            <atom:updated>2017-08-16T15:35:45.737Z</atom:updated>
            <content:encoded><![CDATA[<p>What we think about as static — visualizations or any other medium — does not, and even: cannot exist. Reading and understanding turns everything interactive.</p><p>Let’s take a simple example to make my point, one “static” visualization that is almost cliché: Napoleon’s march by Minard.</p><p>This graphic has been thrown around so many times since Tufte brought it up in his landmark book that you’d be hard-pressed to find a visualization course that doesn’t mention it.</p><p>Let’s look at it — once more:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*TNZYA0zClT5t_mC2.png" /><figcaption>Charles Joseph Minard: <em>Carte figurative des pertes successives en hommes de l’Armée Française dans la campagne de Russie 1812–1813</em></figcaption></figure><p>If you’re like me and you spent far too much time on this stuff, you’ll probably just brush over it. It triggers the “Napoleons march by Minard” association in your head and your good ol’ lazy brain thinks: ok, been there, done that.</p><p>Try to read it, though (do it for me — I’ll wait here in my parentheses).</p><p>If you actually read Minard’s visualization, you take in one concept after the other. You might start with the headline and give up after too much French. Then you take on the main, brownish stream. Glance at the depressing black backwards stream. Then brush over some numbers and maybe the scale at the bottom.</p><p>Having understood what’s there and how it’s organized, you actually start to take in the data. You read from left to right, because that’s how you learned to do it, absorb Polish city names, look at the smaller and smaller army arrow and the weird turns they make. Once at Moscow, you take the trip back, follow the black area-turning-line. 
After that, you realize why Minard put a clever temperature scale at the bottom, providing another helpful explanation to make sense of the disaster.</p><p>While you read the visualization, the data unfolds.</p><p>And this is no accident: it is simply the only way we can ever make sense of a visualization.</p><p>If we tried to visualize your reading of the visualization (so meta), it might look like this:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*hP0pX_0s55IOBsAINuN3ew.png" /><figcaption>Reading Minard</figcaption></figure><p>If you read a visualization, it’s no longer static. It can’t be. Just like words, reading always happens in time.</p><p>We can’t have it any other way. We as humans are temporal beings, one moment after the other until the last.</p><p>While going through time, our minds change, and with them our ideas and concepts of the world. We learn new things constantly, whether we want to or not.</p><p>Think about every visualization you’ve ever seen (I won’t wait this time). You might have the habit of splitting them into “static” and “interactive” (the ones with buttons), because that’s how you learned to do it. We humans are just amazingly fond of drawing borders.</p><p>But in this case, it does not make any sense.</p><p>Here’s a shitty infographic:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*hStxfCgYZRGgCr1X." /><figcaption>Mmmmh, <a href="http://www.ucsusa.org/clean-vehicles/electric-vehicles/northeast-electric-cars#.WRDZ3OU18vg">traffic light pie</a>…</figcaption></figure><p>So little data! So big bubbles! So little interaction!</p><p>But interaction is all there is. Think through how you would read it. Again, starting with the headline, looking at visually prominent parts, reading the big big numbers, understanding the concept and finally absorbing the data.</p><p>No matter what you’re taught — even the thinnest infographic is inherently interactive. 
It can’t be any other way.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/500/0*CpGU5I6mWkvu0k6U.jpg" /></figure><p>Reading any visualization is a two-step process:</p><p>1. Understanding the encoding<br>2. Reading the data</p><p>The first part, grasping the encoding, the visual mapping or whatever you want to call it, teaches you the meaning of all those circles, rectangles and lines. You find the connection between some topic (soldiers of the French army) and geometric shapes (width of arrows).</p><p>And usually, there’s more than one type of data to absorb. In Minard’s case, we also see the temperature, we see city names and even dates (usually, if a graphic only contains a single type of data we tend to sort it into the shitty-infographic-category — just too little data to bother).</p><p>If we have something like visualization literacy, we also come pre-equipped with a set of charts we readily recognize and understand. Making sense of something like a map or a timeline seems obvious to us, but it wasn’t always. Every type of chart has been invented at some point and we just learned it. Now, visualization designers can build on that knowledge, making this first step far easier, and come up with ever-cleverer combinations of charts.</p><p>In Minard’s example, we have a shrinking area moving along a map and a timeline at the bottom. In the shitty infographic, we have circle area/angles.</p><p>You could call this the “onboarding” part. Teaching people what they have in front of them and how to use it. If it’s done well, this step is painless.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*sVBhI0oKbWwOvmq_." /></figure><p>If it’s done poorly, people will get lost in geometry soup.</p><p>Once we understand what the shapes mean we can start reading the data. 
And while the first part — if well-done — is relatively straightforward, this second part of actually understanding what’s happening is anything but.</p><p>Depending on your preferences and your knowledge of the data and subject matter, you can start reading wherever you like. Clearly structured visualizations like Minard’s give you a good idea here — the visual shapes point you towards the right way (sorry). Visual hierarchy establishes a reading hierarchy.</p><p>But other examples aren’t as obvious:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*B0OVwe2u38kDfIotURfT8A.png" /><figcaption>New York Times: <a href="https://www.nytimes.com/interactive/2015/05/15/upshot/the-places-that-discourage-marriage-most.html">How Your Hometown Affects Your Chances of Marriage</a></figcaption></figure><p>Especially in scatterplots, every entry point makes sense. Start with the overall blob of points in the middle – the “normal” – or look at the outliers. Follow one axis or the other. Search for specific points of interest to you.</p><p>Once a visualization no longer tells a linear story, it’s choose-your-own-adventure. Any story you can find is fine — it’s your very own.</p><p>Beyond the main two-step process of understanding and reading, the order is chosen by none other than the reader.</p><p>Here’s a great example by artist David Hockney (via <a href="http://feltron.com/PhotoViz.html">Nicholas Felton’s excellent Photoviz</a> and <a href="https://www.frankchimero.com/writing/the-webs-grain/">Frank Chimero’s ‘The Web’s Grain’ essay</a>):</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*boJILAgLQxXbeCsS.jpg" /><figcaption>David Hockney: <a href="http://www.christies.com/lotfinder/Lot/david-hockney-b-1937-the-scrabble-5532678-details.aspx">The Scrabble Game</a> (1983)</figcaption></figure><p>Hockney encodes the temporal aspect of the Scrabble game through spatial geometry. 
Looking at the piece, readers decode the game’s time by looking at this part, then that, then the other one. Playing it back in their own heads, using their own eyes.</p><p>Arguably a visualization, <em>The Scrabble Game</em> again follows the two-step process, even without any explicit instructions. We see familiar shapes of faces and a Scrabble board, notice the inconsistencies and understand that time is encoded as overlapping photos. We then start reading it, jumping from picture to picture, taking in the data and forming the story.</p><p>Only a visualization no one ever sees can be static.</p><p>Visualization designers sometimes forget that visualization is a collaboration between them and the readers. And a visualization without an audience is a useless artifact. Making the encoding unnecessarily impenetrable or unlearnable, or picking extra-shallow data just to create something aesthetic, throws that collaboration away.</p><p>Someone invested time and energy into building the thing just to lock it up in a cupboard.</p><p>Even our “interactive” visualizations are nothing but static visualizations in time. If well-done, the encoding is consistent and it doesn’t matter if you filter for this-or-that first. It also doesn’t matter if you highlight this item or the other one. Nothing jumps around the screen and the encoding doesn’t morph halfway into something else.</p><p>You can restrict a reader’s journey through the data by pressing the “interactive” parts into a video, forcing them to sit through 2.5 minutes of you exploring the data for them, but again: all this happens in time, it has to, and even in a given frame, everyone looks at something else (see the famous <a href="https://www.youtube.com/watch?v=vJG698U2Mvo">Selective Attention experiments</a> in psychology).</p><p>And this is the same for any medium. Books and other textual mediums give us a relatively clear structure, which we still constantly ignore by going back and forth. 
While websites exist as infinite rectangles, people scroll up and down, focus on this and that. What we think of as a static website is really an endless series of short glimpses, bits of text and image parts, strained through our mind’s sieve.</p><p>Since visualizations are nothing but a practical form of pressing data through our eyeballs, we can’t escape our eyes’ restrictions. Rapid small eye movements (saccades) happen several times each second. Every time, we take in a small part of the world. Our brain manages to present us with the illusion of a stable environment instead of a crazy roller coaster of dashing shapes.</p><p>One level up, the same happens when reading. We take in information, we understand and then ponder it. Our minds trick us into believing that we absorbed the whole text and can take it off our bucket list, while in actuality we’ve only kept the main points — everything in bold and whatever seemed fishy to us.</p><p>Reading a text or visualization is exactly the same. Which is why visualizations can never be static, no matter what your textbook says.</p><p>We temporal beings are trapped in moments. We gather new information constantly, learn new things, but have them interact with what we already know and our more immediate memories. This beautiful dance of thoughts can only work in time.</p><p>And just as you can’t understand a visualization by merely glancing at it (<em>ooh pretty!</em>), you can’t make sense of one without taking it in in your own time.</p><p><em>If you like this article, please </em>❤<em> or share it! 
For more like this, </em><a href="https://medium.com/@dominikus"><em>follow me on Medium</em></a><em> or </em><a href="https://twitter.com/dominikus"><em>follow me on Twitter</em></a><em> for general datavis ranting.</em></p><p><em>Also, let me know what you think in the comments!</em></p><h4>About me</h4><figure><img alt="" src="https://cdn-images-1.medium.com/max/267/1*7uh7-db2H3z03v2ncWhNpQ.png" /><figcaption>Me, datavis-ing AF</figcaption></figure><p><em>I’m Dr. Dominikus Baur, an award-winning datavis designer and developer. You can find the </em><a href="https://do.minik.us/#projects"><em>projects I’m most proud of</em></a><em> and more on my website: </em><a href="https://do.minik.us"><em>https://do.minik.us</em></a><em>.</em></p><p><em>You have a fascinating project to work on? You want to turn these ideas into reality? </em><a href="mailto:do@minik.us"><em>Let me know</em></a><em>!</em></p><iframe src="https://cdn.embedly.com/widgets/media.html?src=https%3A%2F%2Fupscri.be%2Ff51076%3Fas_embed%3Dtrue&amp;dntp=1&amp;url=https%3A%2F%2Fupscri.be%2Ff51076%2F&amp;image=http%3A%2F%2Fapi.screenshotlayer.com%2Fapi%2Fcapture%3Faccess_key%3Dfe59908dad3baab69ffab249a2224b03%26viewport%3D1024x612%26width%3D1000%26url%3Dhttps%253A%252F%252Fupscri.be%252Ff51076%253Fscreenshot&amp;key=a19fcc184b9711e1b4764040d3dc5c07&amp;type=text%2Fhtml&amp;schema=upscri" width="800" height="400" frameborder="0" scrolling="no"><a href="https://medium.com/media/b85dfbb5286d8a25cf2e754b9462cf45/href">https://medium.com/media/b85dfbb5286d8a25cf2e754b9462cf45/href</a></iframe><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=b2b8de1ed224" width="1" height="1" alt=""><hr><p><a href="https://blog.prototypr.io/static-visualizations-do-not-exist-b2b8de1ed224">Static visualizations do not exist</a> was originally published in <a href="https://blog.prototypr.io">Prototypr</a> on Medium, where people are continuing the conversation by highlighting and 
responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[The superpower of interactive datavis? A micro-macro view!]]></title>
            <link>https://medium.com/@dominikus/the-superpower-of-interactive-datavis-a-micro-macro-view-4d027e3bdc71?source=rss-5faacc2a4dd3------2</link>
            <guid isPermaLink="false">https://medium.com/p/4d027e3bdc71</guid>
            <category><![CDATA[interaction-design]]></category>
            <category><![CDATA[data-science]]></category>
            <category><![CDATA[data-visualization]]></category>
            <category><![CDATA[tech]]></category>
            <category><![CDATA[design]]></category>
            <dc:creator><![CDATA[Dominikus Baur]]></dc:creator>
            <pubDate>Thu, 13 Apr 2017 14:31:01 GMT</pubDate>
            <atom:updated>2017-08-16T15:37:34.242Z</atom:updated>
            <content:encoded><![CDATA[<blockquote>“The death of one man: that is a catastrophe. One hundred thousand deaths: that is a statistic!” — Kurt Tucholsky, Französischer Witz (1925)</blockquote><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*s4hGiy1gPUAHeDZZeraufw.jpeg" /><figcaption>Photo by <a href="https://unsplash.com/@delfidelarua7?photo=vfzfavUZmfc">Delfi de la Rua</a></figcaption></figure><p><em>After </em><a href="https://medium.com/startup-grind/the-end-of-interactive-visualizations-52c585dcafcb"><em>writing about the potential death of interactive visualizations</em></a><em>, I want to touch on why they’re still absolutely worth it.</em></p><p>We as humans are notoriously bad at forming a balanced and comprehensive picture of anything more complex than our shoestrings. We’re plagued by <a href="https://betterhumans.coach.me/cognitive-bias-cheat-sheet-55a472476b18#.fp2m9kbvj">various cognitive biases</a> that span everything from an <a href="https://en.wikipedia.org/wiki/Confirmation_bias">extreme interest in things that confirm our existing ideas and beliefs</a> to <a href="https://en.wikipedia.org/wiki/Information_bias_%28psychology%29">preferring lots of completely useless data to less, more focused information</a>. One of the most notorious is our tendency to overvalue singular examples: <strong>anecdotes</strong> are our favorite way of making sense of a situation — or, more often, of shooting down unpleasant truths (I’m sure you also know someone whose grandpa was a heavy smoker and made it to the age of 93).</p><p>Fortunately (and despite all these biases), we also invented a method called <strong>statistics</strong>, which promises to sieve data and only leave pure information behind. 
After applying various types of statistical analyses (or more newfangled machine learning methods), we usually arrive at a set of numbers that clearly and unambiguously describe our data’s distribution and whether our hypotheses were right or wrong.</p><h3>Nathan LeClaire on Twitter</h3><p>NERDS: Statistics is pretty cool guys ok WORLD: whatever NERDS: :( NERDS: it&#39;s called Machine Learning now WORLD: OMG MUST HAVE IMMEDIATELY</p><p>However, statistics sometimes seems to be too much for our feeble human minds that deep down long for stories, emotions and things they can potentially grab and smear with mud.</p><p>So how do we bridge this gap? How can we relate to parts of our data while at the same time being able to see the bigger picture?</p><p>The answer is <strong>micro-macro</strong>.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*sgeHp72Uxkugh1QC60i2oQ.png" /><figcaption>Ways to describe the world: cherry-picking anecdotes, abstract descriptions from statistics and getting an overarching picture with micro-macro.</figcaption></figure><p>What I mean by <em>micro-macro</em> is trying to get a better understanding of the world by accessing it on two levels: for one, there’s the <em>micro-</em>level of anecdotes, where we get the good feeling of looking at actual, concrete aspects of the world instead of abstract mathematical descriptions. But we combine this with the <em>macro-</em>level to understand how these relatable anecdotes fit into the whole.</p><p>This dual approach enables us to estimate whether a given example represents normalcy (a stand-in for how things “usually” are) or is an outlier that does not allow conclusions about all cases.</p><p>Plus, we’re also avoiding a problem with statistics: by reducing complex data to simple numbers, we’re of course losing information. All outliers and other weird aspects of the data simply get smoothed out until we end up with a clear bell (or other) curve. 
The smoking grandpa would just never show up in a simple histogram.</p><p>Fortunately, there’s data visualization, the premier tool for enabling us to browse data sets in a <em>micro-macro</em> way.</p><h3>micro-macro + datavis = ❤</h3><p>As an example of this thoroughness of data visualization, take Periscopic’s <a href="http://guns.periscopic.com/?year=2013">U.S. Gun Deaths in 2013</a> visualization:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*nWLeHGFQn_tUlfiiW6CVYQ.png" /><figcaption>Periscopic: <a href="http://guns.periscopic.com/?year=2013">U.S. Gun Deaths in 2013</a></figcaption></figure><p>This disturbing piece shows how many people were shot in the US in 2013 and the resulting number of years “stolen” (based on US Census data). Every victim becomes one arc in the visualization, with their years alive colored orange and the rest in a somber grey. By showing every person as a visual element, the visualization not only provides a more striking image of the extent of these tragedies but also avoids reducing these people to aggregated statistical numbers (average age when shot or something). Every haunting story is still in the data and the graphic.</p><p>But these anecdotes (<em>micro</em>) come in the context of the visualization (<em>macro</em>): it’s clear that a given arc is not the whole story, that a person might have been exceptionally young or old or the norm (as morbid as that is).</p><p><em>Gun Deaths</em> is also interactive and it’s possible to hover over arcs to get the name, age and place of death of the victim. With interactivity it becomes possible to learn about these cases and also the outliers, like the <a href="http://www.npr.org/blogs/thetwo-way/2013/09/08/220306510/man-107-dies-in-shootout-with-police">story of 107-year-old Monroe Isadore</a>, apparently the oldest victim in the data set. 
But thanks to the macro-level, nobody would pull him out as a representative example for all gun victims or declare gun wounds a new major health crisis for 100+ year olds.</p><p>Data sets consist of data points. At some point, these points have been collected and aggregated. This abstraction step leads to broader but less detailed descriptions like statistics. With datavis, we’re no longer trapped on the macro<em>-</em>level — undoing the abstraction and uncovering the original data points in all their glorious micro-level nature.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*vW1gsZD3sAJvdd_QE0LoKQ.png" /></figure><p>When we’re looking at gun deaths in the US, for example, we have victims, their ages and place of death. For a proposed health care reform, we have people with specific needs, financial situations and existing insurances. If our topic is well-being across the world, we’re looking at specific regions with various attributes.</p><p>Statistics gives us the overarching patterns of the data in an abstract way. A visualization can show these patterns and more: it can even give us back the original data points.</p><p>Of course, there are limits to this approach: browsing through <em>every single data point</em> does not scale and would certainly be more confusing than helpful. <strong>But every single aspect, every little story we explore by hovering over a dot makes the data more humane and makes it easier for us to tie it back to our own experience of the world.</strong></p><p>The superpower of interactive datavis is this fluent switching between the macro-level of overarching patterns and the micro-level of raw anecdotes.</p><p>There are also certain types of charts that lend themselves more to a micro-macro approach. The more statistical types of charts like bar or line have a certain aggregation already built-in and make going back to the original data less obvious. 
Scatter plots are probably the prime example of a micro-macro-compatible chart type, with their data-points-become-circles approach.</p><h3>Enriching anecdotes</h3><p>The micro-macro idea holds an additional promise: transcending the restrictions of the visualization itself and providing an even richer picture.</p><p>When creating a data visualization, certain aspects of the data just inevitably disappear: some parts might not correspond to the patterns the designer wanted to shed light on (don’t forget that even <a href="https://medium.com/@moritz_stefaner/well-formed-data-worlds-not-storieshere-is-the-video-of-my-talk-at-visualized-presenting-the-83d2da54c2d3">datavis designers are authors</a> as <a href="https://medium.com/u/f50f8c4bbcbd">Moritz Stefaner</a> argues). Aspects that couldn’t be fit into our meager two visual dimensions were dropped. Or, since <a href="https://medium.com/@giorgialupi/data-humanism-the-revolution-will-be-visualized-31486a30dbfb">data is primarily human-made</a>, the ones that were simply not quantifiable into neat digital numbers were removed.</p><p>Nobody can keep designers from making these aspects available through interaction, though!</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*tBzzuZt5gSwJwVg-cyGZtA.png" /><figcaption>Pudding: <a href="https://pudding.cool/2017/01/making-it-big/">The Unlikely Odds of Making It Big</a></figcaption></figure><p>Pudding’s <a href="https://pudding.cool/2017/01/making-it-big/">The Unlikely Odds of Making It Big</a> is one such example. The great piece does away with the “Everybody can make it big!” narrative, using New York bands as an example.</p><p>When reading about successful bands, it’s all too common to hear about their humble origins and come to the (utterly wrong) conclusion that playing in grimy bars is always just a phase and super-stardom is just around the corner. 
This again very effectively demonstrates the dangers of emphasizing the micro without mentioning the macro: by reading scores of articles about successful bands without ever being confronted with the reality of the vast majority of musicians, we’re inevitably biased towards overestimating the chances of success.</p><p><em>Making It Big</em> clearly shows the sad reality of only 21 out of the 7000 bands in the data ever playing at bigger venues.</p><p>This new context for the anecdotes is very powerful. But the piece also provides additional context lost in the visualization: the actual music. For all the bands in the data that made it big, there are audio samples available to get an idea of what type of music they’re playing. With music being so central to a band’s success (probably), this is a great example of enriching the anecdotes while keeping the context.</p><p>The ultimate step in enriching anecdotes and enabling micro-level inspection is showing the actual raw data.</p><p>This is something we did in <a href="http://selfiecity.net/">Selfiecity</a> — a project that looks at selfies as a cultural phenomenon through a data lens. One part of it was the <a href="http://selfiecity.net/selfiexploratory/"><em>Selfiexploratory</em></a> playground where you can explore the full data set of (currently) 3840 selfies:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*AgqOB4OptHcV-sLCnMV3zg.png" /><figcaption>The Selfiexploratory in <a href="http://selfiecity.net/">Selfiecity</a></figcaption></figure><p>There’s no aggregation or reduction in complexity — except for shrinking the image resolution, the selfies at the bottom of the <em>Selfiexploratory </em>are the same ones we’ve based the overall analysis on. 
It would be hard to enrich those anecdotes even more.</p><p>Providing access to the raw data is much more than a gimmick: as <a href="https://www.vis4.net/blog/posts/in-defense-of-interactive-graphics/">Gregor Aisch argues in his excellent piece on interactive graphics</a> it can create trust through transparency. By making all the data available, the visualization becomes less of a black box and makes fact checking possible.</p><p>Since a large part of Selfiecity’s analyses was based on computer vision, we ourselves wanted to be sure that the algorithms that had extracted things like age, gender, pose, facial features and mood from the images had the right idea.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*DfH6Dtiy6ZoXOqVYNdHKow.png" /></figure><p>And the Selfiexploratory makes this fact checking a painless process:</p><p>Filter for a certain aspect of the data (for example, head tilted to the left) and see if the resulting images make sense (try spotting the feline outlier in the screenshot to the left).</p><p>If you spend some time playing with the filters in this way it becomes clear that the computer vision algorithms aren’t perfect — but definitely good enough to reach some overarching conclusions about selfies.</p><p>Showing the raw images enables even more. Similar to Pudding’s <em>Making It Big</em>, there’s more to the data than can be expressed in numbers. 
When you’re thinking about selfies, aspects like head tilt or closed eyes might not be the first associations that pop into your head — a lot of people’s first thought (mine included) is probably <em>duck face</em>.</p><p>And although there’s no reliable duck face algorithm available, having access to the raw data lets you do this analysis yourself on the micro-level: simply by browsing some pages of images with a filter for the usual suspects (<a href="http://selfiecity.net/selfiexploratory/?bar01=[8.333,20.833]&amp;bar02=[left]&amp;">young women</a>) you can make up your own mind.</p><h3>A micro-macro look at the world</h3><p>When trying to make sense of the world, all of us are overwhelmed by the wealth of data. Taking refuge in fluffy anecdotes heavily distorts our perception, while looking at statistics usually leaves us cold.</p><p>Data visualization (as part of <a href="https://medium.com/@giorgialupi/data-humanism-the-revolution-will-be-visualized-31486a30dbfb">data humanism</a>, to use <a href="https://medium.com/u/2b468a91df0f">giorgia lupi</a>’s great term) can give us the best of both worlds: showing us relatable parts from the data’s micro-level while ensuring we understand how they fit into the overarching macro picture. Going as far as showing the raw data enables fact checking and increases trust.</p><p>And by providing aspects that go beyond the limits of computers and visualization, we can enrich those anecdotes even further, thus dragging the data down from lofty analytical spheres and making sure it fits in with where it came from: our own world.</p><p><em>If you like this article, please </em>❤<em> or share it! 
For more like this, </em><a href="https://medium.com/@dominikus"><em>follow me on Medium</em></a><em> or </em><a href="https://twitter.com/dominikus"><em>follow me on Twitter</em></a><em> for general datavis ranting.</em></p><p><em>Thanks to </em><a href="https://medium.com/u/f50f8c4bbcbd"><em>Moritz Stefaner</em></a><em> for suggesting the term ‘micro-macro’ and both him and </em><a href="https://medium.com/u/c8d36315cca4"><em>Alice Thudt</em></a><em> for great feedback on the draft.</em></p><p><em>For more on this topic, read these excellent articles by fantastic people:<br></em><a href="https://medium.com/@giorgialupi/data-humanism-the-revolution-will-be-visualized-31486a30dbfb">Giorgia Lupi: Data Humanism, the Revolution will be Visualized.</a><br><a href="https://www.vis4.net/blog/posts/in-defense-of-interactive-graphics/">Gregor Aisch: In Defense of Interactive Graphics.</a></p><h4>About me</h4><figure><img alt="" src="https://cdn-images-1.medium.com/max/267/1*7uh7-db2H3z03v2ncWhNpQ.png" /><figcaption>Me, datavis-ing AF</figcaption></figure><p><em>I’m Dr. Dominikus Baur, an award-winning datavis designer and developer. You can find the </em><a href="https://do.minik.us/#projects"><em>projects I’m most proud of</em></a><em> and more on my website: </em><a href="https://do.minik.us"><em>https://do.minik.us</em></a><em>.</em></p><p><em>You have a fascinating project to work on? You want to turn these ideas into reality? </em><a href="mailto:do@minik.us"><em>Let me know</em></a><em>!</em></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=4d027e3bdc71" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[The death of interactive infographics?]]></title>
            <link>https://medium.com/startup-grind/the-end-of-interactive-visualizations-52c585dcafcb?source=rss-5faacc2a4dd3------2</link>
            <guid isPermaLink="false">https://medium.com/p/52c585dcafcb</guid>
            <category><![CDATA[talks]]></category>
            <category><![CDATA[design]]></category>
            <category><![CDATA[infographics]]></category>
            <category><![CDATA[data-visualization]]></category>
            <category><![CDATA[interaction]]></category>
            <dc:creator><![CDATA[Dominikus Baur]]></dc:creator>
            <pubDate>Mon, 13 Mar 2017 10:28:26 GMT</pubDate>
            <atom:updated>2017-08-16T15:38:40.573Z</atom:updated>
            <content:encoded><![CDATA[<p><em>(This is a write-up of the talk I gave at </em><a href="http://www.inch-conference.com/"><em>INCH Munich</em></a><em> on March 11)</em></p><p><em>(edit: </em><a href="https://www.vis4.net/blog/posts/in-defense-of-interactive-graphics/"><em>Gregor released a new blog post</em></a><em>, clarifying some of the aspects and making some great points about the benefits of interactivity)</em></p><p>Last year I was lucky enough to go to <a href="http://informationplusconference.com/">the Information+ conference in Vancouver</a> where <a href="https://driven-by-data.net/">Gregor Aisch</a>, who works at the New York Times, gave a talk about the publication’s graphics and their impact. And the <a href="https://vimeo.com/182590214">scary summary of the talk</a> was: Barely anyone interacts with the New York Times’ graphics. The New York Times makes arguably some of the best interactives in the field, which made Gregor’s talk even more depressing. His numbers — only 10–15% of people clicking on buttons, even essential ones — seem to tell you that interactives are a waste of time and money.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*iPQjn0t4CV2BuNuQvBZ0_Q.png" /></figure><p>Gregor’s editor, <a href="https://github.com/archietse/malofiej-2016/blob/master/tse-malofiej-2016-slides.pdf">Archie Tse, talked about this earlier in the year</a> at the Malofiej conference, and turned this fact into some utterly depressing rules. One of them was, for example, “<strong>If you make a tooltip or rollover, assume no one will ever see it.</strong>”</p><p>85% of page visitors simply ignore them, missing out on information hidden behind interaction. On top of that, interactives are expensive to make — they have to work across devices, using trackpads and fingers. 
They’re error-prone and can tarnish the publication’s reception within its audience.</p><p>So why even bother?</p><p>I’ve been working on data visualization for almost ten years now, first as a PhD student in Munich, then as a researcher and now as a freelancer. And if I had to name the one aspect that most fascinates me, it’s interactivity and the potential therein.</p><p>Of course, the power alone to compress complex datasets into approachable and even appealing graphics is fascinating. The craft of shaping data to perfectly fit into the interface of our visual processing systems. The sheer wonder of being able to squeeze thousands of data points into a picture most of us are intuitively able to grasp.</p><p>But interaction lets them do even more. If you think about visualizations as a mass medium, something made for huge audiences, interaction turns them into very personal tools.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*NOaeSHzJfp7ScpvOde9yOQ.png" /></figure><p>If you’re doing interaction well, it can turn your visualization from a well-made newspaper that gives you the bullet points into almost a conversation. <strong>As if you were having a tête-à-tête with an expert on the data, patient enough to explain everything to you.</strong></p><p>That’s the ideal, at least.</p><p>First of all — what do we mean by interaction?</p><p>Basically, interactive infographics describe visualization systems that have ways for the end user to change their attributes. This can be super-simple, such as changing the currently visible part of a map or tapping on a circle to get a detailed description of that data point. 
Interaction can also be more complex, though, like drawing an example of the data you’re looking for and the machine finding it.</p><p>So, it’s somewhat of a catch-all term for clicking or tapping somewhere on that digital surface.</p><p>A full definition would probably take a lot longer than this talk, so for our purposes, let’s go with this simple one:</p><p><strong>Interaction in visualizations changes the lens on the data.</strong></p><p>This can mean filtering certain datapoints, selecting a different area of the data or even changing the type of visualization altogether. The important point is: an interactive visualization is no longer static and doesn’t represent a single view on the data. Interaction enables people to adjust a visualization to their own needs and ask it different questions.</p><p>Interaction does not have to happen with clicky-things on websites.</p><p>Interaction with an infographic on a high-resolution medium such as paper can be a powerful experience.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*ksOoVV83UibSEvRvQp3YQw.png" /><figcaption><strong>Reebee Garofalo: </strong><a href="http://www.reebee.net/rock-genealogy"><strong>The Genealogy of Pop/Rock Music</strong></a></figcaption></figure><p>Edward Tufte likes to present the above chart in his workshops and let participants really drill into the data. Explore the chart not using your hands, but using your eyes. Focusing on this corner, then that. Building up an image of the data in your head. 
And even going so far as forming hypotheses in your mind and trying to confirm them by looking at the relevant section of the visualization.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*Dyfspu_17Ynik4kyVrN6Xw.png" /><figcaption>Accurat: <a href="https://medium.com/accurat-studio/the-architecture-of-a-data-visualization-470b807799b4#.xzayj27qx">The real Montalbano!</a> (for Corriere della sera)</figcaption></figure><p>The design studio <a href="https://medium.com/accurat-studio/the-architecture-of-a-data-visualization-470b807799b4">Accurat also did several brilliant infographics</a> for the newspaper ‘Corriere della sera’ and its Sunday supplement ‘La Lettura’. Highly detailed, complex graphics, with unusual ways of data encoding and presentation.</p><p>Since we now have a rough idea of what interaction can be — can we maybe find the criteria for successful interactive pieces? What are the requirements for those?</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*HuFZvUBdoKgpTBDtFEeuoQ.png" /></figure><p>While I don’t have an algorithm for that (sorry), maybe we can get some idea. My main impression, however, is: <strong>we datavis people spend too much time thinking about the interactions themselves and too little about the audience who is supposed to be using them</strong>.</p><p>And then, well, they might end up NOT using them.</p><p>We might be super excited about some clever interaction trick, but maybe we’ve already lost our audience before they even saw the graphic. So, as always in design, be aware of your assumptions and your personal bias.</p><p>Going back to the New York Times, their pieces are made for a very specific situation: dealing with news in this realm of data journalism means that speed is everything. Content has to be produced quickly and on time. 
It’s extremely hard to do an interactive graphic for something unexpected, and some of their pieces have weeks of work put into them.</p><p>Speed is also the most important aspect on the other end of the chain: how do you consume your New York Times articles? Are you sitting down for half an hour with your iPad in hand after breakfast, carefully scouring the Times’ website, making notes? Or is it more that someone on Twitter or Facebook shared an article and clicking on that link takes the same amount of time as clicking on the little ‘X’ button in the top-right?</p><h3>TIME</h3><p>Being able to really appreciate something like a visualization or even interact with it takes TIME. Which gives us a clue for one of the requirements for successful interactive infographics: you have to be aware of the audience’s context and whether they’re actually in the mood for in-depth data exploration.</p><p>Think back to the image of the newspaper versus the data expert. If you’re sending that expert to some random bus stop to talk to people who have about two minutes before their buses leave, you can imagine how much they’ll get out of it. 
And how many datapoints they will miss.</p><p>That’s why I really like doing interactive installations.</p><iframe src="https://cdn.embedly.com/widgets/media.html?src=https%3A%2F%2Fplayer.vimeo.com%2Fvideo%2F118247767&amp;url=https%3A%2F%2Fvimeo.com%2F118247767&amp;image=https%3A%2F%2Fi.vimeocdn.com%2Fvideo%2F505231997_1280.jpg&amp;key=a19fcc184b9711e1b4764040d3dc5c07&amp;type=text%2Fhtml&amp;schema=vimeo" width="1920" height="1080" frameborder="0" scrolling="no"><a href="https://medium.com/media/475d76d0111433885166c5b384735352/href">https://medium.com/media/475d76d0111433885166c5b384735352/href</a></iframe><p>At the end of 2014 I was working with Moritz Stefaner, Lev Manovich, Daniel Goddemeyer and a couple of other highly talented people on a project called <a href="http://on-broadway.nyc/">ON BROADWAY</a>, something heavy on the interaction side, with which you could browse along Broadway in New York and its associated digital data.</p><p>Last year, Moritz and I had another installation project for the Ecole Polytechnique Federale de Lausanne, the university in Lausanne, Switzerland. For their new wing they wanted to have a visualization representing the wealth of data that they had collected about their university — aspects of their teaching as well as research capabilities.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*cqmHTmsUXAMr_cEf5pZG9A.png" /></figure><p>The end result, called THE DATA MONOLITH, is a pretty massive thing. It’s over 4 meters tall, contains two touchscreens, one 4K display and a back projection at the very top. 
The touchscreens work as remote controls for the big screen, with which you can change which part of the data you’re looking at.</p><p>The visualization is organized along three perspectives on the data — PEOPLE, TOPICS and IMPACT.</p><iframe src="https://cdn.embedly.com/widgets/media.html?src=https%3A%2F%2Fplayer.vimeo.com%2Fvideo%2F194404753&amp;url=https%3A%2F%2Fvimeo.com%2F194404753&amp;image=https%3A%2F%2Fi.vimeocdn.com%2Fvideo%2F606437563_1280.jpg&amp;key=d04bfffea46d4aeda930ec88cc64b87c&amp;type=text%2Fhtml&amp;schema=vimeo" width="1920" height="1080" frameborder="0" scrolling="no"><a href="https://medium.com/media/16d8f5f5cd8cfc4055849d2db8635433/href">https://medium.com/media/16d8f5f5cd8cfc4055849d2db8635433/href</a></iframe><p>PEOPLE is about students, teachers and researchers in the university. Each person who has anything to do with the university becomes a little bubble that re-organizes itself based on the current visualization. TOPICS presents the network of researchers and research topics — almost like a neural network that maps out the university’s research interests and their corresponding brain power. Finally, there’s the IMPACT view on the data, which shows the impact that EPFL’s researchers have on the science landscape and their collaborations world-wide.</p><p>This massive dataset with its various perspectives would be too hard to boil down to specific messages — interaction is absolutely required.</p><p>Fortunately — and that’s the great thing about developing interactive installations — in such contexts people have the time. Most people go to museums or such installations in general to be entertained and learn something. They don’t mind “working” their way through the data with interaction. If presented in the right way, this interaction can even be part of the fun.</p><h3>GOALS</h3><p>In addition to time, successful interaction also has to take into account the goals of the viewer. 
And these can be pretty different.</p><p>In the EPFL Monolith case, students and teachers at the university are a big part of the audience. They might have fun trying to find themselves in the PEOPLE visualization. University visitors, though, might be more interested in getting an overall picture of the university climate and personnel. And datavis people might just be interested in breaking stuff ;)</p><p>Interaction, if done well, caters to all these groups. They’re able to shape the data based on their own interests. Like being able to ask the data expert specific questions.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*IKluQccty2BiwQuLG84hfA.png" /></figure><p>Speaking of goals, they can also be of a more practical nature. It doesn’t have to be ‘I want to learn about X’. It can also be ‘I want to do Y, show me how’. I like to call this idea <strong>Visualization As Interface</strong>. A more humane way to access the data, a middle ground between untraceable black box machine learning answers and spreadsheets.</p><p>One project from last year where I went this route together with Daniel Goddemeyer was <a href="http://subspotting.nyc/main/index.html">SUBSPOTTING</a>.</p><p>If you’ve ever been to New York and used the subway, you might have noticed that, officially, there’s no cellphone reception on the subway. But if you know where to look, you can actually find reception on the subway here and there: pockets of connectivity, where antennas from aboveground manage to penetrate down into the subway tunnels. It’s just a little hard to tell where.</p><p>That’s where Subspotting comes in. For the project, Daniel and I were interested in how extensive these hidden cellphone networks actually are. So we started looking for the corresponding datasets … but couldn’t find any. 
And finally decided to collect this data ourselves.</p><p>We had a case with four iPhones (for the four carriers) in it as a logging device. So, it was a super guerrilla approach, as you can see. And of course we needed someone to actually bring that case along the over 1,000 km of subway network found in New York. So, Daniel went on his merry way, since I wasn’t in the city… no, actually we hired a TaskRabbit to do it for us, who was pretty excited about getting paid to ride the subway, read a book and press a button at every stop.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*elgueZ49bbYtxlHRPOvOUA.png" /></figure><p>The end result was a pretty massive dataset of the cellphone reception for each of the 25 lines. After some data cleaning, we turned to paper as our favorite high-resolution visualization medium and created two posters, one highlighting the geographical aspects, the other the subway lines themselves. They’re great for exploring the data, following the lines and checking out specific carriers.</p><p>But of course they’re not much use for actual people riding the subway and looking for a cellphone signal.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*yvbMUEEXBRQnGORlPCsZuA.png" /></figure><p>For that, we created an additional app, called Subspotting. And here’s where the Visualization As Interface aspect comes in. 
Users can draw a concrete benefit from having access to this data: they can find out where on their daily commute they’ll get cell phone reception and plan accordingly.</p><iframe src="https://cdn.embedly.com/widgets/media.html?src=https%3A%2F%2Fplayer.vimeo.com%2Fvideo%2F208106577&amp;url=https%3A%2F%2Fvimeo.com%2F208106577&amp;image=https%3A%2F%2Fi.vimeocdn.com%2Fvideo%2F623342514_295x166.jpg&amp;key=d04bfffea46d4aeda930ec88cc64b87c&amp;type=text%2Fhtml&amp;schema=vimeo" width="356" height="640" frameborder="0" scrolling="no"><a href="https://medium.com/media/14f6013dc54aa560a856052198c1ddf1/href">https://medium.com/media/14f6013dc54aa560a856052198c1ddf1/href</a></iframe><p>The data is organized as a series of cards, showing each subway line as an overview or in detail. Cell phone reception strength is encoded as bar charts mapped to each part of the line. Switching lines happens by swiping left or right, moving to different parts of the line works through scrolling and you can filter for different carriers. This way, interaction enables quick access to the complex dataset and visualization makes the data understandable.</p><p>Subspotting shows that interaction can be successful if people are actually getting something out of it. When it comes to simply teaching them about a certain dataset and its aspects, the immediate benefit might not be as clear-cut. If they want to find out where exactly they will have cellphone access on their morning commute, an interactive visualization is probably the best way to give them quick access.</p><p>In which other ways than respecting their TIME and their GOALS can we make sure that people will actually be interacting with our visualization pieces?</p><p>Again: be aware of your assumptions. We datavis nerds in particular often have the dangerous assumption that everyone is as crazy about datavis as we are. 
I mean, just think back to my grandiose introduction of visualization in the beginning here.</p><p>But, of course, that’s wrong. And that’s healthy! It’s a good thing not everyone is fighting about pie charts on Twitter!</p><p>So, given that most people aren’t that interested in visualization (let’s be honest here), we have to find a way to make them CARE about it. Because if they don’t care, they won’t look at it, let alone interact with it.</p><p>And even before they start caring about it, you want to make sure that you’re not closing the door right in their face. Don’t throw bar charts at them until they close that browser tab.</p><h3>ONBOARDING</h3><p>One aspect of that is the newfangled term “user onboarding” (how to get people <em>on board</em>). Basically, what happens in the first moments after a person opens your app or website or even newspaper page. This can shape the rest of people’s experience with your infographic. And, of course, it decides whether they get frustrated, close it and go do something else.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*4l1a29x2zBU_c1rKPMWM4g.png" /></figure><p>Onboarding is something that even happens in print. If we go back to Accurat’s La Lettura visualizations, each of them has a short introductory paragraph on the top left (where you would naturally start reading, coming from a Western background) and then a section titled ‘How to read it?’. That gives you a clear idea of how to work with the graphic before diving into the details.</p><p>Other great examples of tutorials and guided exploration of visualizations are <a href="http://ncase.me/">Nicky Case</a>’s projects — if you haven’t played with ‘<a href="http://ncase.me/polygons/">Parable of the polygons</a>’, for example, you should definitely check it out. 
A clean text explanation of what this project is about plus interspersed interactive elements for exploration.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*1Ja5Iu6TL-923xv-JNoFew.png" /><figcaption>Super-thorough research</figcaption></figure><p>… and those are basically all the great examples of user onboarding in visualization that I’ve found, despite my super-thorough research.</p><p>But seriously, it’s something that I and the visualization community really could become better at.</p><p>Good onboarding is the data expert giving a short introduction to the data and what they know about it, instead of just looking at you… blankly … staring into your soul…</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*tK21Ex6UOhItYNrCx-tmRA.png" /></figure><p>But beyond simply teaching people how to read and understand your visualization, respecting their time and interests can go even further.</p><h3>CARE</h3><p>You want to make sure that they CARE about your visualization. Now, how to do that?</p><p>If we’re really <em>really</em> honest with ourselves, what’s the one thing that’s endlessly fascinating to us and that we could talk about forever?</p><p>Right. Ourselves.</p><p>So one simple trick to draw people into a visualization is to appeal to this inherent narcissism. Quickly answer the question ‘Why should I care? What is it to me? 
Why would I give any expletive about the situation in so-and-so?’</p><p>If you provide them with an answer right away, telling them what this dataset could mean to them, they might actually listen.</p><p>This is a pattern that you often see in visualizations:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*lc_M1fG8jgfxRfRWCy34xA.png" /><figcaption>BBC: <a href="http://www.bbc.com/news/world-15391515">The world at seven billion</a></figcaption></figure><p>In <a href="http://www.bbc.com/news/world-15391515">this piece by the BBC</a> about humanity crossing the mark of 7 billion people (those were the days), they ask you a few questions to arrive at your very unique and personal place in the world.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*VTCymlt800m4sF2sDP2-3Q.png" /><figcaption>New York Times: <a href="https://www.nytimes.com/2015/01/25/opinion/sunday/the-secrets-of-street-names-and-home-values.html">The Secrets of Street Names and Home Values</a></figcaption></figure><p>Here’s <a href="https://www.nytimes.com/2015/01/25/opinion/sunday/the-secrets-of-street-names-and-home-values.html">another one by the New York Times</a>, which shows you how much less your house is worth because it’s built on Main Street instead of Ocean Boulevard. Again, you can enter your own street name and explore how that changes things.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*7LCJSlmeUNTIwDLnPwoCOA.png" /></figure><p>But instead of directly asking people, you can also be more subtle about it. An easy way to get access to this type of contextual information is through the various sensors hidden in our smartphones and laptops. You could almost call that ‘passive interaction’.</p><p>Moritz Stefaner and I did a project for the OECD in 2014 called “Regional Well-Being”. 
The OECD spends a lot of time capturing factors of well-being in its member countries, and with this project they decided to dive from a national to a regional level. So it was no longer about the quality of life in Germany, but in Bavaria versus Berlin and so on.</p><p>This of course also made the dataset much more complicated — while their Better Life Index contains 11 dimensions for 35 countries, the Regional Well-Being data contains 11 dimensions for 395 regions! Since this can make the data pretty overwhelming at first, Moritz and I decided to start with something that our audience could relate to — the quality of life in their own region.</p><iframe src="https://cdn.embedly.com/widgets/media.html?src=https%3A%2F%2Fplayer.vimeo.com%2Fvideo%2F208108980&amp;url=https%3A%2F%2Fvimeo.com%2F208108980&amp;image=https%3A%2F%2Fi.vimeocdn.com%2Fvideo%2F623346382_1280.jpg&amp;key=d04bfffea46d4aeda930ec88cc64b87c&amp;type=text%2Fhtml&amp;schema=vimeo" width="1920" height="1080" frameborder="0" scrolling="no"><a href="https://medium.com/media/ce519839c48b0830dca8b7760a124ebd/href">https://medium.com/media/ce519839c48b0830dca8b7760a124ebd/href</a></iframe><p>So, when you open <a href="http://www.oecdregionalwellbeing.org">www.oecdregionalwellbeing.org</a>, your browser asks whether you want to share your current location (browsers can do that, and if you’re not comfortable with it you can also select it from a list). And the visualization then starts at this location, so you can look at how life is around you. From that starting point, you can branch out in your exploration — either looking at spatially close regions or regions that are similar to your own.</p><p>Or even go somewhere completely different. 
Starting in your own region lures you into the visualization and gives you a reason to actually care about it.</p><p>OK, with these ideas about interaction in mind, let’s go back to our original question:</p><p>Are interactive infographics dead?</p><p>My admittedly super-lame answer is: it depends (as always).<br> <br> It very much depends on your audience and their context. Maybe the heyday of interaction, this period of total excitement about all the things that had suddenly become possible, actually is over. The field of data visualization is maturing, and that always means cutting away the wild growth that has sprung from all the original possibilities.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*qC1rpC129B2szea9p7gP4Q.png" /></figure><p>Think about your audience’s TIME — do they have 30 seconds or 15 minutes?</p><p>Also think about their GOALS — what can they get out of your visualization? Is it maybe more than just fun factoids? Can they use the visualization as an interface helping them with something specific?</p><p>Finally, think about what they CARE about — guide them into the visualization, make your visualization about themselves and show them why the data is relevant to them. And if that’s not possible, maybe intricate interaction won’t be required anyway…</p><p>Once you’re done thinking about it, add your interaction or leave it out. Just doing it for interactivity’s sake doesn’t help anyone. Don’t force it on everybody just because you can.</p><p><em>If you like this article, please </em>❤/👏<em> or share it! For more like this, </em><a href="https://medium.com/@dominikus"><em>follow me on Medium</em></a><em> or </em><a href="https://twitter.com/dominikus"><em>follow me on Twitter</em></a><em> for general datavis ranting.</em></p><h4>About me</h4><figure><img alt="" src="https://cdn-images-1.medium.com/max/267/1*7uh7-db2H3z03v2ncWhNpQ.png" /><figcaption>Me, datavis-ing AF</figcaption></figure><p><em>I’m Dr. 
Dominikus Baur, an award-winning datavis designer and developer. You can find the </em><a href="https://do.minik.us/#projects"><em>projects I’m most proud of</em></a><em> and more on my website: </em><a href="https://do.minik.us"><em>https://do.minik.us</em></a><em>.</em></p><p><em>You have a fascinating project to work on? You want to turn these ideas into reality? </em><a href="mailto:do@minik.us"><em>Let me know</em></a><em>!</em></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=52c585dcafcb" width="1" height="1" alt=""><hr><p><a href="https://medium.com/startup-grind/the-end-of-interactive-visualizations-52c585dcafcb">The death of interactive infographics?</a> was originally published in <a href="https://medium.com/startup-grind">Startup Grind</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Everything on the internet is twice as long as it has to be]]></title>
            <link>https://medium.com/i-m-h-o/everything-on-the-internet-is-twice-as-long-as-it-has-to-be-1a51c63611a8?source=rss-5faacc2a4dd3------2</link>
            <guid isPermaLink="false">https://medium.com/p/1a51c63611a8</guid>
            <dc:creator><![CDATA[Dominikus Baur]]></dc:creator>
            <pubDate>Mon, 03 Jun 2013 23:12:37 GMT</pubDate>
            <atom:updated>2013-06-03T23:12:37.850Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/700/0*feHbyy_mwPQZ3ata.jpeg" /></figure><h4>Even this headline.</h4><p>Recently, I stumbled upon this interesting article about evolution or atheism or something (I don’t quite remember what it was), but I do remember that it tugged at my heartstrings in just the right way. The first few sentences just brought the point across beautifully, a point which my mind - thoroughly stressed-out by the never-ending roar of social media updates - just gladly accepted. Yes, these few sentences rang true to my predispositions and were so palatable as to provide me with some relief from the constant onslaught of rings and popups and reminders and all the other noise and let me focus - a rare experience these days. </p><p>Which is why I instantly decided to share it.</p><p>Once that thought had gotten hold of my mind, I couldn’t really focus anymore on the article itself. Yes, it went on like that, something about dinosaurs and the milky way (the author obviously knew his awesome-o-pedia), but my brain was already somewhere else. On the long end with hundreds of likes and shares and friendly comments touting me as a sharer of wisdom, an amazing human being, somebody whose existence is acknowledged. So once I hit a really nice quote at the beginning of the third paragraph or so I just copypasted that to Facebook, put the url below it, and added some deep comment. The first sentence of the third paragraph - that is as far as I got in an article I found so enticing that I decided to share it with all these friends, co-workers and conference folk I’d barely talked to.</p><p>And this is symptomatic of the entire internet: everything on the internet is twice as long as it has to be. When was the last time you actually finished reading something on the web? Thoroughly follow an author’s argument without skipping to the conclusion paragraph? 
How long is your Instapaper reading queue?</p><p>Even the web’s most peculiar result, Twitter, with its laughable 140-character posts, follows this rule: while the first half of a tweet contains actual information, the second is invariably cluttered with snark, emoticons, links or that bane of the internet - hashtags. My rule of thumb is: in 90% of all cases you’re fine with just reading the first half of everything you find on the internet - even tweets.</p><p>Because, seriously, would any author ever be so cruel as to start their post with some well-meaning, insightful observation before diving into utterly destroying that argument in the latter half of their article? Thus fooling all people who had just (seemingly) read <em>enough</em> to get the gist of it? And share it to all their Facebook friends?</p><p>Or just repeat the same sentence over and over again ad nauseam? Repeat the same sentence over and over again ad nauseam? Repeat the same sentence over and over again ad nauseam? While interspersing some new words to make it look different? But still repeat the same sentence over and over again ad nauseam? Repeat the same sentence over and over again ad nauseam? Repeat the same sentence over and over again ad nauseam?</p><p>That could never happen.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=1a51c63611a8" width="1" height="1" alt=""><hr><p><a href="https://medium.com/i-m-h-o/everything-on-the-internet-is-twice-as-long-as-it-has-to-be-1a51c63611a8">Everything on the internet is twice as long as it has to be</a> was originally published in <a href="https://medium.com/i-m-h-o">I. M. H. O.</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
    </channel>
</rss>