
Moi Helsinki: In Favour of Design for Public Use*
Design for Public Use & Localisation of Social Media
* Hereafter I will refer to citizens as the public, while the agents commonly known as “the public sector” will be referred to as the state administration or the government.
CONTENTS
Summary
1/ In Favour of Design for Public Use
2/ Media Design For Public
3/ From Knowledge to Knowing
4/ Moi Helsinki: Project Work
Discussion & Further Work
You may download the full text [PDF] for free.
It is March 4th, 2016, and today’s meme on the English-speaking web is “Donald Trump defending his penis size at the Republican debate”. A populist bullet flying through the news feeds, it was probably carefully crafted backstage at the Republican Party’s PR office. In the video, the audience meets the punchline with cheers. It is exactly one month before four copies of this work land on the table of my university department. Graduation brings one closer to being an adult. An adult citizen should be interested in acting politically and in participating in building the state. I find it difficult to be interested in the staged performance that is broadcast to us today. On the other hand, today’s digital infrastructure provides an opening for engagement that has never been possible before.
In one of his YouTube videos, Hank Green tells the story of “Mass Incarceration in the US” (VlogBrothers, 2014). He begins by explaining how the video came to be. A company called Visually contacted him asking: “If you could do a high quality animated video about any issue in the world, what would you choose?”. Luckily the choice fell not on the political meme trending at that moment, but on an actual problem that the United States is going through. Over one and a half million people watched the final video, making it one of the top 50 on the channel. While the most popular video on the VlogBrothers channel is still about the mating habits of giraffes, the fourth on the list is “Why Are American Health Care Costs So High?”. The simplicity of YouTube broadcasting made it possible for the Green brothers to focus on constructing stories based on facts and promoting education on topics that matter to their community. In the case of the VlogBrothers, the community is not an abstraction. Through the hub known as Nerdfighteria, the Green brothers encourage their followers to unite online. Building the community this way keeps the focus on knowledge rather than on the VlogBrothers’ personalities (Wikipedia, 2016a).

The two episodes described above are representative of the gap between the “actual politics” and the politics that take place behind the scenes of the election process. This work is an exploration of how such gaps can be addressed. Figure 1 introduces the names of the active societal agents that will be referred to throughout this writing. The public is a group of citizens and residents, which may consist of single individuals, families or community groups. The public elects the state administration of lawmakers, who are called to tailor a regime that works in the best interest of the public. The market is composed of the private companies serving the interests of both the government and the public. The market provides services in return for material profit. This simplification of the social model allows us to talk further about the complexities of knowledge aggregation in social networks.
The academic discussion regarding the role of technology in shaping society (and vice versa) has been prominent for the last two decades. The work of Castells (2006), for example, points to the networking opportunities brought by contemporary technologies and to how these opportunities shape the current social dialogue. In his work, Castells highlights both sides of this change: the liberation facilitated by technology and the oppressive response to this liberation. Following this track of discussion, other scholars have noticed how the organisational and economic structures of the emerging society affect everybody but are not inclusive of everybody. This inequality eventually spreads from the networked economy to networked access to knowledge and information (Gerloff, 2006).
Networked data has its hidden risks, like any tool that may fall into the wrong hands. Newton Lee has sounded probably one of the most severe alerts on this subject. Lee (2014) speaks of the Total Information Awareness (TIA) program carried out by the US government and of its legacy in today’s social networks. Figure 2 builds upon the earlier model of the social infrastructure to illustrate this conflict of opinions (Lee, 2014).

The careless treatment of personal privacy that once made the media rage against TIA is present in social networks. Earlier, the public was offered national security in exchange for cuts to personal privacy. Today, using extended network services comes at the same price. Facebook is one of the digital services whose position on this matter is known. Mark Zuckerberg famously stated that the social norm “is just something that has evolved over time” and that it is time to reconsider the boundaries of privacy (Johnson and Vegas, 2010).
Other researchers voice pragmatic scepticism, noting that the tools and skills of digital participation are not commonly present among the general audience (Kennedy and Moss, 2015). Today, scholars often point out the need for the government to embrace inclusive practices and collaborative structures (Design Commission, 2014; Mossberger et al., 2013; Oliveira and Welch, 2013). For example, Russon Gilman (2015) calls for building environments that allow meaningful participation of citizens in shaping the agenda of the state administration. Much like the Green brothers’ initiatives, Russon Gilman notes the importance of creating “opportunities for citizens to self-organize to improve governance outcomes“. The motivation driving such self-initiated action among citizens is a matter of particular interest. Academic studies show that the motivations for the public to participate in government-driven innovation are similar to those of open source: “fun and enjoyment alongside with intellectual challenge and status” (Juell-Skielse et al., 2014). This observation means that public participation requires mechanisms of support to facilitate citizen engagement. Moreover, previous attempts by governments to become digitally available to the general audience have often failed to provide data in an accessible manner (Evans and Campos, 2013).

Media design for public use, which I argue for in this work, is a combination of a communication strategy and the provision of a support system. Figure 3 illustrates the vision of this design objective. The communication strategy must evolve beyond merely serving information assets. The data in the scheme is a shared resource that is built and used cooperatively by the three core agent groups. The proposed goal is to engage the public in extracting value from the data. While the market keeps providing digital services to the audience, it is in the public’s best interest not to tie this support system to the profit it generates. Therefore, the means necessary for active engagement (the support system) should be the responsibility of the state. This environment may look simple on the graph, with only a few focal points. However, there are complicated relationship ties between the agents in this social ecosystem. Social media has reinforced communication links within the public to an extent that has never been possible before (Castells, 2006).
At the same time, establishing a connection with the state seems impossible for many citizens (Wisniewski, 2015). On top of this complexity, the social technologies that draw people together generate data, which can eventually be turned into a unique kind of knowledge. However, private companies on the market own this innovation, and the fruit of the resulting knowledge is locked away from the public (Balkan, 2014; Kennedy and Moss, 2015). The following paragraphs open up the context of the overload in which we may find ourselves today.
“IF YOU DON’T APPEAR UNTIL PAGE FIVE OF THE GOOGLE SEARCH RESULTS FOR A SPECIFIC TERM, YOU MIGHT AS WELL NOT EXIST AT ALL.” (INGRAM, 2014)
Age of Overload
A Google ID is more effective and useful in daily life than a state ID (Bratton, 2014). This is a strong and simple claim, aspirational in its simplicity. Tim Wisniewski (2015) notes that the illusion of government as a locked monolith leads to high disengagement among voters and civilians. Immediate interface reactions seem to disperse civic activities into the silos of their social network connections. Parliamentary representation may seem illusory, as after one has voted there is little opportunity to influence the process (Russon Gilman, 2015).

In Brazil, where participation in elections is compulsory for all citizens, the project Noncancellable Newsletter (newsletterincancelavel.com.br) reflects upon the visibility of politicians’ work as a succession of individual acts. After the state administration is elected, citizens are informed about the work of their candidates only occasionally. Noncancellable Newsletter, in line with many similar initiatives, aims to illustrate the opening for political engagement of the general audience that the web has to offer. The authors of the project question whether the publics themselves pay enough attention to the politicians they have granted their vote. The service allows a voter to subscribe to news about a chosen group of political figures. A generated news block is sent to the user throughout the four years the administration stays in power. There is no way to unsubscribe from this newsletter. The project’s creative writer, Tiago Pereira (2015), described the context of the work in his portfolio as follows: “Elections in a country where short-term memory and corruption dominate politics”. The project was released with the slogan: “Politicians, you are being watched”.

Castells (2006) points out that in the case of the Internet “the first thousands of users, were, to a large extent, the producers of the technology.” When the World Wide Web was being envisioned, its creators aimed to unify data and build a common stack of it, for everyone to access and edit according to their needs. This tendency, however, faded for a while until blogs and wikis re-established it to some extent (CERN, 2008). The decades in which the web has been accessible have led to an accumulation of collective intellectual effort online. Today, finding a needle in a haystack can be an easier task than finding online the single bit of information you need. Artificial intelligence and technologies like artificial neural networks have shown promising advances lately (Pannu, 2015). It is likely that open search requests in one’s native language will become possible soon. The development of knowledge management on the technology market is without a doubt good news. An aspect of concern at this moment in time, however, is the proprietary nature of this knowledge accumulation. We are currently in a state of digital oligopoly, the domination of the market by a few firms (Ingram, 2014).
Companies like Google, Facebook and Amazon direct vast amounts of web traffic. Google claims (2011) to aim for “focus” and “effortlessness” in its experience design. The company interprets filtering the data as “getting all the other clutter out of your way”. This objective does not change the fact that Google’s algorithm is proprietary, and even specialists cannot tell why some pages drop in position in the search output (Ingram, 2014). In effect, this means that users can never fully engage with what the system serves as the search result.
“A POST-DIGITAL SENSIBILITY OF MUSIC COMES WITH AN INHERENT QUESTIONING OF THE OWNERSHIP OF THE SPACES WHERE MUSIC TAKES PLACE.” (FLEISCHER, 2013)
In his article for e-flux, Rasmus Fleischer (2013) develops the thoughts he earlier announced in his Post-digital Manifesto (2009). The article discusses music production as a political act in a world that has been shaken by technology, and by new, efficient forms of mass media in particular. The understanding of positions of power has shifted in both politics and music production; the path has opened for many opinions to pour out, as well as for many musicians to digest existing musical footage and use the available channels. The ability of many to express themselves fully is leading to a series of “local dictatorships” emerging in parallel. Listeners (of political or musical performance alike) are left to move freely between these dictatorships. Fleischer concludes that easy access to music copies is significantly lowering the production threshold for a wider audience.

One of the outcomes of disengaging from data and letting the machine process it for us is the opening for algorithmic crimes (Goodman, 2016). “Black box” algorithms are present in all aspects of our lives and often drive major decisions in markets and statistics. A notable example occurred on April 23rd, 2013, when the Twitter account of the Associated Press was hacked (Foster, 2013). A note said: “Breaking: Two Explosions in the White House and Barack Obama is injured.” The US market instantaneously lost billions in value. The loss was temporary, and the value was regained after the alert was found to be untrue. The Syrian Electronic Army claimed responsibility for the attack. The target of the attack was not so much the Twitter feed of the trusted agency itself as the bots and algorithms running the market at a speed that no human could manually control (Goodman, 2016).

There is a multitude of factors that make things valuable to us, and these things become the drivers of an individual’s actions. The everyday experiences we live through are deeply woven into how we see the world around us. Klaus Krippendorff (2005) has described this awakening of meaning-driven artefact design as the “semantic turn”. From an early age we learn to interpret objects, events and behaviours in relation to their context. We learn to read from left to right, and later some of us relearn in order to master a foreign language. Krippendorff opens by describing the traditional interface of an artefact as one that is meant to be learned. Nothing in the visual shape of a bus tells the user that it can take them from point A to point B; this is knowledge we have to obtain from the map. When it comes to learning new concepts, traditional design relies on the user’s patience. The semantic approach to design that Krippendorff advocates begins instead from the user’s experience of the object. The interface should be modelled in the way that best helps the individual make efficient use of the artefact. In his writing, Krippendorff does not limit the definition of the artefact: a service can be as much of an artefact as a coffee mug. Seen from this perspective, the design of communication models deserves a semantic turn of its own. The ways in which we present and read everyday information influence the quality and intensity of our discussions (Galloway, 2012). Noncancellable Newsletter is an example of work aimed at changing the level of discussion among the public.
The state often treats modern communication technologies in a utilitarian manner, engaging with them largely in a share-only mode (Castells, 2006). In his work on the networked society, Castells (2006) points out that the change in communication has already taken place and the environment is set for active public engagement. He claims that talking of this change as futuristic is counter-productive, and argues that the state is missing out on the full potential of communication technology. Another futuristic vision popular today is The Hunger Games dystopia, depicting a society with a death match broadcast annually. This metaphor is every bit as real as the one Castells defends when talking of the change in public communication dynamics. Much of our discourse circles around the default definition of a “single winner”, while everyone else is a loser. In her book The Tyranny of Choice (2011), Renata Salecl paints a picture of the anxiety that the fear of an imperfect choice drives us into. She describes how the desire to buy “the perfect cheese”, combined with the pressure of the context, brings a person to devastation. Making wrong choices feels nearly worse than not making any choice at all. In day-to-day experience there are shades of grey to this struggle. When the state and the market have to react to technological advancement, companies and governments face multiple choices regarding further development (Goldstein et al., 2013). Decisions at this level are most often linked back to the semantics dominating the given society at the given moment (Krippendorff, 2005; Reilly, 2014). The subjectivity of these decisions can and should be challenged if the aim is to represent the public in a just manner (Visvanathan, 2009).
Semantic Order
For the last several years, young veterans returning from service overseas have faced unemployment in the US. In their report for the Center for American Progress, Chopra and Gurwitz (2014) mention that “as of August 2014, nearly 15 percent of young veterans ages 20 to 24 were unemployed — a rate 4.2 percentage points higher than their nonveteran counterparts.” This gap is growing because the skills of veterans are hard for employers to translate outside the military context. The resolution of this problem requires multiple changes. In his interview (2014), Aneesh Chopra mentions that while serving as the CTO of the US he requested consultations from specialists in media and technology on various questions. The recommended solution aggregated many changes in the treatment of data online. Instead of investing in a new IT system to create a simple listing of jobs suitable for veterans, an open data collaboration was initiated. The suggested objective was to organize a “job posting” schema for interested employers, which would allow an employer to quickly signal availability for the cause. The White House administration gathered the leaders of Schema.org (a consortium including Google, Yahoo, and Microsoft), suggesting the implementation of a common meta-tag (Chopra and Gurwitz, 2014). The launch of the initiative rapidly tagged over 500,000 available jobs with the suggested microdata.
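To make the mechanism above concrete, the sketch below shows what a veteran-friendly job posting could look like when expressed with the public schema.org JobPosting vocabulary. The specific values and the veteran-specific "specialCommitments" tag are illustrative assumptions on my part; the exact markup agreed upon by the initiative may have differed.

```python
import json

# A minimal sketch of a schema.org JobPosting record expressed as JSON-LD.
# Property names follow the public schema.org vocabulary; the concrete values
# and the veteran-specific tag below are assumptions for illustration only.
job_posting = {
    "@context": "https://schema.org",
    "@type": "JobPosting",
    "title": "Logistics Coordinator",
    "datePosted": "2014-08-01",
    "hiringOrganization": {"@type": "Organization", "name": "Example Corp"},
    "jobLocation": {
        "@type": "Place",
        "address": {
            "@type": "PostalAddress",
            "addressLocality": "Philadelphia",
            "addressRegion": "PA",
        },
    },
    # The shared signal discussed above: one common tag that any employer can
    # add so that search engines crawling the page can pick the posting up.
    "specialCommitments": "VeteranCommit",
}

# Embedding this block in a <script type="application/ld+json"> element makes
# the posting machine-readable for any crawler that understands schema.org.
print(json.dumps(job_posting, indent=2))
```

The point of the design is that no new centralised IT system is needed: the signal lives inside the pages employers already publish.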
“WE EXIST AT THE INTERSECTION OF TECHNOLOGY AND SOCIAL ISSUES.” MARK ZUCKERBERG (LEE, 2014)
In his late prose, Anton Chekhov built the scene around an idea rather than leading directly towards it (Shapiro, 2015). The reader was left to dig for the sense and own the result. The meta-level signs in the context of the play served as interpretation guidelines. Similar thinking lies behind the definition of the semantic web as a network of data linked together through meaning. This pre-scripted association allows a machine or a person to navigate through the unending chain of the database (W3C, 2013). The original idea did not mean that a machine would comprehend the data; the semantic web was supposed to allow intelligent processing of data. The team developing the concept continues the work. However, there are voices in the field arguing that the idea of the semantic web has failed to deliver on its promise (Shadbolt et al., 2006). The co-founder of Wikipedia, Jimmy Wales, discussing this topic, has pointed out that the system did not come fully equipped with tools ready for implementation (Big Think, 2012). Wales noted that, in his opinion, keyword search does a decent job of data retrieval with no need for the machine to analyse the content of the page.

The majority may agree with Wales, who finds keyword search adequate. At the same time, the big players of the web keep pushing the concept further. Google has taken the first steps in moving “from being an information engine to becoming a knowledge engine” (Google, 2012). Google introduced the Knowledge Graph as both a method and a growing database proprietary to the system. Other large search engines like Microsoft Bing and Yahoo! are known to be working on similar projects. Much as in the case of the semantic web, the Google Knowledge Graph interprets some of the most popular search queries as nodes of larger data networks. Structured information related to these nodes is presented to the user in a sidebar next to the search results. For example, the query “Da Vinci” will return a list of the artist’s works along with a list of other painters of the era that the user might not have discovered yet. Facebook started experimenting with a similar approach a year later, releasing Facebook Graph Search in collaboration with Microsoft Bing in 2013 (Facebook, 2013). This update allowed users to type in search requests in natural language (e.g. “my posts from the last year”). The way to these changes was paved by the Facebook Platform launched in 2010 (Fang, 2011). This platform utilized the model of representation that, in the context of the Internet, became known as the social graph (Wikipedia, 2016b). The model behind Facebook Graph Search derives from graph theory and uses mathematical analysis rather than relational representation to generate the mapping of the network. Mark Zuckerberg, the founder of Facebook, has announced that one of Facebook’s objectives is to expand the use of this network outside the project (Farber, 2007).

The Facebook Graph API is the primary way for developers to get data in and out of the platform (Facebook, 2016a). It is a low-level HTTP-based API which returns information from the social graphs accumulated by Facebook. With close to zero development effort, static blocks of information can be extracted through the Graph API Explorer. The system stores information in “objects”. This model bundles data assets together as attributes of a single entity (e.g. a “user” object must have an id stored as a number, and the first and last names stored as individual text strings) (Facebook, 2016a).
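As a minimal sketch of the object model described above, the snippet below reads a “user” object over the HTTP-based Graph API. The field names (id, first_name, last_name) follow the pattern just discussed; the API version and the access token are placeholders, and a real call would require a valid token generated, for instance, in the Graph API Explorer.

```python
import json
import urllib.request

# Placeholder token: a real request needs a valid token obtained, e.g., via
# the Graph API Explorer. The version segment is likewise only an assumption.
ACCESS_TOKEN = "YOUR_ACCESS_TOKEN"

# Request a single "user" object with a small selection of its attributes.
url = (
    "https://graph.facebook.com/v2.5/me"
    "?fields=id,first_name,last_name"
    f"&access_token={ACCESS_TOKEN}"
)

with urllib.request.urlopen(url) as response:
    user = json.load(response)

# The returned object bundles the attributes of one entity:
# an id (serialised as text) and the names as individual strings.
print(user.get("id"), user.get("first_name"), user.get("last_name"))
```

Even this small request illustrates the asymmetry taken up in the next section: the data is systematically accessible to a developer holding a token, while the general user rarely encounters it in this form.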
Objective Web
“Wrangling APIs, scraping, and analysing big swathes of data is a skill set generally restricted to those with a computational background.” Boyd and Crawford (Kennedy and Moss, 2015)
A general user of the Facebook website is not aware that her data is accessible in such systematic ways. Moreover, Facebook Graph Search, which could have made this evident, has failed as a product (Stone and Frier, 2014). Facebook’s objective of opening up data to developers does not necessarily extend to its operations with users. The system’s algorithms remain closed (Lee, 2014). The system makes “subjective” choices based on mathematical procedures that are not clear to the general public. This in turn makes the final data output of the official Facebook website less usable for the practical purposes of the general audience. Other systems aim to exclude or limit subjective factors in data presentation (DeLaCruz and Claveria, 2007). For example, one of the largest online dating systems, OkCupid, explains its matching algorithms to its users in public videos (Rudder et al., 2013; Wilf, 2012). One may argue that this information is not targeted at OkCupid’s audience, and that even if it were, there would be no practical implication of this knowledge. Yet knowing why the system returns particular data gives a wider vision of how our data is served to us and of what we can change to influence it.

The UI aspect of this evolution can still be pushed further to make the end user aware of the flexibility of this type of search. An example of such guides built into the UI can be seen today in the YouTube “Recommended” feed (Google Support, 2016a). Registered users have their history saved and processed by the website. Previous search queries are suggested for further exploration in a single feed at the top of the user’s landing page. If a user chooses to reject some of the content, there is an option to fine-tune the suggestions by deleting the stored “theme” from the list altogether. However, a single location for the suggestion criteria still cannot be found on the website. The closest hint is the log of the search history; by editing it, the user may influence the recommendations (Google Support, 2016b).

Kennedy and Moss (2015) suggest a distinction between “known” and “knowing” publics. Their paper speaks of the beneficiaries of social media data mining, urging closer attention to the alienation from data experienced by the general public. Fleischer (2015) builds upon a similar concern, saying that venture capital is employing the hacker surplus. At the same time, the state is not investing enough in hiring information technology talent, and the public lacks the skills to monitor the data collected daily by corporations. Initiatives like the Minun Data (My Data) group in Finland aim to change this situation by developing a possible architecture of a data repository for personal data awareness (Poikola, 2013). Similarly, there is promising potential for the general public to get to know itself, hidden in the jungle of social media data (Kennedy and Moss, 2015). Some individuals may not yet associate Google’s output with their personal space. Yet the bond between them is rapidly becoming tighter, and the personalisation of search results raises moral and technical questions (Feuz et al., 2011). The average social media user does not yet experience a significant narrowing of information caused by machine filtering of the feed (Thurman and Schifferes, 2012).
The general audience may never have heard of companies like MetaFilter, taken off the market solely by Google’s decision (Haughey, 2014). On a day-to-day basis, we let Google Maps decide on “the closest” supermarket, or let Facebook tell us which “events our friends are going to today”. These decisions are made by the machine “subjectively”, based on the image of us that the system has generated from its data. The minor subjectivity of the system’s response is likely to go unnoticed: we will probably never walk past the neighbouring block to find a store not marked on the map, or discover an event the Facebook algorithm excluded simply because the friends going there have not chatted with us via the platform in the last few months. The situation changes dramatically when we think of letting the machine make choices of life and death. An ethical dilemma of this kind was illustrated by TED-Ed (Lin et al., 2015) using the example of the self-driving car. The dilemma unfolds from the question of what the car’s onboard system should prioritize in the case of an accident. Should the car save the driver while ignoring all possible casualties on the road, it may turn into a pre-programmed homicide machine. Conversely, if the onboard computer calculates the number of victims and decides to sacrifice the driver, should the driver be warned? When we let the machine decide for us, we deserve to know the reasoning behind the choice. This level of objectivity is currently not present in web design.
This is the objective web that we can strive for.