Can we design for transparency?
This post was written for Molly Steenson’s Seminar I: Interaction & Service Design Concepts 2016, CMU School of Design.
We are living in an age where information on every step of our lives is being collected and analyzed. From the way we shop, to where we shop, the food we eat, the news sources we visit, our connections on Facebook and LinkedIn, and so on, all of this information lives on in web databases and is accessible to anyone actively searching for it. This information, which feels convenient to us as users, is critical to corporations and the data mining industry. As Alice Marwick puts it bluntly — “companies systematically collect very personal information, from who you are, to what you do, to what you buy. Data about your online and offline behavior are combined, analyzed, and sold to marketers, corporations, governments, and even criminals.” She defines this industry of collecting, aggregating, and brokering personal data as database marketing (1).

The growing database marketing industry constantly collects our metadata through our phones, computers, credit cards, and even our medical and personal purchases. At this point, many users will wonder what metadata is and how these companies use it for profit. Metadata is data that describes other data (2) — a summary of the most basic but crucial information about a piece of data. In the example of a phone conversation, the metadata includes the time, duration, location, server, telecom provider, and output location; it does not include the actual contents of the conversation itself. This summary of information makes finding and working with particular instances of data easier.
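To make the phone-call example concrete, here is a minimal sketch in Python, using invented records, of how a handful of metadata fields — never the conversation itself — already begin to reveal a pattern:

```python
from collections import Counter

# Hypothetical call metadata records (illustrative, not real data):
# no call content, only descriptive fields like time and duration.
calls = [
    {"caller": "A", "callee": "clinic",   "time": "2016-03-01 09:00", "duration_s": 240},
    {"caller": "A", "callee": "clinic",   "time": "2016-03-08 09:05", "duration_s": 300},
    {"caller": "A", "callee": "pizzeria", "time": "2016-03-08 19:30", "duration_s": 60},
]

# Even without hearing a single word, counting who is called and how often
# starts to sketch a picture of the caller's life.
contact_frequency = Counter(c["callee"] for c in calls)
print(contact_frequency.most_common(1))  # → [('clinic', 2)]
```

Repeated calls to a clinic, a lawyer, or a helpline say a great deal on their own — which is exactly Cole’s point in the next paragraph.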
Many private corporations put forth the argument that metadata is not a threat to your privacy. In his article “We Kill People Based on Metadata,” David Cole explains how that argument misleads users. In the example of a phone conversation, he argues, “the metadata can provide an extremely detailed picture of a person’s most intimate associations and interests, and it’s actually much easier as a technological matter to search huge amounts of metadata than to listen to millions of phone calls.” Metadata is further weaponized as private companies feed it into algorithms that can dictate many facets of our digital identity.
Algorithms are built on code and math. They process a range of statistics and come up with probabilities (3). These probabilities are absolute in the eyes of the machine and take nothing into account other than your metadata. For example, the information you save in your health app can define your current health status. If that information were available to your medical insurance provider, they would have a strong basis for changing your insurance status.
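To illustrate how absolute such a verdict can be, here is a toy scoring function — invented for this post, not any real insurer’s model — that reduces health-app metrics to a single number:

```python
# Toy risk model (assumed for illustration): integer points stand in for
# the probability an insurer's algorithm might assign.
def risk_points(daily_steps, resting_heart_rate):
    points = 0
    if daily_steps < 4000:           # "sedentary" by the model's rule
        points += 2
    if resting_heart_rate > 80:      # "elevated" by the model's rule
        points += 1
    return points

# Two very different people with the same numbers get the same verdict;
# the model has no way to ask WHY the numbers look the way they do.
print(risk_points(3000, 85))  # → 3
```

An injured athlete in recovery and a chronically ill patient could produce identical inputs here, and the machine would treat them identically — the context lives outside the metadata.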
This interchange of information can prove harmful to you. Google takes your search history and feeds it into its algorithms, which analyze what you browse on the web and cater search feeds to your tastes. This is not always a bad thing: based on my recent shopping searches, Google offers me many shopping options and starts directed marketing of brands it knows are available around me. But behind the scenes, my search information is relayed to prospective brands, and they target me as a probable customer, sending me advertising notifications across my browsing web pages. Google’s algorithms also begin invisible algorithmic editing of my search feed, pushing me toward brands that have paid for placement rather than ranking on relevance. Eli Pariser defines this as the filter bubble — our very own unique universe of information that we live in online (4). In his TED talk, he states that the internet shows us what it thinks we want to see, but not necessarily what we need to see. Filter bubbles decided by algorithms don’t let us choose what information gets in and, on the other side, don’t let us see what information is being edited out.
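One way to see how paid placement can outrank relevance is a toy ranking function — entirely assumed, not Google’s actual algorithm — where a result’s position depends on relevance plus a paid boost plus the user’s past-click profile:

```python
# Hypothetical search results: a relevance score plus a boost the brand
# has paid for (all values invented for illustration).
results = [
    {"brand": "IndieShoes", "relevance": 0.9, "paid_boost": 0.0},
    {"brand": "MegaShoes",  "relevance": 0.6, "paid_boost": 0.5},
]

# A profile learned from this user's search history.
user_affinity = {"MegaShoes": 0.2}

def feed_score(r):
    # The user never sees this sum, only the ordering it produces.
    return r["relevance"] + r["paid_boost"] + user_affinity.get(r["brand"], 0.0)

ranked = sorted(results, key=feed_score, reverse=True)
print([r["brand"] for r in ranked])  # → ['MegaShoes', 'IndieShoes']
```

The more relevant result loses, and nothing in the interface tells the user that the editing happened — which is the invisibility Pariser objects to.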
So what kind of measures can we take to help users understand metadata and motivate them to take ownership of their data? What if we changed the perception of “data” as an object to a more human approach centered on people’s identity? An example I came across on the web is Immersion, a tool created by the MIT Media Lab that dives into the history of your email life on a platform that offers the safety of knowing you can always delete your data. Once you grant it access to your email, it shows you, based on your email history and metadata, the information that Google collects. You are then educated about how you can delete this information and build a more private email environment. The tool itself is designed with transparency for its users: one can delete the metadata Immersion collected after its analysis. It provides users with an image that invites self-reflection, art, privacy, and strategy, as well as a number of different perspectives, by leveraging the fact that the web, and email, are now an important part of our past. By incorporating more rational ethics and values into our systems, we can aim to provide transparency to our users.
(1) Alice E. Marwick, “How Your Data Are Being Deeply Mined”
(3) Cathy O’Neil, “Weapons of Math Destruction”
(4) Eli Pariser, “Beware Online Filter Bubbles,” TED talk