Unspeaking the Truth: What Mark Zuckerberg Didn’t Say

7 min read · Apr 16, 2018

Imagine, if you will, a kitchen knife. If you were asked about the purpose of the knife, you might think “to chop some carrots” or, more indirectly, your intention might be to eat healthfully, save money, or take time to sit down with your family. In any case, your intention is likely to cut up food for a meal.

Now let us suppose a crime has been committed. The police, in searching your home, take fingerprints from your knife, analyze them, and even add their findings to a national fingerprint registry. If you were asked whether those were your fingerprints, the answer would most likely be yes. While the fingerprints may not have been something you intended to create with the knife, you still made them. In other words, you unintentionally created some content. You have also indirectly helped to expand the fingerprint registry, perhaps contributing to new findings and improved analytic techniques.

Are fingerprints your data? Is it possible to say for sure whether you put them on the knife intentionally? After all, you intentionally used the knife, and as a rational person you likely knew that it would hold your fingerprints.

Facebook’s model works very much like this example, except they are the knife maker, they own the knives you use, and they are also the police.

To parallel the knife scenario, suppose I open Facebook on my browser. I might navigate to my profile, click “update profile photo,” and upload a new picture. I may then type a quick post about something that happened to me that day, read it over, make some changes, and post it to my Timeline. I might change who can see this post, maybe my “Friends” or even “Only Me.” In short, I have just used the interface (the knife) to create content directly (chopping some carrots) and, indirectly, to stay in contact with friends and family (make dinner).

Now is when the process begins to blur. If I were to ask, “what is the purpose of the profile photo or Timeline post?” I would overwhelmingly receive answers situated in intentional direct and indirect uses — “to update your profile picture,” “to let your family know what you are up to,” and, from the more aware users, “to add data to my profile that will later be used for targeted advertising.” None of these are wrong, but they all miss some fundamental truths.

What I really learned while watching Mr. Zuckerberg’s testimonies last week is that our mainstream lack of sufficient and effective terminology for what digital conglomerates like Facebook can do is proving to be increasingly dangerous. What we should take away from his responses is not so much what he said, but a few of the important things that he cunningly did not say.

Take, for example, a telling interaction that occurred between Representative Gene Green and Mr. Zuckerberg in relation to downloading your Facebook data (emphasis mine):

Zuckerberg: Congressman, I think we may be updating it a little bit. But, as you say, we’ve had the ability to download your information for years now. And people have the ability to see everything that — that they have in Facebook, to take that out, delete their account and move their data anywhere that they want.

Green: Does that download file include all the information Facebook has collected about any given individual?

In other words, if I download my Facebook information, is there other information accessible to you within Facebook that I wouldn’t see on that document, such as browsing history or other inferences that Facebook has drawn from users for advertising purposes?

Zuckerberg: Congressman, I believe that all of your information is in that — that file.

The key word here is “your.” And, to truly understand Facebook’s model, we must first understand what they consider “your” data to be. In creating a tool from a collection of elements — such as text fields, buttons, image uploaders, message windows, etc. — Facebook has put a very strict limit on what they consider “your” data. Throughout his testimony, Mr. Zuckerberg was careful to differentiate “your profile” and “your data” from “their profiles” and “their data” about you, by never mentioning the latter.

In other words, Mr. Zuckerberg spoke all about your carrots and your dinner, but never mentioned your fingerprints or the national fingerprint registry. Certainly, he believes that users own their intentionally created content. But what about the byproducts? Why did Mr. Zuckerberg never talk about the fingerprints and national registries?

Facebook is forthcoming about what happens with our posts and the related meta-data (e.g., tags, locations, and sharing permissions) that we have intentionally provided (the chopped carrots). These are the data, as Mr. Zuckerberg notes above, that we can download. However, Facebook is much less forthcoming about the data they assume we have unintentionally provided (the fingerprints) and the data that Facebook itself derives from our contributions (the registry).
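To make the input-versus-derived distinction concrete, here is a minimal, purely hypothetical Python sketch. The function name, the data shape, and the inference rules are all invented for illustration; they are not drawn from any real Facebook system. The point is simply that “derived” data can be computed from the “input” data a user intentionally provides, without the user ever typing the derived facts in anywhere.

```python
from collections import Counter

def derive_profile(posts):
    """Hypothetical illustration: derive new data (the 'fingerprints')
    from intentionally provided input data (the 'chopped carrots').

    `posts` is a list of dicts like:
        {"text": "...", "hour": 23, "tags": ["food", "travel"]}
    The inference rules below are invented for illustration only.
    """
    tag_counts = Counter(tag for p in posts for tag in p["tags"])
    night_posts = sum(1 for p in posts if p["hour"] >= 22 or p["hour"] < 5)
    return {
        # Input data the user can see and download:
        "post_count": len(posts),
        # Derived data the user never typed in anywhere:
        "top_interest": tag_counts.most_common(1)[0][0] if tag_counts else None,
        "likely_night_owl": night_posts > len(posts) / 2,
    }

posts = [
    {"text": "late snack", "hour": 23, "tags": ["food"]},
    {"text": "trip pics", "hour": 2, "tags": ["travel", "food"]},
    {"text": "morning run", "hour": 7, "tags": ["fitness"]},
]
profile = derive_profile(posts)
```

Here the three posts are the downloadable input; the “top interest” and “night owl” flags are derived data that would not appear in any download of what the user wrote.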

One of the main strategies that Facebook uses to side-step this topic is to focus only on notions of social privacy, rather than institutional privacy. While your social privacy concerns which other internet users can see your content, your institutional privacy has to do with what Facebook itself can see, and what new content it ultimately derives.

In fact, Mr. Zuckerberg’s definition of privacy is an outdated notion of privacy that only includes social considerations. If, for instance, we set the permission, what Facebook calls the “audience,” of a post to “Only Me,” we expect that coworkers, family, and the public will not be able to view it. However, what about Mr. Zuckerberg himself? Or members of a Facebook research team? Or a new data analysis tool developed by Facebook? How would we ever know? Suddenly “Only Me” seems potentially misleading.

In addition, this antiquated definition of privacy imposes what Facebook assumes your intentions are while you interact with its interface. By specifying which content you are allowed to control the audience of, the site implicitly tells you what the purposes of the platform are. You have control over the audience for your images, but no control over your fingerprints, which, in this case, are things like mouse clicks and the other websites you visit.

This policy of secrecy is exceptionally easy for them to enforce. The processes that collect this kind of data are completely hidden from the user, and internet users typically have no idea what data can even be collected. Most internet companies shy away from giving you such a list when you sign up for their services, as there might be backlash from users if they knew that all of their mouse movements, mouse clicks, keystrokes, microphone and webcam inputs, and in some cases even their eye movements and facial expressions can be recorded and tracked. Indeed, there was some uproar when it was revealed that Facebook itself had been holding on to metadata from posts that users typed in but never published. Can users download that data?

At first listen, it would seem from Mr. Zuckerberg’s back and forth with Representative Green that we can. But, exploiting our mainstream lack of terminology, he said yes, “all of your information is in that file,” when what he meant was “yes, your carrots and dinner are in that file, since those are yours. The fingerprints and registry are ours, so no, they aren’t in your file.”

Falling further down the rabbit hole, we realize that at Facebook HQ they can collect and use any of the data you intentionally or unintentionally spill into their app. Entire teams depend on these data to make interface changes and establish targeted advertising domains. Other teams actively mine these data to derive new sets of data, sometimes used to alter our voting patterns and prey on our insecurities. The mining of these patterns, and the implied manipulation of them, is what many people refer to as “algorithms.” Often spoken about in hushed tones in mainstream media, algorithms themselves are benign. The process of long division that we learned as children, for example, is itself an algorithm, as is Euclid’s ancient method for finding greatest common divisors, yet there are few pundits blasting the airwaves with calls to arms to topple the nefarious long division machine. Instead, it is the opacity of the creation, the collection, the manipulation, and the secrecy of the results and their application that can prove harmful to users.
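As an illustration of just how mundane an algorithm can be, here is the Euclidean Algorithm for finding the greatest common divisor, written out in a few lines of Python. Nothing about it is opaque or sinister in itself:

```python
def gcd(a: int, b: int) -> int:
    """Euclid's algorithm: repeatedly replace the pair (a, b)
    with (b, a mod b) until the remainder is zero."""
    while b:
        a, b = b, a % b
    return a

print(gcd(48, 18))  # -> 6
```

The harm the essay describes lies not in procedures like this one, but in what data they are fed, what they derive, and who gets to see the results.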

Technical matters aside, where do we stand? The current round of inquiry has shown that Facebook isn’t keen on sharing with users the specifics about what types of data it collects (aside from letting you know that it can be anything at all), how it handles those data (however it wants), what information it derives from your data (ibid.), or what it intends to do with this new information (ibid.). You can download your data from Facebook at any time, but it is imperative to note that your “fingerprints” and other byproducts are treated as unintentional and are not included.

If we take our observations about Mr. Zuckerberg’s artfully worded responses this past week and combine them with what data can be downloaded, we can roughly begin to decipher what data are owned by users and what data Facebook owns:

Your data include: profile picture, gender, place of work, hometown, friend networks, posts, uploaded photos, pokes, private messages, and apps you have approved. These are all data points that are easily accessible through your profile, but the download creates a neat zip file for you.

Their data include, at least: all the data derived from the above, as well as mouse clicks, eye movements, login tokens through the Open Graph (social plugins, external logins, and so on), posts you typed but decided not to publish, and the DeepFace 3D model of your face.

In other words, we need a much more modern way of speaking about and understanding digital spaces, with particular attention paid to the difference between “your” data and “their” data. Important distinctions include: frontend vs. backend; social interactions vs. institutional interactions; input data vs. derived data; and profiles vs. databases.

We are only now starting to see that users need to be more aware of how their interactions online are used and how these interactions can be leveraged against them. But hope is not lost. The first steps toward better policies won’t come from pretending that our interactions can go unrecorded, but from trying to gain control over the data we create. All the data we create, whether intentional, unintentional, or merely assumed to be unintentional because we aren’t using Facebook as its designers anticipated. I don’t just want to own the carrots and dinner; I want control over my fingerprints. Oh, and the registry.

Written by Angela M Cirucci

Assistant Professor of New Media specializing in the symbolic meaning of programming languages and the intersection of institutional practice and user knowledge
