Zuckerberg and Aleksandr Kogan walk into a bar…

Snehan Kekre
Apr 12, 2018
Photo: Olivier Douliery, TNS.

I recently watched Mark Zuckerberg testify before the Senate Judiciary and Commerce committees while not under oath. The hearing was held in response to revelations by whistleblower Christopher Wylie that an app created by psychologist and data scientist Dr. Aleksandr Kogan had harvested detailed profiles of as many as 87 million Facebook users, which were then sold to Cambridge Analytica. Zuckerberg was questioned on issues ranging from user privacy and data security to Facebook’s business model.

By the end of the hearing, I was left with more questions than answers. While attempting to synthesize the various sources of information on this issue, I permitted myself to think laterally about its societal implications. Two broad themes stood out to me on reflection: the ethics and governance of technology. It seems as good a time as any to engage in discourse on these topics, given the development of sophisticated algorithms and increasingly pervasive autonomous systems powered by artificial intelligence and machine learning.

My friend Jess, a technologist, said on Twitter:

The tweet captures the sentiment of a lot of my left-leaning and centrist friends, who hold that Facebook is to some degree bad and morally culpable because of its decision to make its users’ privacy settings obscure and challenging to configure. I often catch myself making similar normative statements. Far less often, though, have I paused to question why I issue such sentimental pronouncements in the first place. It isn’t too much of a generalization to say that a majority of us seem to eat from this can of a vague sense of unease about the state of modern technology and where exactly we fit into the picture.

Photo: A Vague Sense of Unease, Hoxton Street Monster Supplies.

Might it then be helpful to have a theoretical framework, or a broad class of frameworks, with which to analyze our unease about modern technology? Through such frameworks we might hope to understand not only how technology is regulated today but, more normatively, how it should be. Lawrence Lessig provided one such framework in his 1999 paper The Law of the Horse: What Cyberlaw Might Teach. That paper and several of his subsequent publications, including his book Code and Other Laws of Cyberspace, are centered on the notion that each one of us is a single central dot (pictured below) in the world, subject to a confluence of forces much larger than ourselves. Lessig names four such regulating forces: law, social norms, markets, and architecture (the code and physical design of the systems we use).

Photo: The Law of the Horse: What Cyberlaw Might Teach (Draft), Lawrence Lessig.

Law is one of these forces, and the one we are most intimately familiar with. We have tried and tested processes by which laws come to be and by which we are held to account when we do not comply. We are familiar with regulation through norms too, and with how they shape behavior. A stark example from the hearing: Mark Zuckerberg, on more than one occasion, said that Facebook offers its users close to complete control over their privacy settings. In fact, Facebook even nudges users periodically to remind them that these privacy settings exist. That is to say, as long as Facebook operates within the laws and regulations around user privacy, all is well and good from its perspective.

However, my friends and others might argue that the responsibility of an entity, be it private or public, does not stop at what the laws say. Those laws are merely a baseline; they do not remove the need for ethical considerations. A valid question to ask next is “Who are the enforcers?” The hypothetical perpetrators here are supposedly unscrupulous, so their sense of responsibility might extend only as far as the minimum requirements codified in law. Is there really a dichotomy between social responsibility and legal obligation? I am uncertain. I think ensuring that responsibilities are met is a more complex matter, one that has more to do with education than enforcement.

We can also look at this from the market perspective. As there are a multitude of structures around markets, it is obvious that the way markets work influences what we can do. If Facebook made it easier for users to configure their privacy settings (especially those related to advertisements), it would be an act completely anathema to its core business model! Implementing some sort of forcing function or behavior-shaping constraint, so that users are prevented by default from acting against their own interests, seems laughable from Facebook’s standpoint, as it would make it that much more difficult for the company to be a good platform for advertisers.
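To make the idea of a forcing function concrete, here is a minimal sketch in Python of what privacy-by-default might look like as a behavior-shaping constraint. Everything here is hypothetical: none of these names correspond to any real Facebook API. The point is only to illustrate Lessig’s observation that architecture, i.e., code, regulates behavior through its defaults.

```python
# Hypothetical sketch: privacy-by-default as a behavior-shaping constraint.
# None of these names correspond to a real Facebook API; they only
# illustrate how code can regulate behavior through its defaults.

from dataclasses import dataclass


@dataclass
class PrivacySettings:
    # The constraint lives in the defaults: every field starts in the
    # most protective state, so users must opt *in* to sharing rather
    # than opt *out* of it.
    profile_visible_to_public: bool = False
    data_shared_with_advertisers: bool = False
    data_shared_with_third_party_apps: bool = False

    def enable_ad_sharing(self, user_confirmed: bool) -> None:
        # Active consent: sharing can only be switched on by an
        # explicit, affirmative action from the user.
        if not user_confirmed:
            raise PermissionError("Ad sharing requires explicit opt-in.")
        self.data_shared_with_advertisers = True


# A new account starts maximally private; nothing is shared until the
# user affirmatively says so.
settings = PrivacySettings()
assert settings.data_shared_with_advertisers is False
```

Notice that the design choice is entirely in the defaults, which is exactly why, from Facebook’s standpoint, it is laughable: every protective default is a default against the advertising business.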

From the norms perspective, one could argue that this is precisely why Facebook is fundamentally bad, and why this scandal is not merely an implementation detail gone awry. On this view, Facebook simply cannot exist without exploiting its users. If you share the outlook of some of my most left-leaning friends, you might even be quite happy to see Facebook go down in metaphorical flames.

As is the case with companies like Facebook, we cannot ignore the nature of monopolies and network effects. A valid counter-argument against governmental regulation of Facebook is that its consequences may, in fact, hurt many small businesses that rely primarily on the platform for marketing, sales, communication, and so on. Zuckerberg additionally stressed that Facebook, in its current structure, is a platform that connects more than a billion users to the people who matter to them. Regulating Facebook would then also mean depriving these very people of their primary means of staying connected to the people they care about!

I am currently of the view that this goes beyond Facebook and is more of a systemic problem. In short, the reigning legislative, technological, and economic paradigm has allowed surveillance capitalism to become the foundation of successful businesses on the Web.

From a politically progressive point of view, you could argue that Facebook is bound by an intrinsic responsibility to its users. But from a market perspective, pushing the confines of law and morals is perhaps intrinsic to free-market capitalism and innovation. Someone subscribing to a more libertarian point of view could argue that, as a private entity, Facebook hasn’t forced anyone to do anything; everything it did was within its terms of service (TOS), and it is legally free to do anything set out in a contract between two parties. A realist may think that, as with everything, it’s a compromise: Facebook has been evil and has tried its hardest to look sincere. The discussion now needs to be about why society as a whole didn’t pick up on this sooner, and how we make a cultural shift toward a better understanding of who we are and of the value of our data in an increasingly digital world.

As an aid to steering away from an impasse, we can look for some common ground. Though much of this is a gray area, we can all agree that the laws need to be revisited to accommodate modern technology, and that Facebook as an institution needs to be more transparent and to ask for active consent. To prevent our descent into victim blaming, education in legal and technical literacy is paramount. We cannot hope to agree on legislation if lawmakers don’t understand the technology well enough. Similarly, technologists are bound to run into legal trouble if they do not sufficiently understand the laws of the land.

Facebook could do all of this, at root, because of our lack of understanding. I can name many people in the tech circles I find myself in who have been outspoken about this kind of thing for years. If you let me inject my heavier biases for a moment: we can either let a government that doesn’t really understand the technology take reactionary measures that damage the Internet (see also: net neutrality), or we, as a people, can speak out with our voices and our wallets. Money talks, and when Facebook stock plunges, it talks pretty fucking loudly.

So, with that ramble out of the way, I don’t have a clue where we go from here, other than to say that the conversation about the ethics and governance of modern technology needs revisiting with the involvement of multiple stakeholders. The US government, as I understand it, is approaching this conversation from a purely regulatory angle. Its hack around actually having a conversation about ethics and governance is to regulate companies. Having addressed these ancillary concerns is then used as a get-out-of-jail-free card to avoid discussion of the systemic issues.

The “cat is out of the bag” scenario:

Meme: You Let The Cat Out Of The Bag? Cat Planet.

Progress has come naturally to us as a species. But progress is a loaded, multi-faceted term, and it has its costs. The philosopher Nick Bostrom put it this way: “When headed the wrong way, the last thing needed is progress. It is therefore important to pursue such lines of inquiry as might disclose an unnoticed crucial consideration.”

With regard to major technological progress, specific innovations have brought about significant and horrific devastation. Take, for example, nuclear weapons. When nation states armed with nuclear arsenals are in conflict, the expected utility of mutually assured destruction is popularly understood to keep the worst at bay. In practice, the strategy of nuclear deterrence has failed miserably and is arguably insane. This is where I often hear people say that the cat is out of the bag: “You can’t kill ideas. You would be stifling innovation.” This line of reasoning is often applied to technological advancements perceived as negative.

I grant that it is a hard problem: creating a sandbox around technologies whose capabilities and consequences we are not fully aware of at their inception. Can we as technologists, policymakers, creatives, and individuals come to a consensus on a radical framework that allows for meaningful defenses against negative applications of technology? I do not expect a binary response, but rather pose the question as a moonshot. Perhaps, using education among other levers, we can decide, on the far horizon, to change the culture and practices around research and innovation so that safeguards are something we naturally consider. That we have allowed 3D-printed guns to be within reach of anyone with a 3D printer should by default be considered a failure mode of the current paradigm of innovation.

I believe we have a lot of room for recourse. What do you think? Let me know in the comments or in person!
