Shaping the 4th Industrial Revolution: Is it time to rethink informed consent?

This post is by Tolu Odumosu, Assistant Professor of Science, Technology, and Society and Electrical Engineering at the University of Virginia.

The 4th Industrial Revolution (4th IR) is a widespread, multifarious phenomenon that deserves critique, analysis, and in-depth evaluation. If this process affords a moment of collective and reflexive self-awareness of the rapid and accelerating transformation of social and technological life on the planet, my only reaction is that such a moment is well overdue. For this author, discussions of the 4th Industrial Revolution resonate with a long-running theme in Langdon Winner’s political philosophy: the idea of technology as a force that has escaped control [1].

The multifarious nature of the 4th Industrial Revolution makes any analysis rather difficult. The scope of change covered under the rubric of the 4th IR is widespread, covering the entire gamut from biotech to infotech. In the interest of dealing with a digestible portion of this complex phenomenon, this essay will limit itself to networked socio-information technologies, though large portions of the analysis could easily be applied to bioinformatics as well.

Information wants to be free?

Where to begin? What about the recent episode in which both founders[2] of WhatsApp left Facebook over clashes about privacy? In 2014, Brian Acton and Jan Koum sold WhatsApp to Facebook for $19 billion. In March 2018, Acton, having already left the company, encouraged his followers on Twitter to “delete Facebook”[3]. Koum would also go on to leave Facebook in April of 2018.

American analysts may well ask: what is WhatsApp? WhatsApp’s largest user base is in non-Western countries. It is probably the most popular messaging app on the planet, with over 1.5 billion users [4]. It has acted as a critical market disruptor in the African countries that I study (for example, Nigeria), upending the closed messaging systems of mobile wireless network operators. WhatsApp is perhaps best understood as a hybrid platform, falling somewhere between a full-fledged social network and traditional discussion boards and messaging platforms. Supporting video sharing, rich multimedia posting, group messaging, and voice communication, all protected by class-leading end-to-end encryption, WhatsApp has quickly become the most important communication app in the non-Western world, eclipsing Facebook’s own Messenger. Unlike Facebook, which grew up on the web, WhatsApp is a mobile-first app. Having evolved on mobile devices, it is perhaps better suited to the mobile-first lives of the global south.

Koum and Acton famously pledged to preserve WhatsApp’s strong privacy protections when they sold it to Facebook. In essence, they promised a firewall between WhatsApp and Facebook. It took only two years before WhatsApp’s terms of service were updated to allow information to flow between the two platforms. Not much happened in the United States as a result. However, the European Commission fined Facebook[5] for misleading regulators about its plans when it acquired WhatsApp.

This episode points at the scope of the challenge. How much can two well-intentioned billionaires, privacy-loving and mobile-first innovators both, accomplish through innovation and the technological wizardry of encryption? Not much, it turns out, now that Facebook owns their product.

Is our only recourse to #deletefacebook — to throw out the dirty bathwater and the baby?

Consent as Blanket Permission

One way to begin to break the problem down is by examining the excruciatingly fascinating testimony that Mark Zuckerberg provided to the United States Congress in April of 2018 [6]. A few quotes from that testimony offer an entry point into the discussion:

That’s why, every single time you go to share something on Facebook, whether it’s a photo in Facebook, or a message, every single time, there’s a control right there about who you’re going to be sharing it with … and you can change that and control that in line.

To your broader point about the privacy policy … long privacy policies are very confusing. And if you make it long and spell out all the detail, then you’re probably going to reduce the per cent of people who read it and make it accessible to them.

Let’s leave the question of Russian meddling alone for the moment; a large part of the uproar that led to Zuckerberg going to Washington was over the Cambridge Analytica scandal. Cambridge Analytica, a British political consulting firm, was easily able to gain access to the data of some 87 million users through a small seed of about 270,000 users who voluntarily agreed to share their data. Given that the greater portion of those 87 million did not explicitly grant consent, either Zuckerberg was simply not telling the truth when he stated that “…there’s a control right there about who you’re going to be sharing it with…” or users don’t have the same ideas of “control” and “sharing” that Zuckerberg assumes they do.

The second quote provides us with some understanding of where the CEO of Facebook lays at least some of the blame. Long privacy policies are confusing, and spelling things out in detail often makes things less accessible to users. In other words, this classic move made users out to be deficient in some way. The deficiency, in this case, was their inability to understand lengthy privacy documents. How could Facebook users realize that sharing their data with the app “thisisyourdigitallife” [7] meant that they were agreeing to provide their information, and that of their friends and family, to a British political consulting firm? When those users pushed the buttons that released that information, did they know or understand the full ramifications of what they had just agreed to? I contend that here is where the productive conversation needs to take place: not along the question of deleting Facebook, but rather along the question of consent. Consent IS the crux of the matter.

In a sense, Zuckerberg is right that there are controls. We can choose to say no to the shrink-wrapped licenses that pop up on our screens, the ones most of us ignore daily. This calculus is not a difficult one. We don’t say no because we have grown to need these services, and because the licenses are too hard to read. The informational services that the 4th Industrial Revolution offers save time, create convenience, shrink the globe, and make possible new kinds of sociality that were unimaginable a few years ago. The problem is that, time after time, it is only in hindsight that we realize we have signed up for a Faustian bargain. Faust, however, exchanged his soul for knowledge and power with both eyes open. The vast majority of participants in the new information and mobile economies don’t realize the nature of the bargain to which they are agreeing.

The problem, of course, isn’t Facebook’s alone. Google is now collecting voice-prints and moving them across various devices to facilitate its voice recognition, in order to identify individual voices in the home. Google, moreover, already has terabytes of information about individuals from their Gmail accounts. At every point, users are required to grant “consent.” Who knows how these disparate data points could be used in the future? Amazon’s Alexa service also requires “consent” to record voice conversations. Alexa is showing up everywhere these days, from televisions to cars to thermostats and headphones. Voice-controlled artificial intelligence is a fast-growing segment of the 4th Industrial Revolution and, at each point, users are “asked” to “enable permissions.”

Here, the ideal of “consent” has been broken. There is insufficient time or space in this short piece to review the historiography of the notion of consent, particularly as it has co-evolved with medical practice. In the interest of brevity, let us work with a commonsense, contemporary understanding of consent (as problematic as this may be). It is clear that for Zuckerberg, and for Facebook, consent is a full and exhaustive turning over of rights. The uproar over the Cambridge Analytica scandal, and all the other numerous everyday scandals concerning the monetization of individual data records, indicates that there is a significant gap between public expectations and corporate behavior. I believe that we can productively explore this dissonance by examining the notion of “informed consent.”

How can consent be informed?

The notion of consent undergirds much of the implicit social contract of the data revolution that humanity is currently experiencing. Consent is given by accepting terms of service, usually by reading a piece of text that is meant to lay out the extent and implications of the consent. Then, the potential user has the ability to agree or disagree. Disagreement usually results in the service being inaccessible. There are a few problems with this picture, though. One is the idea that consent, once given, is sufficient to cover all potential uses. Facebook reacted to the Cambridge Analytica scandal by increasing privacy controls. While this is a very welcome development, it does not address the more fundamental problem of informed consent. Even if one agrees to share information with an app running on Facebook’s platform, as the core 270,000 individuals did, does that imply that one has also agreed to all the possible uses to which the app could put one’s information? What about friends and family? For how long? Data has no expiry. What happens in the case of one’s death? There are no quick or easy answers to these questions. I only raise them here to illustrate the fallacy of blanket consent. The level of complaint and outrage over the Cambridge Analytica scandal implies that user expectations around consent are being breached in ways that break the social contract. It seems that, for some people, and in some ways, consent is limited and particular.

The EU fine over Facebook’s merging of WhatsApp data and Facebook data also illustrates this point. Facebook purchased WhatsApp while claiming that it would maintain a firewall between the two services, then reneged on that commitment two years later. The EU’s reasoning was that the capability to automatically link identities and profiles existed at the time of the merger; reneging on the commitment, even two years later, therefore merited the fine. Again, the permission, or “regulatory consent,” given to Facebook was limited and particular.

We see an uproar when data-breach scandals become public. This speaks to a breaking of the implicit social contract around consent and use. I do not mean to imply that the social contract is static. Indeed, public expectations of privacy are shifting. Each breach results in increased apathy and a bleary acceptance of the Faustian bargain. This cycle, however, is a reactionary one, and it is not the mode in which we should be engaging. We should instead engage in a way that is critical and inquiry-based. We should be more interested in the kind of world we want to live in, and less interested in living in a world as imagined by the corporate entities that control much of the infrastructure of cyberspace.

Rethinking consent at a structural level

Here are two proposals that might be worth engaging with, if we truly want to shape [8] the 4th Industrial Revolution as Schwab proposes. First, though, we must clearly recognize that the struggle over control of data is a struggle over power and infrastructure. The state must play a role in this struggle, for it is a classic asymmetric struggle in which individual power is minimal and market forces are dominant.

Proposal 1: For consent to be informed, it must be particular.

This idea simply embodies elements of the implicit social contract of the consent process. Users cannot give consent for uses that lie outside their implicit and explicit understanding of the agreement they are consenting to. This has a number of implications. First, it shifts responsibility for documenting the context of the consent agreement onto the service provider. That documentation should be in plain and simple language; if the request cannot be stated in plain and simple language, then consent cannot be given (i.e. the consent sought is too broad). Another implication is that service providers have to re-request consent if they wish to use gathered data in a manner inconsistent with the initially given consent.
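As a thought experiment, particular consent could be modeled as a ledger of grants scoped to named purposes, where any use outside a granted purpose fails and forces the provider to go back and ask again. This is a minimal sketch; the names (ConsentLedger, the purpose strings) are illustrative assumptions, not any real platform’s API.

```python
from dataclasses import dataclass, field

@dataclass
class ConsentLedger:
    """Hypothetical: consent stored per purpose, never as a blanket grant."""
    # Maps a user id to the set of purposes that user explicitly agreed to.
    grants: dict = field(default_factory=dict)

    def grant(self, user_id: str, purpose: str) -> None:
        self.grants.setdefault(user_id, set()).add(purpose)

    def may_use(self, user_id: str, purpose: str) -> bool:
        # A use outside the granted purposes is simply not covered: the
        # provider must re-request consent rather than assume it transfers.
        return purpose in self.grants.get(user_id, set())

ledger = ConsentLedger()
ledger.grant("alice", "personality-quiz")
print(ledger.may_use("alice", "personality-quiz"))    # True
print(ledger.may_use("alice", "political-profiling")) # False: must re-ask
```

The design choice is the default: absent an explicit, purpose-specific grant, the answer is no, which is the inverse of the blanket-consent model criticized above.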

Proposal 2: Truly informed consent requires the possibility of the dissenting user.

In Sally Wyatt’s classic article on the importance of non-users of the Internet [9], she discusses various categories of non-use. The 4th Industrial Revolution heralds the arrival of near-omniscient datasets, where the question isn’t whether one wants to participate, but how one will participate. The possibility of active dissent is necessary for informed consent, and this kind of user needs to be recognized, imagined, and coded at the level of network infrastructure. At present, the data architectures of the 4th Industrial Revolution have no category for active dissent. Introducing such an active category would result in significant change.

A simple example should suffice. If one were able to generate a marker for this category, then artificial intelligence and data-collection systems would have to actively delete or ignore all data they collect about the recognized user. If Google saved one’s voiceprint as that of an active dissenter, then every time Google’s AI recognized that voiceprint, it would be required to actively scrub any data collected on that individual. If one were an active dissenter on Facebook, then the network itself would have to actively audit all data transactions to ensure that the category is respected. The concept extends to video tracking and to other surveillance systems: once a video recording system recognizes one’s classification as an active dissenter, it would be required to scrub its databanks of material relating to that particular user. The implementation of such a category/classification is beyond the scope of this article, however. What is clear, though, is that there is a critical role for regulatory oversight and the engagement of the state.
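The behavior described above, where data about a marked user is never retained and registering dissent triggers a retroactive scrub, can be sketched in a few lines. This is a toy in-memory model under stated assumptions (the class names and registry are hypothetical); a real system would have to enforce the rule at the level of network and storage infrastructure, with regulatory audit.

```python
class DataStore:
    """Hypothetical data-collection system that honors an
    'active dissenter' registry."""

    def __init__(self, dissenters):
        self.dissenters = set(dissenters)
        self.records = {}  # user_id -> list of collected data

    def collect(self, user_id, datum):
        # Data about an active dissenter is silently dropped, never stored.
        if user_id in self.dissenters:
            return
        self.records.setdefault(user_id, []).append(datum)

    def register_dissent(self, user_id):
        # Registering dissent also scrubs everything already held.
        self.dissenters.add(user_id)
        self.records.pop(user_id, None)

store = DataStore(dissenters={"bola"})
store.collect("bola", "voiceprint")  # dropped: already a dissenter
store.collect("ade", "voiceprint")   # stored
store.register_dissent("ade")        # retroactive scrub
print(store.records)                 # {}
```

Note that the scrub is retroactive, matching the proposal: dissent is not merely opting out of future collection but an obligation on the system to erase what it already holds.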

References

[1] L. Winner, Autonomous Technology: Technics-out-of-Control as a Theme in Political Thought. Cambridge, MA: MIT Press, 1977.

[2] N. Statt, “WhatsApp co-founder Jan Koum is leaving Facebook after clashing over data privacy,” The Verge, 30-Apr-2018. [Online]. Available: https://www.theverge.com/2018/4/30/17304792/whatsapp-jan-koum-facebook-data-privacy-encryption. [Accessed: 12-May-2018].

[3] J. Horwitz, “One of the men who sold WhatsApp to Facebook wants people to #deletefacebook,” Quartz.

[4] “WhatsApp hits 1.5 billion monthly users. $19B? Not so bad.,” TechCrunch, 31-Jan-2018.

[5] “European Commission — PRESS RELEASES — Press release — Mergers: Commission fines Facebook €110 million for providing misleading information about WhatsApp takeover.” [Online]. Available: http://europa.eu/rapid/press-release_IP-17-1369_en.htm. [Accessed: 12-May-2018].

[6] “Congress grills Facebook CEO over data misuse — as it happened | Technology | The Guardian.” [Online]. Available: https://www.theguardian.com/technology/live/2018/apr/10/mark-zuckerberg-testimony-live-congress-facebook-cambridge-analytica. [Accessed: 12-May-2018].

[7] “Facebook Exposed 87 Million Users to Cambridge Analytica,” WIRED. [Online]. Available: https://www.wired.com/story/facebook-exposed-87-million-users-to-cambridge-analytica/. [Accessed: 12-May-2018].

[8] K. Schwab, The Fourth Industrial Revolution. Crown Business, 2017.

[9] S. M. Wyatt, “Non-users also matter: The construction of users and non-users of the Internet,” in How Users Matter: The Co-construction of Users and Technology, N. Oudshoorn and T. Pinch, Eds. Cambridge, MA: MIT Press, 2003, pp. 67–79.
