The misleading name, metaphor defiance, and awesome potential of “personal data” — part 2 of 3
In the preceding post I proposed that reframing personal data as interpersonal data is more appropriate, more useful, and more valuable. I also asserted that data is data, i.e. not like anything else. To support these points, this post explores and dismisses the dominant conceptualisation of personal data as property, and then reviews the less well-known data-as-labour framing.
The problem with the way we frame the opportunity and the problem
Data-as-property
Let’s linger a while on markets and the fundamental components of money and property. I’m working on the supposition that cryptonetworking may be integral to a future interpersonal data architecture, and given that Bitcoin is the genesis of cryptonetworking, it’s doubly instructive to reflect on money.
Money is attributed value, but value is far deeper and broader than the merely monetary sense (in the money-can't-buy-you-love sense, for example). Nevertheless, our current civilization is monotheistic in its veneration of the market, and our first inclination when grappling with any shiny new idea is to see if we can't quantify its value and subject it to the manipulations of Adam Smith's invisible hand. Despite quite substantial evidence to the contrary, our idolatry of this mechanism truly marks it out as a religion imho.
Sometimes free markets work best. Sometimes well-regulated markets work best. Sometimes, markets don’t work best.
The vast majority of our digital interactions are mediated by for-profit entities operating in markets. This means for example:
If you are not paying for it, you’re not the customer; you’re the product being sold.
This isn’t peculiar to the digital age. The earliest statement of this ilk that I can find dates back to 1973:
The product of television, commercial television, is the audience. Television delivers people to an advertiser.
Richard Serra and Carlota Fay Schoolman, Television Delivers People, 1973
Nevertheless, digital media pervade our lives in ways that make television seem simply quaint; the consequences are incomparable.
Motivated in part by the recent shift in regulatory parameters, there are more than a few parties out there looking to help you “take back control” of your data and monetise it. For example:
- Bitclave “blockchain-based decentralized marketplace”
- Databook “control and monetize your data”
- Datacoup “sell your anonymous data for real, cold hard cash”
- Datawallet “get paid when you share your data”
- Datum “monetize your digital life”
- Enigma “users can collectively and independently monetize their data”
- Hu-manity “participate in the human data marketplace”
- Madana “market for data analysis”
- Metame “enrich your life with your data”
- Opiria “earn money with your data”
- People.io “connect with the value of your data”
- PikcioChain “the monetized personal data marketplace”
- Sun “for data ownership and privacy … and monetization”
- Truth Data Cloud “Your data. Rewarded”
The rationale appears straightforward. Some high-profile companies have generated significant revenues and accrued substantial market values from developing products and services that create, harvest, and process personal data. The market has therefore awarded personal data monetary value, and if we now get the kind of control over it a regulation such as the GDPR requires, then we can own it and sell it.
Data is the new oil.
Ka-ching!
Except that oil is rivalrous (its consumption by one consumer prevents simultaneous consumption by other consumers), whereas data is naturally non-rivalrous and quite possibly anti-rivalrous in some contexts (where the value is all the greater the more it's shared around). Moreover, do you really own the light reflected from your face, or the fact that you have roots in a specific ethnic group? No. Nevertheless, you should expect such information not to be used to discriminate against you. Light reflected is one thing; its analysis and association with your identity (facial recognition) is quite another, let alone the actual application of such information.
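To make the contrast concrete, here is a minimal sketch; the goods and quantities are invented purely for illustration. Consuming a rivalrous good depletes a shared stock, while "consuming" data merely copies it.

```python
# A toy contrast between a rivalrous good (oil) and non-rivalrous data.
# The goods and quantities are invented purely for illustration.

oil_barrels = 10                     # a shared, finite stock

def burn_oil(barrels: int) -> None:
    """Consuming oil removes it from the stock available to everyone else."""
    global oil_barrels
    oil_barrels -= barrels

dataset = ["reading A", "reading B"]

def use_data() -> list:
    """'Consuming' data just copies it; the original remains fully available."""
    return list(dataset)

burn_oil(3)
copy_1, copy_2 = use_data(), use_data()
print(oil_barrels)                             # 7 -> the stock is diminished
print(len(dataset), len(copy_1), len(copy_2))  # 2 2 2 -> nothing is used up
```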
Many of these “monetize your data” projects have been inspired by Project VRM, though the urgency for a corresponding business model is not necessarily integral to the VRM (vendor relationship management) vision. VRM tools are customers’ counterpart to vendors’ customer relationship management (CRM) systems. VRM tools give customers greater control; as the sub-title of the book The Intention Economy puts it, it’s When Customers Take Charge. With such tools:
… liberated customers enjoy full agency for themselves and employ agents who respect and apply the powers that customers grant them.
Doc Searls, The Intention Economy, 2012 (check your library)
VRM calls the customer the first party. The second party is the vendor. The third party is vendor-driven, and on the vendor’s side. The fourth party is customer-driven, and on the customer’s side. And while fourth-party tools have been labelled customertech, we can generalise the role beyond that of customer to, say, citizen, or any other role the individual might occupy. In his book, Doc places significant emphasis on personal agency, albeit still in the context of markets, if only to catalyse a market response.
Agency is personal. It is the source of confidence behind all intention. By its nature, the networked marketplace welcomes full agency for customers.
Despite such clues, too many have constrained their imaginations, ignored agency, and jumped on the monetize-your-data train. I was similarly guilty when writing The Business of Influence nearly a decade ago.
Doc had previously co-authored The Cluetrain Manifesto, way back in 1999, and despite quoting it in The Intention Economy, too many, it seems, do not see beyond a binary world of big organizations and little people connected by market transactions.
For thousands of years, we knew exactly what markets were: conversations between people who sought out others who shared the same interests. … Conversation is a profound act of humanity. So once were markets.
And if that wasn’t a big enough hint, this last year:
Looking at VRM or customertech [market] opportunities through the prism of data is to borrow tarnished optics from exactly the creatures and systems [advertising tech] we don’t want in the room to start out with if we’re going to do this thing right.
Yet just when we begin to contemplate the miserable failure a market for personal data represents, it can go lower still. Where there are transactions in the world of finance, there is securitization — it is part and parcel of the financialization at Capitalism’s extreme. There is a nascent Political Action Committee (PAC) in the US dedicated to this assertion:
Every place that data is transacted, is an opportunity to index. And every index is an opportunity for securitization and risk management.
John Stuart Mill, the 19th-century British philosopher and political economist, worried that capitalism’s pressure to accumulate money could lead to a “tyranny of conformity”, a resignation that an improved system couldn’t be found. He feared the middle classes in the US and other countries had succumbed to such conformity, displaying:
… a general indifference to those kinds of knowledge and mental culture which cannot be immediately converted into pounds, shillings and pence.
And here we remain. Conforming. Compliant. Wondering if there might be a better way.
The securitization of personal data is wholly unlike any fabric of a co-operative and civilized society I recognise or wish to help build. Do we really want to reduce this fantastical digital facility to a question of data ownership and market participation? Seriously, is that the best we can do? We need to design a resistance to the siren calls of the market in favour of something more valuable.
Shouldn’t we raise questions of identity and agency and collective intelligence and ethics rather than securitization and markets? Should we not strive to offer and derive unquantifiable value in all combinations of wonderful variety and purpose rather than construct a simplistic mechanism by which someone might package up the personal data equivalent of a collateralized debt obligation?
There might well be a market for personal data, just like there is, tragically, a market for live human organs, but that does not mean that we can or should give that market the blessing of legislation. One cannot monetise and subject a fundamental right to a simple commercial transaction, even if it is the individual concerned by the data who is a party to the transaction.
…
Human rights — in stark contrast to property rights — are universal, indivisible, and inalienable. They attach to each of us individually as humans, cannot be divided into sticks in a bundle, and cannot be surrendered, transferred, or sold. … While they may be codified or legally recognized by external sources when protected through constitutional or international laws, they exist independent of such legal documents. The property law paradigm for data ownership loses sight of these intrinsic rights that may attach to our data. Just because something is property-like, does not mean that it is — or that it should be — subject to property law.
The venture capital mindset inevitably perceives a data marketplace in terms of providing economic incentive for the more efficient allocation of data. But efficiency is merely first order, transactional. Effectiveness is second order, transformational. That didn’t stop one VC firm from running the numbers though, concluding that even the mighty Facebook only manages to make your account worth about twenty bucks per annum.
How much is that six cents a day (roughly twenty dollars spread across 365 days) worth to you? Is it sufficient remuneration to allow others to continue constructing their ‘data doubles’ of you, designed to answer their questions about you more efficiently, to varying degrees of (in)accuracy, eroding your agency in the process?
No.
Interestingly, Professor Alex Pentland writes:
Social physics strongly suggests that the invisible hand [of Adam Smith] is more due to trust, cooperation and robustness properties of the person-to-person network of exchanges than it is due to any magic in the workings of the market. If we want to have a fair, stable society, we need to look to the network of exchanges between people, and not to market competition.
Pentland has also mooted the potential for Google to cleave in two, with one part dedicated to providing a regulated bank-like service for data.
Personal data isn’t money as we know it. Simplistic. Transactional. Binary. It is more akin to the rich, varied and complex information flows present in rainforests, in oceans, in human cultures. It is worth a lot more than six cents a day when markets don’t reduce it and its application to a mere transactional exchange. At the very moment we might conceive an awesome, distributed and continuous crystallization of collective knowledge, the signalling known as price entails monumental information loss and inevitable consequential inequalities.
If anything, rather than reducing personal data to mere money at such great loss, we might elevate money.
Very few people realize that the nature of money has changed profoundly over the past three centuries, or that it has become a political instrument used to centralize power, concentrate wealth, and subvert popular government. … a fundamental change in the way we mediate the exchange of goods and services [is required to] shift from elite rule based on command and control hierarchies and military force to a more inclusive, participatory, just, harmonious, and sustainable order.
The End of Money and the Future of Civilization, Thomas Greco, 2009 (check your library)
Data-as-labour
Jaron Lanier describes the thinking behind data-as-labour:
It’s magic that you can upload a phrase in Spanish into the cloud services of companies like Google or Microsoft, and a workable, if imperfect, translation to English is returned. It’s as if there’s a polyglot artificial intelligence residing up there in the great cloud server farms.
But that is not how cloud services work. Instead, a multitude of examples of translations made by real human translators are gathered over the Internet. These are correlated with the example you send for translation. It will almost always turn out that multiple previous translations by real human translators had to contend with similar passages, so a collage of those previous translations will yield a usable result.
… With each so-called automatic translation, the humans who were the sources of the data are inched away from the world of compensation and employment.
Jaron Lanier, Who Owns the Future? (check your library)
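As a rough illustration of the mechanism Lanier describes, here is a minimal sketch. The corpus, phrases and similarity scoring are entirely invented, and real services use statistical and neural models trained on vastly larger corpora, but the reliance on prior human translations is the point.

```python
# Toy illustration of the "collage of previous human translations" idea.
# The corpus and the similarity scoring here are invented for illustration.

from difflib import SequenceMatcher

# Hypothetical corpus of (Spanish, English) pairs produced by human translators.
HUMAN_TRANSLATIONS = [
    ("buenos días", "good morning"),
    ("¿cómo estás?", "how are you?"),
    ("muchas gracias", "thank you very much"),
]

def translate(spanish_phrase: str) -> str:
    """Return the English side of the most similar human-translated example."""
    def similarity(pair):
        return SequenceMatcher(None, spanish_phrase, pair[0]).ratio()
    _, best_english = max(HUMAN_TRANSLATIONS, key=similarity)
    return best_english

print(translate("buenos días a todos"))  # -> "good morning", borrowed from a human
```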
And given that labour is sold and bought in markets:
In the event that something a person says or does contributes even minutely to a database that allows, say, a machine language translation algorithm, or a market prediction algorithm, to perform a task, then a nanopayment, proportional both to the degree of contribution and the resultant value, will be due to the person. … A market economy should not just be about ‘businesses,’ but about everyone who contributes value.
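A back-of-the-envelope sketch of that proposal might look like the following; every name, rate and figure here is hypothetical, and the point is only the pro-rata split of a service's value across the people whose data fed it.

```python
# A hypothetical sketch of Lanier's nanopayment idea: pay each contributor
# in proportion both to their share of the data used and to the value the
# resulting service generates. All names, rates and figures are invented.

def nanopayments(contributions, service_value, payout_rate=0.01):
    """Split a fraction of the service's value across contributors,
    pro rata to how many of their examples were used."""
    total_examples = sum(contributions.values())
    pool = service_value * payout_rate
    return {
        person: pool * count / total_examples
        for person, count in contributions.items()
    }

# Three hypothetical translators whose examples feed a service valued at $1,000,000.
print(nanopayments({"ana": 500, "luis": 300, "mei": 200}, service_value=1_000_000))
# -> {'ana': 5000.0, 'luis': 3000.0, 'mei': 2000.0}
```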
To be clear, Lanier sees some folly here. Nevertheless, he takes it seriously enough to return to the topic in a recent jointly authored paper — Should We Treat Data as Labor? — framed in terms of popular concern for employment and income distribution in this digital age.
The paper distinguishes this data-as-labour treatment from the prevailing model, in which data is treated as capital.
This work’s value is in the questions it asks, not in its answers. Here’s what’s wrong.
Data-as-labour is effectively a rebadging of data-as-property; one owns one’s labour, after all. Labour is remunerated based on time and/or output at a market value, whereas here the value is contingent on factors way beyond any conceptualization of work that I know of. The shoehorning simply serves to connect the treatment of personal and similar data to concerns about the impact on employment of all varieties of digital technology, not least artificial intelligence. That might be honourable, but it’s also tenuous at best.
Moreover, it seems I must privatise the value of my data if others privatise the value derived from it. The authors justify this on the basis that artificial intelligence is centralized, but of course it need not be. They also assume that price remains the best way to signal and derive value in this context, yet perhaps we should test other options before reaching such a conclusion.
Finally, while the authors acknowledge an inevitable decline in value (after a while, yet another human translation adds next to nothing to the accuracy of the cloud translation service), they don’t acknowledge that DeepMind’s AlphaGo Zero has been teaching itself without human-originated data.
The question remains: is there a better way to frame “personal data”?
The third and final post explores the conceptualisations of data-as-reputation, data-as-public-good, and data-as-me, before pointing to some architectural principles for a new direction suggested in the first post of this series — interpersonal data.
This post, the second in a series of three, was first published to the AKASHA blog, and is also on my personal blog.
Image 1 by Ilya lix. Image 2 by Art by Lønfeldt.