The Deluge

kantbot
7 min read · Mar 3, 2021

Solomon saith, There is no new thing upon the earth. So that as Plato had an imagination, That all knowledge was but remembrance; so Solomon giveth his sentence, That all novelty is but oblivion. Whereby you may see, that the river of Lethe runneth as well above ground as below. Certain it is, that the matter is in a perpetual flux, and never at a stay. The great winding-sheets, that bury all things in oblivion, are but two; deluges and earthquakes.

— Francis Bacon

It’s a conspicuous feature of our time that we accept, all too readily, the resolution of life into data.

We have, in only a few decades' time, been sufficiently primed by contemporary views of science and technology to think of everything we experience as nothing more than an aggregation of raw, unfiltered corpuscles of one-dimensional, empirically quantifiable states of computer-friendly information. “Your data,” we hear people say, “is being collected,” or “is stored in the cloud,” or “is being sold to advertisers,” and despite how conscious we’ve become of the manner in which “our data” is extracted from us and exploited by every organization we transact with in the course of our day-to-day lives, we still seem, on the whole, unsure of what to even make of this phenomenon.

Naturally, the idea that some other party is taking what’s ours, and using it to their own ends without consulting or compensating us, is enough to invoke in us a feeling of having been violated. Yet at the same time, it’s difficult for us to conceptualize how much of the information collected about us could even be used to any purpose whatsoever. Culturally we draw on models of espionage and surveillance, and have seemingly few other analogous frames of reference to understand what’s happening, and accordingly we can’t help but shudder at the thought of being watched and tracked and spied upon. Still, what concrete harm is actually done to us, to either our physical being or spiritual wellbeing, remains elusive upon continued consideration of the matter, and downloading a web browser with nebulously defined privacy protection is very nearly enough to placate us entirely.

The issue is a strange conflation of legal and metaphysical concern. Our data is our property, isn’t it? We create it, so it stands to reason it should be ours. But is that really the case? Data doesn’t exist except as the product of an act of measurement, and given that, is it we who create our data, or the corporate trackers doing all the measuring? At the same time we’ve tacitly accepted this idea that who we are can be expressed as a matrix of data points representing every aspect of our self. Your life in this case is a collection of the accounts that have accepted your friend requests or followed your timeline. A folder of jpg images of your experiences. A playlist of your favorite music. The pulse rate monitored by your fitness tracker. Your date of birth. Your email. Your phone number for verification. These are the things human beings are made up of. Or so we’re allowed to think. But for my own part, I remain skeptical.

In 2018 a video was leaked of an attempt by one of Google’s internal teams to visualize the future of this quandary, to think ahead to the possible uses of data in the years to come.

The whole conceit of what I’ve been describing is summed up nicely here. Humans generate data. Users’ Google accounts aggregate that data. That data becomes more than a representation of ourselves at a given point in time; it is transformed into a simulacrum. A Doppelganger. An other Us. Here we are as a kind of angel of data in the clouds up above, bisected by the algorithm, our profile’s guts splayed open for all to see, all our constituent data dripping like so much gore from our disemboweled innards and staining the clouds upon which we float red.

This voodoo doll of us can be used by the benevolent forces behind Google to manipulate our physical and actual selves. To engineer our behavior so that we create new or recalibrated data to store in our angelic carcass up above. And so we would be sculpted in conformity to the data Google would like us to generate, our lives being subtly altered in the process, bit by bit, to nudge us towards the finishing line of perfection.

There’s something about such grandiose visions that never leaves one feeling quite right. Considered from another angle, is the real issue facing us the clockwork control of our data, and therefore control over who we are at the most fundamental level of our soul? Or is it not rather that we generate so much data today that it’s becoming increasingly impractical for anyone to control it all?

It is believed that over 2.5 quintillion bytes (2.5 e+9 GB) of the data is created every day, and this number is increasing

I asked Google to put a number to the amount of information being continuously created every day, and this is what it reported back; what this number even really represents is a matter open to speculation. The dominant paradigm today is that of Data Agnosticism, where every byte is interchangeable with every other byte, and where it’s only in aggregate that the information can be used for any purpose whatsoever. Of those 2.5 quintillion bytes, what percentage is organic? How much is just noise, or the product of automated processes creating and exchanging keys and tokens? Is the information the pharmacy collects about my purchases of dental hygiene products routed back through the system? Is there a Google algorithm somewhere calculating how much floss I ought to be using, and how often I should be needing to buy more? Has Google signed strategic contracts with all of the major pharmacies to develop a complete profile on shoppers and ensure a floss purchase at CVS isn’t missed when one is expected at Rite Aid instead?
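For scale, the arithmetic behind the quoted figure is easy enough to sketch. A minimal back-of-envelope conversion, assuming the usual decimal prefixes (1 GB = 10^9 bytes, 1 EB = 10^18 bytes), and taking the "2.5 quintillion bytes per day" claim at face value:

```python
# Illustrative unit conversion only; the underlying figure is the one quoted above.
bytes_per_day = 2.5e18            # 2.5 quintillion bytes per day

gigabytes_per_day = bytes_per_day / 1e9   # 1 GB = 10^9 bytes
exabytes_per_day = bytes_per_day / 1e18   # 1 EB = 10^18 bytes

print(f"{gigabytes_per_day:.1e} GB per day")  # -> 2.5e+09 GB per day
print(f"{exabytes_per_day:.1f} EB per day")   # -> 2.5 EB per day
```

That is, the "2.5 e+9 GB" in the snippet is simply 2.5 exabytes a day, whatever fraction of it turns out to be organic.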

The data being collected is selected indiscriminately, for a disjointed complex of oftentimes very mundane reasons, by different economic agents observing us. Does this information add up to a person? The artificially intelligent, deep-learning neural nets deployed by big tech lack the capacity for hermeneutical discipline. Such programs do not actively evaluate you, in the sense of attributing the outward gesticulations of your limbs and the contents of your credit card bill to a self-sufficient consciousness in which your existence is actually rooted; the negativity of your interiority, its function as the space into which thought is projected, is something machine interpretation can never grasp.

On account of this limitation, no observational machine can ever be said to understand anything in conceptual terms. Instead a course of infinite externalization must be pursued, to whatever lengths that might end up taking you. Here a human being is approached like some non-linear natural phenomenon, like some yet unmodeled pulsar, whose seemingly irregular eruptions of action can, by means of enough observation, be reduced to a descriptive function of a roller coaster curve through Cartesian space.

That adjustments here and there to the magnitude a particular variable assumes at a specific point in this self-modulating sequence of meaningless natural events can, by working backwards, effect change in the individual being modeled is something that simply doesn’t hold up to continued scrutiny. There is no physical mechanism capable of exerting the will of the model in reverse, back onto the modeled. And if an individual is so changed, it’s safe to assume it isn’t the end result of some supersensible force manifesting change within reality in a manner transcending Newtonian mechanics. No, as has always been the case with hypnotism and advertising, such a transformation could only be brought about through the willingness of the subject to be thusly transformed.

Google and other tech firms exert control not actively; rather, they require us to conform to the parameters of their models by choice, whether conscious or unconscious. In practice this means behaving in such a way as to produce the data points they expect, or wish to see. For us to produce data we must necessarily confine ourselves to those spheres of activity in which Google actively measures behavior. To measure our behavior Google must have some instrument deployed within that sphere through which to interface with us and our actions. These, it should go without saying at this point, are the company’s smorgasbord of apps and services, whose UI/UX Google also designs, and which we’re required to use if we want to benefit from the service on offer.

Thus, we must act as they have made accommodation for us to act, in a manner which forces us to produce the species and volume of data they desire. The data is, perhaps more than anything else, simply a byproduct of a hermetically sealed loop of user and service and provider, acting only as the feedback mechanism each cycle of that loop triggers as a signal of another round of cybernetic copulation between parasite and host. These however many quintillion bytes of data don’t record the wisdom of the ages, but are produced for the sake of being produced, as the cybernetic spawn of an unholy union, multiplying into infinity, piling into cascading mountains of meaningless figures whose half-life of usefulness requires an atomic clock to accurately record.

Here then we come to the true problematic of modern big data society: not the threat of control, but the loss of it. Not about actors having too much information, but about what relatively little information they require being buried under heaps of infotrash. The problem today is one of waste management. Of recycling. Of toxic information disposal. Of quarantine. Of invasive information. Of epistemic domain collapse. The ecology of knowledge. The destruction of the arboreal architecture of schematized learning.

The pressing need of projects today is not to further decentralize information production, to facilitate continued divergence and multiplication, but to inculcate coherence and restore the subjective unity required to structure the information we collect, so as to render it useful and actionable for those who use it.
