Digital Dust

Jay Owens
Jun 24, 2018

I write a newsletter about dust called Disturbances. Dust might appear to be the smallest and least important thing in the world. But it turns out to be vast and to contain multitudes. This is essay #16.

Dust would seem to be the most material of things, sometimes the most ultimately material: it’s what is left of an object when all form, structure, context and legibility are stripped away — when the object is destroyed, and only the fact of its materiality remains.

Dust would seem, therefore, to be the antithesis of the digital, the opposite of its binary 0s and 1s. Digital means data, virtual and immaterial; it’s black and white, crisply demarcated, perfectly defined. Dust is grey, and deeply, existentially fuzzy.

So obviously I’m going to argue that the digital is dusty as hell.

This argument has three parts: first, the desire for the digital to become dust in the form of sensor devices shrinking to a tiny, dusty scale. A fantasy: of the digital eating the world — and the body — and becoming an omnipresent and seamless part of the environment, as diffuse as dust motes carried in the air.

Second, the problem of the digital gathering dust, and aging and disintegrating over time. The fact that decay and lossiness are as integral to the digital archive as the traditional analogue, paper one — and in fact happen rather faster. Perhaps it’s a “becoming dust” of another sort, then: a becoming formless; a loss of information.

Part three asks what it means to store and archive information, anyway.

En route we visit Bruce Sterling, Derrida and my teenage Open Diary. Let’s go.

Part 1. Becoming Dust: the fantasy of digital sensors everywhere

1.1 Smart Dust

‘Smart Dust’ was named in 1997.

“It was kind of a joke,” said inventor Kris Pister in an interview last year. “Everything in the US and LA at that time seemed to be “smart”: smart bombs, smart houses, smart roads.”

But his rationale was sound. “As a grad student at Berkeley in 1992 it was clear to me that Moore’s Law, the communication revolution, and MEMS technology [microelectromechanical systems] were all driving in the same direction: the size, power, and cost of a wireless sensor node were riding exponential curves down to zero,” he said.

Pister’s research group at the University of California, Berkeley envisioned a programmable electronic sensor, able to communicate with the outside world without wires, at the scale of a single cubic millimeter. The sensor might collect light data, or temperature, vibration, sound, magnetism or wind shear. It would be discreet in size, low in cost, and reporting back regularly to its control system.


Smart dust would allow the world to be measured — and thus known — as never before. As such, it would change it.

Pister outlined a vision of a world transformed by dust:

“In 2010 everything you own that is worth more than a few dollars will
know that it’s yours, and you’ll be able to find it whenever you want
it. Stealing cars, furniture, stereos, or other valuables will be
unusual, because any of your valuables that leave your house will check in
on their way out the door, and scream like a troll’s magic purse if
removed without permission (they may scream at 2.4 GHz rather than
in audio).

In 2010 a speck of dust on each of your fingernails will continuously
transmit fingertip motion to your computer. Your computer will understand
when you type, point, click, gesture, sculpt, or play air guitar. […]

In 2020 there will be no unanticipated illness. Chronic sensor implants
will monitor all of the major circulatory systems in the human body,
and provide you with early warning of an impending flu, or save your
life by catching cancer early enough that it can be completely removed surgically.

In 2010 MEMS sensors will be everywhere, and sensing virtually everything.
Scavenging power from sunlight, vibration, thermal gradients, and background
RF, sensor motes will be immortal, completely self contained, single chip
computers with sensing, communication, and power supply built in.
Entirely solid state, and with no natural decay processes, they may
well survive the human race. Descendants of dolphins may mine them
from arctic ice and marvel at the extinct technology.”

Source: Kris Pister.

Descendants of dolphins… Damn.

I’ve written before about how the Greenland ice sheet contains a chronological record of industrial capitalism, recorded in the particulate pollution from furnaces and factories that was locked in each season’s layer of snow. Now we might have to consider the possibility that it’ll contain the smart dust of the information age, too.

One small problem: 2010 was eight years ago and none of that’s happened yet.

Interest in the term “smart dust” has declined every year since 2004. The concept has flitted in and out of the Gartner Hype Cycle for Emerging Technologies — first appearing in 2003, then 2013, and 2016 — always in the most speculative category: an “innovation trigger”, more than 10 years out from mainstream.

Gartner Hype Cycle for Emerging Technologies, 2016

So why isn’t smart dust pervasive?

The point of the Berkeley team’s work was actually always to explore the limitations of microfabrication technology: how small would they be able to make these motes? They’ve made exponential progress in the last fifteen years, making the hardware one to two orders of magnitude more power-efficient, and reducing the power required for the radio’s active period by two to three orders of magnitude as well. To avoid smart dust motes becoming just a bunch of tiny little dead batteries, they’re now typically powered with solar cells, so they can live off the environment more or less indefinitely.

The biggest challenge is that communication is expensive: transmitting one bit of information to the outside world ‘costs’ the same amount of energy as 100,000 CPU operations, says Prabal Dutta, a researcher at the University of Michigan in Ann Arbor. Keeping a smart dust mote constantly ‘awake’, monitoring and transmitting, requires a larger solar cell, and so it isn’t very micro; a smart dust mote that wakes up and samples only occasionally, however, isn’t very smart.

One smart thing researchers have done is put some deep learning on board. The “Michigan micromote” team found a way to reduce the power consumption of a deep-learning processor down from 50 milliwatts to only 288 microwatts, by redesigning the chip architecture to minimize data movement. This enables the smart mote to do more data processing on board — e.g. to determine if the image it’s seeing is actually a burglar or just the cat — and so waste less energy sending footage to the cloud for analysis.
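The arithmetic behind that trade-off can be sketched in a few lines. The only number taken from the text is the 1-bit-transmitted ≈ 100,000-CPU-ops energy ratio; all the absolute figures (picojoules per operation, samples per day, image sizes) are illustrative assumptions of mine, not measurements from the Michigan team.

```python
# Toy energy budget for a sensor mote, using the ratio quoted above:
# transmitting one bit costs roughly as much energy as 100,000 CPU ops.
# Absolute numbers below are illustrative assumptions, not measured values.

CPU_OP_ENERGY_J = 1e-12                    # assume ~1 picojoule per CPU operation
TX_BIT_ENERGY_J = 1e5 * CPU_OP_ENERGY_J    # 100,000x, per the ratio above

def daily_energy(samples_per_day, ops_per_sample, bits_sent_per_sample):
    """Energy in joules/day, split between on-mote compute and radio."""
    compute = samples_per_day * ops_per_sample * CPU_OP_ENERGY_J
    radio = samples_per_day * bits_sent_per_sample * TX_BIT_ENERGY_J
    return compute, radio

# Naive mote: stream a raw 10 kB (80,000-bit) image to the cloud each sample.
compute_raw, radio_raw = daily_energy(100, 1_000_000, 80_000)

# "Smart" mote: spend 10x more compute classifying on board
# (burglar or cat?), then transmit only a single-bit verdict.
compute_ml, radio_ml = daily_energy(100, 10_000_000, 1)

# The radio bill of the streaming mote dwarfs the smart mote's total budget.
print(radio_raw / (compute_ml + radio_ml))
```

Under these assumed numbers the streaming mote spends hundreds of times more energy than the on-board-inference mote in total, which is why minimising data movement buys so much more than minimising computation.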

Nonetheless the power costs of communication mean that motes remain closely tethered to their data receivers, though Michigan have brought that range up from 50cm to 20m. This leaves most of DARPA’s military fantasies for the technology some way away.

And, most crucially, smart dust remains about a cubic millimetre in size — which is to say, it’s not dust. In fact, it’s still a thousand times too big. (Dust, being made of many things, comes in many sizes — but 1 to 100 micrometres, i.e. thousandths of a millimetre, is a workable rule of thumb.)

The Michigan Micro Mote, hanging out on the edge of a nickel. (Source: the team’s Flickr)

Wireless connectivity expert Nick Hunn says, “I’m happy to be proven wrong, but 15 years on, I don’t see it [Smart Dust] taking over the world.”

Nonetheless, another team at Berkeley is trying to get smart dust inside the brain.

1.2 Neural Dust

The transhumanists fantasise about becoming beings of pure data, untethered from the vulgar mortality of the body.

But translating the mind into digital data is hard. We’ve known for a century that the brain is powered by electricity, ever since the German scientist Hans Berger placed electrodes on a person’s scalp and recorded the first electroencephalogram (EEG). (Nobody believed his findings until another scientist replicated them ten years later; it did not help that Berger had no training in mechanics or electricity, and thought he was measuring ‘psychic energy’.)

But turning neural activity into machine-readable information remains full of trade-offs. EEGs are low-resolution and noisy, and can only see the ‘surface’ of the brain; meanwhile, the subject must wear electrodes & have their head covered in goo. Functional magnetic resonance imaging (fMRI) and other scanning technologies require one-ton magnets & shielded rooms, limiting casual use. And invasive devices within the skull may produce the highest-quality signals, but at the cost of scar-tissue build-up and risk of infection. The lifespan of such a device ranges from a few months up to a couple of years: not long for something requiring dangerously invasive surgery to implant. ‘Deep brain stimulation’ devices are already being used to help assuage the symptoms of epilepsy and Parkinson’s, but neurosurgeons are unwilling to countenance cutting open a healthy brain.

So a proper neural computing interface remains a bit science fiction. Needless to say, it’s one of the things Elon Musk is working on, with his company Neuralink reportedly developing ‘neural lace’: a mesh of tiny electrodes injected into the grey matter, providing a direct cortical interface — helping Elon fend off the AI takeover he fears by merging humans with the machines.


‘Neural dust’ is like ‘neural lace’ but without the mesh: just sensors, scattered inside the skull.

In a speculative 2013 paper, a research team led by UC Berkeley computer engineer Michel Maharbiz proposed ‘neural dust’ as a solution for “chronic brain-machine interfaces”, viable for a lifetime. The system would require: “1) thousands of 10–100 micrometer scale, free-floating, independent sensor nodes, or neural dust, that detect and report local extracellular electrophysiological data, and 2) a sub-cranial interrogator that establishes power and communication links with the neural dust.”

With implants as small as 50 microns, neural dust would be operating on a scale comparable to neurons. Thing is, we know smart dust hasn’t got that small yet. It’s stuck around the millimetre scale, and neural dust is too.

In 2016, Maharbiz’s team implanted a test dust mote 0.8 x 1 x 3 mm in size, about the dimensions of a grain of rice, inside an anaesthetised rat. Electrical stimulation was applied to the rat’s foot, in the hope that the mote could measure neural activity in the rat’s sciatic nerve and leg muscle in reaction. It worked.

Here is the sensor, hanging out inside a rat (Photo: Ryan Neely)

The innovation over existing smart dust is to use ultrasound for both power and communication. A ‘chronic’, lifelong implant needs an undying power source, but the solar cells used in outdoor smart dust are hardly viable here, and nuclear isotopes inside the brain might also be unwise. Instead, a piezoelectric crystal converts high-frequency ultrasound into electricity, which powers the transistor that senses the electrical activity in the nerve. Ultrasound is also how information is exported outside the brain: the transducer alternates between sending ultrasound pulses to power the mote and listening for the echo as those pulses bounce back. (See Eliza Strickland at IEEE for the full description.)

There are still problems. Size, obviously: motes are at present fifty times bigger than the 50-micron scale needed to fit inside the brain and central nervous system. Immune rejection and infection. The fact that ultrasound doesn’t pass through the skull very well, making neural dust difficult to use for treating epilepsy. But the future of neural dust may not, in fact, be neural:

“I think the long-term prospects for neural dust are not only within nerves and the brain, but much broader,“ said Michel Maharbiz. “Having access to in-body telemetry has never been possible because there has been no way to put something supertiny superdeep. But now I can take a speck of nothing and park it next to a nerve or organ, your GI tract or a muscle, and read out the data.“

Dust may never be implanted in healthy brains — the risks of this none-more-invasive procedure are too high. David Eagleman, a neuroscientist at Stanford University, suggests it’s much more likely that methods for seeing and changing the brain from the outside will improve instead, whether through fMRI, transcranial magnetic stimulation, or genetic modification of neurons. Meanwhile, at the 1mm scale, dust motes can already speak to the peripheral nervous system, for use cases such as bladder control or appetite suppression. Maharbiz sees them implanted throughout the body, reporting on cancerous tumours and the impact of therapies.

In 2016, Scientific American called neural dust “a Fitbit for the nervous system”. Recent research suggests, however, that Fitbits and other activity-tracking devices don’t necessarily work as expected. A continuous stream of body data proves to be unexpectedly poor motivation for behaviour change. And people who’ve consciously bought the device for a purpose nonetheless often stop using it inside six months, in what is perhaps some kind of resistance to the discipline the device seeks to impose on them. Bariatric surgery can also produce resistance if underlying causes of emotional eating aren’t treated first: patients find their addictions change form, or their eating disorders continue and mutate. Again, behaviour change proves to be more complex than foreseen.

If monitoring gets more tiny and neural-dusty, what new intimate self-sabotages will be fostered? I have a vision of us as cyborgs at war with ourselves.

Part 2. Gathering dust: data, media and loss

“The truth is that computation has, from the very start, been built to rot.”

— Bruce Sterling

Bruce Sterling was on it in 2004:

“”Bits”, digital ones and zeros, are not numbers or Platonic abstractions. They are physically real and subject to entropy, just like leaky plumbing. Bits are electrons moving through circuits, or photons in a fibre-optic pipe. Bits are laser burn marks in plastic, or iron filings stuck together with tape. Those are the weird stopgaps that we are using for heritage.

The digital computer is about as old as I am yet it does not have, and has never had, any archival medium.”

— ‘Delete Our Cultural Heritage’, Bruce Sterling, The Telegraph, 12 June 2004

Every digital storage medium decays. Solid-state drives & flash memory see electrical charges leak away due to imperfect insulation. Hard drives & floppy disks are magnetic media, and decay as bits lose their magnetic orientation. CDs & DVDs are made of polycarbonate plastic with thin coatings of aluminium and acrylic; the coatings may break down, the aluminium oxidise. Old-school paper media, punchcards & punch tape, may quite literally rot, or simply get eaten by things.

‘Synthesising Obama’. Artist & designer Tobias Revell laser-etched the Python code & supporting libraries required to synthesise fake video of the former US president, in 5pt type on 48 sheets of 20×30cm black acrylic, so that it can be stored — he hopes — for 40,000 years.

Older media storage technologies turn out to be less fragile: printing out documents on high-quality acid-free paper and using the right inks will almost certainly enable them to last longer than any digital format. ‘Permanent paper’ (ISO 9706) is expected to last several hundred years under library or archive storage conditions; ‘archival paper’ (ISO 11108), the highest grade — made out of pure cotton or linen cellulose, with none of the acidic lignin of wood pulp — may store information for a millennium.

“Paper as the medium for the world’s memory has one great advantage,” Stanford digital preservationist David S.H. Rosenthal wrote in 2012. “[I]t survives benign neglect well. Bits, on the other hand, need continual care, and thus a continual flow of money.”
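What Rosenthal’s “continual care” looks like in practice is fixity checking: recording a checksum for each file when it enters the archive, then periodically recomputing and comparing. A minimal sketch, with function names and the manifest format my own invention rather than any particular archive’s tooling:

```python
# Fixity checking: detect bit rot by comparing a file's current checksum
# against the one recorded when it was ingested into the archive.

import hashlib
from pathlib import Path

def sha256(path):
    """Stream a file through SHA-256 without loading it all into memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def make_manifest(folder):
    """Record the current checksum of every file under `folder`."""
    return {str(p): sha256(p) for p in Path(folder).rglob("*") if p.is_file()}

def audit(folder, manifest):
    """Return the files whose bits no longer match their recorded checksums."""
    return [path for path, digest in manifest.items()
            if not Path(path).exists() or sha256(path) != digest]
```

The point of the exercise is Rosenthal’s: the manifest itself must be stored somewhere, the audit must actually be run on a schedule, and any failure must trigger a restore from a replica — all of which costs a continual flow of money in a way that acid-free paper on a shelf does not.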

In 2017, the UK Parliament voted to stop printing new laws on calfskin vellum, ending five hundred years of tradition.

But there’s another kind of decay that’s purely informational, too.

A few weeks ago, a link was doing the rounds on Twitter, enabling you to see your timeline back as it would have looked in 2008. This produced much exclaiming at the strange naive flatness of it all: because back then everything wasn’t always hyperreferential and simultaneously a meme, an in-joke and a subtweet. Instead, there was quite a lot of actually telling people what you were doing right now.

But the other thing that stood out to me was that all the links looked different:

See your olde Twitter timeline here.

At first glance I thought all these links were dead: today’s Twitter wasn’t automatically hyperlinking them, and I knew URL shorteners were a fragile technology.

In 2009, Joshua Schachter (founder of del.icio.us, the bookmarking site) blogged about how shorteners were bad for the ecosystem of the web as a whole, adding a layer of indirection, slowing down sites, and creating opportunities for spam and phishing attacks. A 2015 report from security service Cloudmark found that 97% of links led to “malicious websites”. And the shortening services in use kept changing: until 2009, Twitter used TinyURL to shorten hyperlinks over 26 characters, when it switched to bit.ly, then switched again in June 2011 to its own t.co domain.

Yet surprisingly, many of these URL-shortening services have held up — something commentators didn’t necessarily expect at the time, as the business model seemed obscure and sites like tr.im (temporarily) shuttered. But Bitly found a way to sell content tracking and marketing measurement to the tune of $12m annual revenue, ow.ly is backed by Hootsuite, and TinyURL staggers on.

So these micro-links haven’t turned to dust quite yet — surprisingly enough. But give them another decade and will any of them still work? Will the Internet Archive go to the efforts of collecting and preserving them, perhaps? Or will they rot — and the Twitter archive be rendered obscure again, as all the links, already un-clickable, become just another bunch of nonsense syllables?

Links rot in other ways too — in the process of looking back at 2008 Twitter, I see a tweet from decade-ago-Scott Martin, thinking about digital obsolescence too:

Scott’s URL-shortened link redirects to doi.org, website of the Digital Object Identifier Foundation, dedicated to providing “technical and social infrastructure for the registration and use of persistent interoperable identifiers, called DOIs, for use on digital networks” …which gives a page-not-found error, nonetheless.

Websites reorganise their content and don’t redirect; websites die. One recent analysis of 200 top marketing websites suggests the average business website’s lifespan is 2 years and 7 months — meanwhile, individual pages live only a matter of days, though no-one can agree how many (and no-one seems to have studied it in a while).

44 days, said Brewster Kahle in a 1997 special report in Scientific American, announcing the launch of the Internet Archive.

75 days, said another study from a team at Alexa Internet (owned by Amazon) — Lawrence et al. (2001), in Computer, 34(2):26–31 — but I ought to use the DOI link instead, because that’s meant never to change.

100 days, said Rick Weiss in The Washington Post in 2003 — though the article, ‘On the Web, Research Work Proves Ephemeral’, is itself no longer available. So I hear from other sources, anyway; for me that Washington Post link redirects to a subscription paywall and a GDPR consent form. Which is to say, business models render links inaccessible, too.

Websites don’t just close, or get abandoned — they’re also deliberately killed.

In April 2014, Buzzfeed attempted to quietly take down nearly 5,000 old posts which it decided no longer met its editorial standards. The Gothamist and DNAInfo sites both went (temporarily) offline when they were shut by owner Joe Ricketts after workforces voted to unionise — panicking journalists, who thought they’d lost years of work. And there were fears that the same would happen to Gawker, after Peter Thiel — who had sued the site into the ground — submitted a bid to buy it in January 2018.

The Writers Guild of America helped lobby to keep the Gothamist sites up; the Freedom of the Press Foundation launched an online news archive to help preserve online content “we deem to be especially vulnerable to the ‘billionaire problem’” — that is, wealthy buyers manipulating or deleting the database. Nonetheless, these incidents demonstrate how easy it is for gaps to be created in the digital public record (as news journalism is meant to be) — gaps that should be suspicious, like too-clean spaces on a dusty shelf marking where something precious has been taken away. But it’s hard to see an absence. And more will no doubt be made.

Intuitively, though, I think we do expect smaller, more personal media outside the public sphere to be lost to time — or, more precisely, to achieve a kind of privacy through obscurity. We change email addresses, stop using social networks, or don’t scroll that many pages back in the search bar. And so the past is sloughed off, gently, quietly, out of sight and out of mind — to us, at least, though not to the platforms that host this data. So when automated recommendation systems bring up old posts, as on those occasions when Facebook’s ‘On This Day’ reaches ten years or more back into the past, it can be jarring, as if something has risen up that shouldn’t.

I got a couple of emails in the last month or two that had this air of the digital uncanny:

10 March: “Your LiveJournal account was deleted”
“We noticed that your journal XXXX has less than three entries and hasn’t
been logged into for over two years. LiveJournal is deleting inactive empty
accounts. Pursuant to our housekeeping policy, your LiveJournal account
XXXX was recently deleted and will be purged in 3 days.”

13 April: “Open Diary is back”
“Open Diary has been re-launched! We have had over 5,000 ex-members re-join us in the last few weeks — if you’re not one of them, we’d love to have you back! Your diary is still in the system, but cannot be accessed or seen by anybody unless you reclaim it”

I believed I’d deleted all these accounts, and even my old usernames had slipped quietly from the fringes of my remembering. Perhaps a dozen people remember me from back then. I take particular care never to mention my old handles on my current social media accounts — I have not always been @hautepop — as a deliberate defence against future doxxings or harassment. Nonetheless, it seemed a few fragments had persisted after all. With these technologies I was a social media early adopter (the only time I have ever been), and so I suppose I have been dealing longer than most people with the question of what to do with these records.

What do you do with your teenage diaries, with these traces of writing oneself into being? I mostly want to nuke them from orbit, of course — the horror of these pages is inimitable, a cocktail of equal parts discomfort at how awful I felt then and (oh, ego blow!) the fact that the writing is not good. As an adult, I wonder whether to preserve these pages precisely because it’s usefully humbling; I might even cultivate compassion for my dreadful teenage self. But mostly I have chosen to delete. I would rather write myself anew.

Yet I’m also reminded of another reaction I had at the time, or shortly after — age 18 or age 21, I’m not sure. That reaction was to archive.

Amid the archetypal teenage woes of ambition and future-anxiety and sexuality and friendship (Christ, so little changes!), I had a sense that one particular story I’d lived was a little stranger and more interesting than that. It seemed worth saving. I felt that one day I would want to write about it.

Even then, in my early-mid teens in the late Nineties, back when the web was new, I had a fascination with the pre-World Wide Web internet that had just gone before me: The WELL, the Jargon File, the Principia Discordia. MIT computer lab pranks. The first flame wars, the first trolls, the alt.gothic Special Forces as the first community managers. The myths and morality tales of this new space, the stories (‘A Rape in Cyberspace’; ‘The Cybergypsies’, Indra Sinha’s autobiography of years lost to LambdaMOO).

The timbre of this obsession was anthropological and genealogical: here was a new kind of culture in the making; here were the roots and origins of the web I was seeing growing around me daily. It was in this way that the story I wanted to save seemed to have an interest beyond myself and into the future, too. It seemed to capture something about the strange new ways of relating these social technologies enabled, the anonymous intimacies these diary formats afforded, the fragile lines between truth, self-creation, and fiction. (I published an essay I called “Post-Authenticity” a couple of months ago so I clearly haven’t gotten over these themes yet.)

So yes, age 18 or so, I used a tool called LJ Archive to download everything in an act of preservation for the future. I saved IM conversations, I saved emails. I don’t know whether the files are on my harddrive, but I think I’ve got them on Dropbox. Yet the LJ Archive utility stopped being updated in 2013, and it seems it might not run on Windows 10. Digital decay.

I need to write something with this material fairly soon, I guess, otherwise it’s just dust.

Part 3. The digital archive is just as dusty as the old one

3.1 Archive fever

“The archive always works, and a priori, against itself.”

— Jacques Derrida

I have, so far, been talking uncritically about the notion of ‘the archive’. But it’s an interesting space, and one worth exploring a little further.

I want to enter it through the French philosopher Jacques Derrida, because the line of argument that I began this newsletter with — this inherent entanglement of the black-and-white, 0s-and-1s absoluteness of the digital, and the grey, fuzzy decayingness of dust — is surely a post-structuralist one. That is, rather than claiming the two concepts relate to each other only in a binary opposition, I’m arguing that, under scrutiny, the digital collapses into its opposite, its apparent logic an artifice, “contradictory, incoherent, a ‘mythology of presence’” (Gregory Castle, 2007). This is, crudely, a deconstructionist thing to do, this thing we might call ‘reading a concept against itself’, and Derrida is the architect of this mode of thought.

Besides, he wrote about archives: a lecture turned essay, ‘Archive Fever’, 1994 (PDF). An essay which is nominally about media — “a major statement on the pervasive impact of electronic media,” according to the cover blurb, in fact — making it fitting for this particular argument. And yet more fittingly, someone else — historian Carolyn Steedman — has written about Derrida’s ‘Archive Fever’ in a book called Dust: The Archive and Cultural History (2001). So I think there’s something to be found, somewhere in here — or less “found” than “gestured at” through this scattershot reading of an oblique analysis of an oblique text that’s not really about media at all, so much as an essay about one Sephardic Jewish historian’s reading of Freud… It might help square this circle between the ever-increasing power of digital measurement and knowing, and the dusty inevitability of loss and forgetting.

Also fuck it: this is an essay about dust. It’s monomaniacal, sure — but not exactly constrained by any requirement to be direct.

So. Had there been “MCI or ATT telephonic credit cards, portable tape recorders, computers, printers, faxes, televisions, teleconferences, and above all E-mail,” says Derrida, the entire history of psychoanalysis would have been different. It would not only have changed how the history of psychoanalysis was preserved and understood, but the nature and meaning of the field itself, down to the moment of encounter on the analyst’s couch and perhaps even the nature of the subject who enters, lies down and speaks. Psychoanalysis itself is mostly a matter of remembering and communicating, after all (and the failures and gaps that ensue). How could media with those same functions, memory and communication, not affect the discipline in some way?

“To put it more trivially: what is no longer archived in the same way is no longer lived in the same way. Archivable meaning is also and in advance codetermined by the structure that archives.”
- Derrida 1995, p. 18

A view of part of Jacques Derrida’s library in his home in Ris Orangis. Photo: Andrew Bush, 2001, via Princeton University Library

In particular, Derrida claimed, “electronic mail today, and even more than the fax, is on the way to transforming the entire public and private space of humanity, and first of all the limit between the private, the secret, and the public or phenomenal.” The word ‘archive’ originally comes from the Greek arkheion: the private house of the magistrates (archons) who commanded, where official documents were stored in reflection of the archons’ power to make and represent the law. But media technologies change the social structure of the archive — they allow individuals to create archives themselves and control information, with implications, Derrida believes, for what is public & what is private, what is secret & what is not; who has rights over access, publication & reproduction; property rights; what belongs to the realm of the family and what is the state’s. Questions of law, of where the boundaries of the inviolable are drawn.

Derrida doesn’t continue to write about email, though, but rather looks back to the proto-media of the ‘mystic writing pad’ referred to by Freud: “‘a slab of dark brown resin or wax with a paper edging’ over which ‘is laid a thin transparent sheet’. …The mystic pad caught Freud’s attention because it combined the permanence of ink on paper with the transience of chalk on slate, and so enabled both the recording and rewriting of data.” (Mambrol 2018). Used to record thought, “it prepares the idea of a psychic archive distinct from spontaneous memory”; that is, it “integrates the necessity, inside the psyche itself, of a certain outside” (Derrida).


“if there is no archive without consignation in an external place which assures the possibility of memorization, of repetition, of reproduction, or of reimpression, then we must also remember that repetition itself, the logic of repetition, indeed the repetition compulsion, remains, according to Freud, indissociable from the death drive. And thus from destruction. Consequence: right on what permits and conditions archivization, we will never find anything other than what exposes to destruction, in truth what menaces with destruction introducing, a priori, forgetfulness and the archiviolithic into the heart of the monument. Into the “by heart” itself. The archive always works, and a priori, against itself.”
— Derrida 1995, p.14

[The death drive: “an urge in organic life to restore an earlier state of things”, which leads people to repeat traumatic events, despite this being against usual instincts of pleasure & self-preservation. Here I am considerably less interested in whether this idea is psychically correct or not — or even if I understand it wholly, frankly — than whether it is interesting to think with. Roll with it for a moment, yes? ‘Archiviolithic’: archive-violating, i.e. destructive.]

A crude translation or two, if the scholars will forgive me (or just tune out their ears for a paragraph or so): perhaps the urge to archive, to store and repeat the past, comes with an edge of morbidity — because to endlessly repeat the past is not to live and be present at all. Or perhaps the urge to archive comes from the terror of forgetting, and in that way the archive is haunted by the loss and decay that it is defined by trying to escape. (“There would indeed be no archive desire without the radical finitude, without the possibility of a forgetfulness”.)

This contradictory desire Derrida terms ‘archive fever’ or ‘mal d’archive’. It is a kind of horror. Conservation is, after all, spatio-temporally limited to that which is stored. The destruction drive, though, “is in-finite, it sweeps away the logic of finitude and the simple factual limits.” Consequently, “Such an abuse opens the ethico-political dimension of the problem. There is not one mal d’archive, one limit or one suffering of memory among others: enlisting the in-finite, archive fever verges on radical Evil.”

It’s a good thing we’re thinking about something so vast with dust, then, a material which is literally from the dawn of time.

3.2 Big data: the dream of the end of forgetting

Derrida wrote thirty years after Moore’s Law — the observation that the number of transistors on a chip, and with it computing power, doubles roughly every two years — but I don’t believe he mentioned it. Besides, what’s of more interest here is the analogous improvement in computer memory. A 1997 special report in Scientific American announcing the launch of the Internet Archive noted how “a snapshot of all parts of the Web freely and technically accessible to us […] will measure perhaps as much as two trillion bytes (two terabytes) of data” — that is, the entire 1997 web could be saved on a £50 external hard drive today.
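To put rough numbers on that collapse in storage costs (a back-of-the-envelope sketch of my own, assuming capacity per pound really did double every two years, which is only approximately true):

```python
# Back-of-the-envelope: if storage capacity per pound doubles roughly
# every two years, how much more storage does the same money buy
# between the 1997 Scientific American snapshot and this essay (2018)?
years = 2018 - 1997          # from the Internet Archive report to this essay
doublings = years / 2        # one doubling every two years (assumed)
growth = 2 ** doublings
print(f"~{doublings:.1f} doublings, ~{growth:,.0f}x more storage per pound")
```

Ten and a half doublings is roughly a 1,400-fold increase — enough to turn a heroic institutional archiving effort into a consumer purchase.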

In the last twenty years, a new discourse of infinite remembering has grown in seeming opposition to the death drive: Big Data. An ideology in which the sheer size of the dataset is seen as determining its value and potential insight. Data is framed as “the new oil”: a new natural resource (such mystification of the conditions of its production!) that will fuel a ‘fourth industrial revolution’ blurring the lines between physical, digital, and biological spheres. As such, rational economic actors believe they must seek to capture as much of it as possible. Businesses find themselves with vast, unstructured ‘data lakes’ which are murky and fairly unreadable; not necessarily of value now, but stored in expectation of future divination by that second horseman of the technological hype cycle, AI and machine learning.

The development of vaster and humanly-unreadable quantities of data has been paralleled by the development of a new sort of reader: machine learning algorithms.

Yet one of the biggest technical hurdles in machine learning is, strange though it sounds, a lack of data — specifically a lack of coded, tagged, high-quality data to train analytics models. Machine learning algorithms and neural networks operate by iteratively building a model of the underlying distribution of the dataset they are trained on, which is then used to make predictions. The challenge is that any training dataset, no matter its size, will barely scratch the surface of the total population it is meant to represent — while the algorithm also needs to optimise a complex, multi-dimensional array of variables. Hundreds of thousands or millions of pieces of training data are therefore required to attain statistical validity.
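The statistical point can be seen in miniature (a toy sketch of my own, not from any of the research discussed): even the simplest possible “model” — estimating a population mean from a sample — misrepresents its population in proportion to how small the sample is, and the error only shrinks with the square root of the data.

```python
import random
import statistics

random.seed(42)

TRUE_MEAN, TRUE_SD = 0.0, 1.0

def estimation_error(n_samples, n_trials=200):
    """Average absolute error when estimating the population mean
    from n_samples draws -- a stand-in for how well a training set
    of that size represents the distribution it was sampled from."""
    errors = []
    for _ in range(n_trials):
        sample = [random.gauss(TRUE_MEAN, TRUE_SD) for _ in range(n_samples)]
        errors.append(abs(statistics.mean(sample) - TRUE_MEAN))
    return statistics.mean(errors)

for n in (10, 100, 1000, 10000):
    print(f"n = {n:>6}: mean estimation error ~ {estimation_error(n):.4f}")
```

A thousand times more data buys only about thirty times less error — one reason the appetite for training data is so insatiable, and why real models, fitting thousands of variables rather than one, need so much more.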

Yet the number of well-coded open-source datasets of this size is surprisingly small, and they’re typically fitted to the concerns of the specific research group that built them. Artificially-generated data can suffice for some domains (e.g. character recognition) but not for others (faces). And the act of data coding is non-trivial as well. Which objects or concepts matter enough to tag — and which don’t? Do you label parts of a face separately (ear, eye, left nostril)? How do you handle concepts such as emotion — where even you, the data scientist, aren’t always sure how to code, and another person may give a different answer? What happens when you send your travel-photo dataset for labelling through Amazon Mechanical Turk, paying pennies per image, where the only people poor enough to accept those rates will likely have very different perceptions of ‘luxury’, or ‘adventure’, or ‘romantic’?

Your training dataset is gap-riddled and inherently biased. It’s lossy. Unsupervised learning methods exist too — deep belief nets, generative adversarial networks — which can find patterns or structure in unlabelled data. But the patterns they find are low-dimensional, and also rather challengingly unverifiable, given the primary data is unlabelled. That is, it’s still messy.

The outputs of machine learning algorithms often acknowledge this in some ways, because they are probabilistic. They cannot specify their blind spots, they don’t know what they don’t know — but they do give confidence estimates for their ability to classify each piece of content (with lower confidence in less familiar items), and express these classifications in probability terms (this meme is 70% likely to contain a cat, 15% a doge).
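That probability talk is literal: classifiers typically end by squashing raw scores through a softmax function into a distribution over labels. A minimal sketch (the scores and labels here are invented for illustration, tuned to echo the cat/doge example above):

```python
import math

def softmax(logits):
    """Convert raw model scores into a probability distribution
    that sums to 1 -- the 'confidence' a classifier reports."""
    exps = [math.exp(x - max(logits)) for x in logits]  # shift for stability
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical raw scores from an image classifier for one meme.
labels = ["cat", "doge", "neither"]
logits = [2.0, 0.5, 0.1]

for label, p in zip(labels, softmax(logits)):
    print(f"{label}: {p:.0%}")
```

The output is never a verdict, only a weighting — around 73% cat, 16% doge, 11% neither for these made-up scores. The decisiveness comes later, when someone thresholds it.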

That is, the outputs of this digital analysis might be described as grey-scale.

That’s before we consider the ways in which forgetting is itself a cutting-edge area of machine learning research. Natalie Fratto outlines three approaches: Long Short-Term Memory networks, Elastic Weight Consolidation, and Bottleneck Theory. Each is in some way inspired by neuroscience principles: as a research paper in the journal Neuron argued, what if “the goal of memory is not the transmission of information through time, per se. Rather, the goal of memory is to optimize decision-making. As such, transience is as important as persistence in mnemonic systems.” (Richards & Frankland 2017)

Meanwhile, much data lies gathering dust: 41% of files managed by big businesses had not been touched in the last three years, a 2016 study by data management company Veritas found. 12% of data was categorised as ‘ancient’, untouched within the last seven years.

Is the archive where information goes to live forever, or where data goes to die?

Concluding motes

We need to get better at working with grey and fuzzy data, because grey and fuzzy is data’s inherent state. I know this essay has taken a while to get here — but that is the point.

To recognise the extent to which claims of perfect digital ‘legibility’ — from digital dust motes to big data & machine learning — are in fact fantasy.

To start to enquire about the functions of those fantasies — which I suspect mostly boil down to a fear of death. (Looking for those taking the opposite digital-über-alles perspective we find the transhumanists, who wish to archive and upload themselves into becoming immortal beings of pure light.)

To try and understand the grey areas in the digital better: the human and material imperfections inherently encoded in the machine.

To recognise the violence of the slippage from probabilistic predictive analyses into black & white decisions about freedom or imprisonment, jobs, financial lending and more — and work against them. To understand and change the violence in the ways that complicating, correlating factors are either ignored or actively ‘black boxed’ and concealed by for-profit technology companies.

And to recognise that, as Bruce Sterling said, computation was from the very start “built to rot”. Decay is not only a property of obsolete media formats — your CDs and minidiscs — it’s a property of all media, because information is necessarily material. Microcomputing ‘neural dust’ motes get eaten by the bodily fluids they sit in; quantum computing is still reducible to photons, electrons and very small semiconductor particles. None of these are outside the world; none of these are beyond time.

Using dust as a metaphor, a tool to think with, helps us remember this — to think better and more accurately about the complexities and grey areas in things sold to us as black and white.

Dust itself both destroys and remembers everything, both at the same time.

Everything decays eventually — and this process of decay itself accelerates the decrepitude of other materials. In the library or museum or historic house, dust is the enemy of conservators for the way that it accrues on surfaces, on old parchments and fine fabrics, first hiding and dulling them before going on to create physical damage and chemical alteration. Information is lost beneath a layer of grey, uniform fuzz.

But that same dust is itself an archive of information, carrying with it its entire history and origin story written within its minute material and chemical composition. Dust on a painting — radioactive isotopes, trapped fibres — can reveal it as a forgery. The rise and fall of the Roman Empire is recorded through dust layers trapped deep in the Greenland ice sheet. On a rooftop, grains of cosmic dust from the beginning of time.

Thanks very much for reading — with special thanks to Damien Patrick Williams for feedback, and Tobias Revell for the cover image.

I am a researcher and writer based in London, working on media, environment and technology. Contact me on Twitter (@hautepop) or email:

For more on dust, check out my programme for BBC Radio 4 Four Thought, ‘A Speck of Dust’, and the aforementioned ‘Disturbances’ newsletter: previous episodes / subscribe.

For more about the ambiguities of media technology, I’ve written about ‘Post-Authenticity’ and making friends with bots, both here on Medium.


