Rethinking Blockchains

John Markoff
10 min read · Feb 7, 2024

A Curious History

Introduction

Once long ago I briefly gained a bit of notoriety for writing “My other blog is the New York Times, have you read it?” It was my way of gently teasing Dave Winer. At the time the new digital media crowd seemed to lack a sense of humor about the whole thing — but who would have thought that it would work out this way? I mean seriously, today the New York Times has more than 9 million combined digital and print subscribers? In any case Dave (who once upon a time in another universe had an office across the hall from me when I was at Byte) moved to New York a long time ago, and I’m still here in Silicon Valley, where I grew up.

Everything changes and I now find myself with some things to say that go beyond the bounds of the Times — where I write very occasionally these days — and which aren’t appropriate for book-form either. I’ve mulled the question of where for a couple of months and in the end I decided to stay away from Substack because of the Nazi controversy (I erased my Twitter account for the same reason). I was particularly inspired this morning to see Lloyd Kahn, one of the original editors of the Whole Earth Catalog, reinvent himself as a writer and a traveler after finally moving on from Shelter Publications.

So let’s see where it goes. I have no idea how frequently I will show up here. This first one stems from a promise I made to Stuart Haber and Scott Stornetta to recount my minor involvement in the pre-history of the blockchain.

In 2008 David Farber, a revered computer scientist and one of the pioneers of computer networking, published a pointer to Satoshi Nakamoto’s Bitcoin paper on his widely read Interesting People mailing list. Reaching more than 25,000 largely technical readers, Farber would regularly sprinkle his emails with alerts about tantalizing technical advances and warnings about potential incursions across the boundaries of civil liberties on the electronic frontier.

I put the Bitcoin paper on my list of interesting things to write about — but somehow never got around to it. Sixteen years earlier I had, however, written a New York Times article describing what was arguably the first blockchain, a software technology today conceived of as a distributed digital ledger that allows for trust-less interactions between parties without needing a central authority. It is a technology that has become the foundation for digital money. It has also captured a cult-like following among true believers who claim it is a path to technology-based decentralization and digital democracy. More than three decades after my original Times article I am skeptical that the blockchain will prove to offer any more economic or technical value than a conventional database program. I also think — if it worked — it would undermine one of humanity’s most prized qualities — community.

In 2001 Steven Levy had written Crypto, his account of the Cypherpunk movement of digital privacy activists, and most technology journalists already knew people like David Chaum, an early prophet of digital money, and Whitfield Diffie and Phil Zimmerman, respectively the co-inventor of public key cryptography and the author of a program intended to spread that technology to the masses.

Maybe I never got around to writing because by 2008 the allure of Cyberpunk science fiction had already started to wear thin on me. There had been a time when Vernor Vinge’s True Names, a science fiction novella that imagined a world with infinite bandwidth and processing power, clearly felt like art foretelling reality. Increasingly that romance had dimmed.

By 2008 I had been writing about computer security and privacy for almost 30 years and I was growing weary of the subject. The first RSA Security Conference had been held in San Francisco in 1991 and in the ensuing two decades I had come to realize that while the computer security industry grew by leaps and bounds, actual security for PC users was getting steadily worse — and this was largely before the advent of the smartphone. Each year the exhibition grew larger while simultaneously, as computer networks extended computing to more of the world, crime and harassment grew. Cryptographic technologies had been heralded as the path to privacy and security in the information age. But it had become apparent that the technologies were not a match for human foibles. People remained the weakest link.

By 2011 I would give up. I decided if I had to write one more article about some testosterone-poisoned teenager with an attitude, I was going to have an aneurysm. (I moved to the Science section of the New York Times and gave the cyber-security beat to Nicole Perlroth.) In 2008 I had no clear sense that Bitcoin would ultimately engender the wave of digital tulip-mania that gripped the world a decade later.

Haber recalls that I called to ask him what he thought of the paper and he replied that it seemed technically valid. He remembers that I said something like, “well, Bitcoin is only being used to pay for drugs right now, it seems, maybe I’ll wait awhile before I write an article about it.”

Satoshi’s article revealed an intriguing lineage dating back both to the early work of Ralph Merkle, who, in an effort to design better digital signatures, had developed what became known as Merkle trees in 1988, and to the digital time-stamping work done by Bellcore computer scientists Stuart Haber and Scott Stornetta in 1991. The Haber and Stornetta paper had not used the term “blockchain,” referring instead to a “chain of time-stamps.” Indeed, neither did the Nakamoto paper, referring only to a “chain of blocks.” The Bitcoin paper however held out the possibility of “trust-less” interactions, both financial and otherwise.
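For readers who want the flavor of the idea rather than the history: a Merkle tree folds the hashes of many documents into a single root hash, so that one short published value can vouch for an arbitrary number of records. The sketch below is my own illustration in Python — SHA-256 and the pairing scheme stand in for whatever a real system would specify, and are not the exact construction from either paper.

```python
import hashlib

def h(data: bytes) -> bytes:
    """SHA-256, standing in for whatever hash a real system would use."""
    return hashlib.sha256(data).digest()

def merkle_root(documents: list[bytes]) -> bytes:
    """Fold a list of documents pairwise into a single root hash."""
    level = [h(doc) for doc in documents]
    while len(level) > 1:
        if len(level) % 2:  # duplicate the last node on odd-sized levels
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

root = merkle_root([b"doc-a", b"doc-b", b"doc-c"])
```

Changing any single document changes the root, which is what makes one short number a fingerprint for everything beneath it.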

It had been just three years since Charlie Stross had written the gem of a science fiction novel, Accelerando, and it seemed that everywhere I turned life was imitating his art. He had nailed the weird collision of biotechnology, computing, distributed crypto and even foreshadowed the Valley’s effective altruism ideology/religion.

I had begun writing about digital privacy and surveillance while working with reporters at the Center for Investigative Reporting and at the Pacific News Service, an alternative news service based in San Francisco. In 1980 or 1981, while doing that reporting, I met Whitfield Diffie, who had invented public key cryptography with Martin Hellman in 1976. He was already a digital privacy activist and as I recall we met at the Oakland offices of the North American Congress on Latin America to talk about a new generation of Orwellian surveillance technologies emerging from Silicon Valley. Like many things at the time the fiction was still far ahead of the reality of digital surveillance — and also at the time it seemed like digital encryption would stack the deck in favor of the little guy. How naive we were!

It would be more than a decade later that I became intrigued by the work of the two Bellcore scientists. In 1991 Haber and Stornetta published a paper titled “How to Time-Stamp a Digital Document.” The hook for me was their assertion that in a digital world the paper scientific notebook would need to be reinvented. How, they wondered, would you prove that a discovery or an invention had taken place at a particular time? In some ways it was a story similar to what had led Diffie to think about the technology that became public key encryption. He was working for John McCarthy, the computer scientist who had coined the phrase artificial intelligence, in the early 1970s. McCarthy had traveled to France and given a lecture on the future of home computing — which at the time was in McCarthy’s mind a terminal-based vision. McCarthy had been one of the inventors of time-shared computing and he saw a future in which every home would have a terminal. One of the obvious applications was electronic commerce, but that still left open the question of what would serve as the digital equivalent of the hand-written signature on a paper check.

At the time I was writing frequently about the societal impact of digital technologies. I had grown up in Palo Alto before it became part of Silicon Valley and even briefly played in the Hewlett household as a kid. I left the area for college and graduate school in 1967 and then returned to the Mid-Peninsula a decade later to discover the Valley was in full bloom. Soon after I came back I stumbled upon The Micro Millennium by British journalist Christopher Evans. He outlined a clear argument about how the microprocessor was about to transform the world. Ok, I thought, that might be a good reporting beat.

As it turned out, that was an understatement. The microprocessor had made the personal computer possible and after that it proceeded to recast virtually every human tool. My favorite example is a story told to me by computer scientist Danny Hillis, an artificial intelligence researcher who had been one of the inventors of parallel computing. At one point in the 1970s or 1980s Hillis gave a speech at a conference held at the New York City Hilton. In his speech he made a claim about the proliferation of microprocessors. During the question and answer period, someone stood up and challenged his assertion, stating: “That’s crazy, if that were true there would have to be a microprocessor chip in every door in this hotel!” A decade later Hillis returned and discovered indeed, that was the case. There was a chip in every hotel bedroom door.

My article on Haber & Stornetta’s digital time-stamping idea ran in the Business Section of the Times in January of 1992. “Experimenting with an Unbreakable Electronic Cipher” described their system, beginning with the prediction that in the next phase of the Information Age, “when business and scientific documents, photographic images and personal correspondence are all stored in computers as strings of 1’s and 0’s, digital forgery may become a significant threat.”

Underestimation aside, the article went on to describe a system for proving the date on which an electronic document had been created based on a unique number printed unobtrusively each week in the print edition of the New York Times. The researchers told me that the inspiration for their system had been a controversial fraud investigation involving Thereza Imanishi-Kari, a Massachusetts Institute of Technology biologist who was accused of altering research findings.

Before my article could appear, however, I was briefly involved in one of those awkward collisions between the looming emergence of cyberspace and the largely oblivious analog world.

The Bellcore system was based on a chain of interrelated numbers that could not be altered without revealing the attempted forgery. In building their system Haber and Stornetta were faced with the challenge of where to place a verifiable number in a public place that would be impossible to fiddle with. With a colleague, Dave Bayer, they struck upon the idea of publishing a single string of numbers and letters each Sunday in an advertisement in the Public and Commercial Notices section of the Times. The published sequence — it is still being published more than three decades later — was a hexadecimal number that encompasses hashes issued that week. Since these numbers were made public at a known time and were widely available, it was assumed that they could not be tampered with.
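The tamper-evidence comes from the linking: each week’s published value folds in the previous week’s, so altering any past entry changes every value printed after it. Here is a toy illustration of that idea in Python — my own sketch, with short strings standing in for real document hashes, not the actual scheme used in the Times ads.

```python
import hashlib

def weekly_value(prev_hex: str, week_hashes: list[str]) -> str:
    """Combine last week's published value with this week's document hashes."""
    data = prev_hex + "".join(sorted(week_hashes))
    return hashlib.sha256(data.encode()).hexdigest()

chain = ["0" * 64]  # genesis value, before the first ad ran
for week in [["a1", "b2"], ["c3"], ["d4", "e5"]]:
    chain.append(weekly_value(chain[-1], week))

# Tampering with week one produces a week-two value that disagrees with
# the one already printed -- detectable by anyone with old newspapers.
forged_week_one = weekly_value(chain[0], ["a1", "XX"])
forged_week_two = weekly_value(forged_week_one, ["c3"])
assert forged_week_two != chain[2]
```

The newspaper matters only as a widely witnessed, hard-to-recall record: once thousands of copies of a Sunday edition exist, nobody can quietly rewrite the number printed in them.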

During October of 1991, the process worked fine for about two weeks: a long string of cryptic numbers and letters appeared in the “Notices & Lost and Found” section each Sunday.

Then the Times balked.

“How do we know you aren’t bookies, or even worse, spies?” a skeptical classified ad salesperson asked them. Stymied, they turned to me. At this point my article hadn’t yet appeared in the paper, although I had driven to Morristown, NJ to interview the two researchers.

I didn’t know anyone on the business side of the paper, but I did know the publisher. So I called Arthur Sulzberger, Jr. and attempted to explain that this was a new digital service that wanted to use the paper of record as a way of verifying the provenance of a document composed of ethereal 1s and 0s. As I recall, Sulzberger said something like, “Wait a minute, let me get this straight. Are they paying for these ads?” When I responded that yes they were, he laughed and said he would straighten the matter out.

Which he did, and since then more than 1600 ads have appeared in the Times, making the Bellcore blockchain, which would become a service of a commercial spinoff the two researchers founded, the oldest continuous blockchain in existence today.

I recently spoke with Haber and Stornetta and they are cautious about their invention, lacking the unbridled enthusiasm of the true believers. “You see a technology and you think it’s going to have a big impact,” Stornetta told me, “but you never quite get the details right about where and how to point ahead.” They have not given up hope, however, that their invention might ultimately be useful for applications beyond Ponzi schemes, money laundering and selling stuff on the dark web. In November, in an effort to push the fragmented crypto world toward a single blockchain standard they began creating a series of non-fungible tokens (NFTs).

I remain skeptical about the digital nirvana of trust-less interaction. There is no shortage of authoritative skeptics. Read David Rosenthal’s blog for a good account of both technical and economic critiques, or Molly White’s tracking website “Web3 Is Going Just Great” (web3isgoinggreat.com), which tallies a running list of frauds, complete with a convenient ticker summing the amount of money lost — which now stands at more than $70 billion.

I have a different concern. What if it actually works? The internet has increasingly atomized the world. The rise of digital libertarianism has celebrated the notion of a world of socially, economically and politically isolated individuals.

This is the ultimate dystopia. Or maybe it’s the world Star Trek forecast with the arrival of the Borg. With every new weird turn of the Internet’s collision with human culture it occurs to me that resistance may in fact be futile. But maybe it’s actually not too late. Maybe there is a way for us to control our tools rather than the reverse.

I hope so.

Television has already given us The Lonely Crowd. Further eroding the basis of trust — among the best of all human qualities — is something we do at our peril.


John Markoff

Former New York Times technology/science reporter. Current Stewart Brand biographer.