Web3 and why there’s no turning back

We’re not at a fork in the road; we’re at a branch in the tree

James Plunkett
Feb 14, 2022

Path dependency is one of those terms I love and hate at the same time. I love the concept but I hate the phrase itself, which I think downplays the idea’s full significance.

In this post I thought it might be interesting to unpack why I feel this way and why I think it matters, particularly now, with the rise of Web3.

In short, I think it might be useful to replace the idea of a path with an image of a warped tree. And I think this shift of framing could help to clarify the implications of path dependency for the way we govern a digital society and Web3.

A photograph of a forking path through a forest.

The problem with paths

Path dependency is an important idea in economics which, roughly speaking, means that once we go down a certain path it can be hard or impossible to change course, even if we later realise we went the wrong way.

Path dependency applies particularly well to big technology choices. But it can also apply to things like institutional setups and norms of behaviour.

Path dependency is explained by a few factors, which can mix together to form a kind of glue that holds human behaviour in place:

  • We sometimes invest so much money/effort in a thing that we can’t bring ourselves to move away from it later, like the way we’ve built our physical environment around cars.
  • We sometimes get stuck because of network effects — situations where it’s hard to stop acting a certain way unless everyone else stops acting that way too. This applies well to communication media like email.
  • We can get stuck through force of habit. Like the way people used to commute five days a week before the pandemic, even though we hated it and it was entirely unnecessary. And arguably also the way we’re now working from home a bit too much.

But while the concept of path dependency is useful, the term itself, and the mental image it prompts, means we tend to under-appreciate the significance of the idea.

For one thing the phrase ‘path dependency’ conjures up an image of, well… a path. And what’s the key thing about a path? You can walk up it and back down it again. So the underlying metaphor just doesn’t really work.

But I also think the whole ‘fork in the road’ metaphor — the idea that we could have gone one way, but instead went another — understates how pervasive path dependency really is and how profound its implications are.

To unpack what I mean by this, let’s go back to my post from a couple of weeks ago, in which I summarised W. Brian Arthur’s book The Nature of Technology.

Progress through combination

In The Nature of Technology, Arthur argues that technological progress happens through combination, i.e. we build or discover new technologies mainly by combining existing technologies together.

One consequence of this combinatorial view of technology is that technological progress is what we might call accretive. And that means the chance events and dependencies involved in technological progress are accretive too.

As Arthur explains, each new technology we discover or build doesn’t just do something useful in itself. It also acts as a potential component in other future technologies, so it defines what’s possible next. As Arthur writes, “technology builds itself out of itself”.

To visualise this point, Arthur describes the body of technology as a coral reef, in that we ‘build out’ technology from structures that are already there. As we build the reef, it grows in certain shapes and directions, and each new layer builds on top of ones that we built up before.

This image of ever-accumulating intricacy is useful partly because it shows (better than a path) precisely why there’s often no way back from the technology choices we make. Each new technology is really bedded in between prior and subsequent technologies, which means that even if we wanted to change course it’s too hard to unpick.

But there’s also a bigger point here, which helps us see why path dependency is such a big deal. To understand this, we need to look at the most famous example of a technological path dependency, the qwerty keyboard.

Qwertyness is everywhere

The story of the qwerty keyboard is well known, so I won’t labour it here. TL;DR we invented the qwerty layout as a way to slow down typing to stop typewriters from jamming. By the time we moved on from typewriters, the qwerty layout was so widespread that we couldn’t bring ourselves to retrain so many people or reinstall so much kit, so we’ve been stuck with a duff layout ever since.

The qwerty keyboard is a lovely example because it shows how daft path dependencies can be. We know that better keyboard layouts are available. And we know that it would save us loads of time and money if we switched. And yet although our society is capable of sending a telescope into space to photograph the origins of the universe, we are literally incapable of switching to a new keyboard layout.

But here’s the problem with the qwerty keyboard as an explanation of path dependencies: it’s too tempting to think it’s special. When we hear the qwerty story we roll our eyes at how silly we’ve been to get stuck with ‘the wrong’ keyboard. And then we go back to our TV/car/city/daily routine/family living arrangement, comfortable in the knowledge that at least these things aren’t so silly.

That’s really the opposite of the insight we should draw from the whole idea of path dependency. Because the real takeaway from the qwerty keyboard story is that everything is qwerty. Or that qwertyness is everywhere we look.

What I mean is that at every step in our technological journey, chance plays a role, and these accidents accumulate significance, closing down some possible futures and opening up others. It’s almost as if every technology is made up of tiny little qwerty keyboard cells, so that qwertyness runs through the whole thing.

The same point applies not just to technologies as classically defined but also to institutions. Both formal ones like the organisations of the state and informal ones, like the practice of living in a nuclear family or the culture of economics as a discipline. These ‘technologies’ — or “systems with a purpose”, to use Arthur’s phrase — all build from what came before in an ever-accumulating pyramid of dependency.

There’s a moment in Arthur’s book where this point really hits home. It comes when he runs a computer simulation of his combinatorial theory of technology, setting up all the rules of combination but for a simplified world in which technology is limited to simple logic circuits.

Arthur sets up the simulation, hits go, and watches repeated runs. In some runs, the model society discovers technological building blocks that allow it to quickly invent an 8-bit ‘adding’ logic circuit, which, given that it started with nothing, is quite an impressive piece of kit.

In other runs of the model, however, it takes the society hundreds of generations longer to reach the same point. And in some runs, the society never ‘invents’ an 8-bit adding circuit at all.
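To get a feel for how this plays out, here’s a toy sketch of the same idea in Python. To be clear, this is my own heavily simplified illustration, not Arthur’s actual model: the ‘technologies’ are just two-input logic functions, the starting kit is a NAND gate plus the raw input wires, and the target is a humble XOR rather than an 8-bit adder.

```python
import random

# A heavily simplified toy in the spirit of Arthur's experiment -- not his
# actual model. A 'technology' here is a 2-input boolean function stored as a
# truth table; new technologies can only be built by wiring together ones a
# run has already discovered, so each run's early accidents shape what it can
# still reach later.

INPUTS = [(0, 0), (0, 1), (1, 0), (1, 1)]

def table(fn):
    """Encode a 2-input boolean function as a tuple of its four outputs."""
    return tuple(fn(a, b) for a, b in INPUTS)

NAND = table(lambda a, b: int(not (a and b)))
WIRE_A = table(lambda a, b: a)      # raw input wires, so circuits can route inputs
WIRE_B = table(lambda a, b: b)
TARGET = table(lambda a, b: a ^ b)  # stand-in goal: XOR (far humbler than an 8-bit adder)

def combine(outer, t1, t2):
    """Build a new circuit: feed the inputs through t1 and t2, then into outer."""
    def apply(t, a, b):
        return t[INPUTS.index((a, b))]
    return tuple(apply(outer, apply(t1, a, b), apply(t2, a, b)) for a, b in INPUTS)

def run(max_generations=250, seed=None):
    """One model society: random combinations of whatever already exists."""
    rng = random.Random(seed)
    repertoire = [NAND, WIRE_A, WIRE_B]
    for gen in range(1, max_generations + 1):
        candidate = combine(*(rng.choice(repertoire) for _ in range(3)))
        if candidate not in repertoire:
            repertoire.append(candidate)
        if candidate == TARGET:
            return gen   # generations it took to 'invent' the target
    return None          # never got there within the budget

print([run(seed=s) for s in range(20)])
# Same rules every run, but some seeds find the target quickly, others take
# far longer, and some may never get there within the budget.
```

Run it a few times and the point makes itself: identical rules, wildly different histories.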

Where is the 8-bit circuit in those ‘failed’ worlds? In a sense it’s still out there — i.e. it’s still theoretically findable — but only in the strongest sense of the word theoretical. For the society in question it’s literally not findable. It’s not just down another path, and it’s not just a matter of time before they find it. It’s forever unreachable because of the choices they’ve already made.[1]

And this, for me, is the moment the penny drops: we live in one of these worlds.

There are undoubtedly technologies ‘out there’ that we can’t now discover, and that are ‘better’ than the ones we have today. And again I’m using the word ‘technology’ here in its broadest sense, to include human systems like formal institutions and habitual ways of living.

Sometimes we can see what we’re missing, as with the qwerty keyboard, because other keyboard layouts are visible but frustratingly out of reach.

But this is just the visible tip of a vast iceberg of possible worlds. Most of the time, we don’t know what we’re missing, and often we can’t know, since the knowing would itself require technologies we don’t have.

The warped tree

I remember one afternoon, about 15 years ago, I went to do some work in the university library.

I hunted out a forgotten corner of the library, sat down and got out my books, and then I looked up to see I was sitting in a section called ‘imaginary history’.

I spent the next three hours pulling down book after book from the shelves, reading thought experiments about how history could have been different. What if Hitler had won WWII? What if Kennedy hadn’t been assassinated? What if Christianity hadn’t taken hold in the West?

It was a fun afternoon but it wasn’t exactly productive. After all, things hadn’t turned out these ways, so it all felt a bit moot. The books might as well have been fantasies about elves and wizards.

It’s tempting to think about path dependency this way — i.e. to think it’s a fun idea but one without any real-world bite. But this would miss the point. Because although I’ve been using the word ‘chance’ a lot throughout this post to describe technological discovery, that’s of course not really how technology works.

We don’t discover new technologies with the roll of a die. We direct our efforts based on incentives that are set by our institutional environment, and we have a lot of say over that institutional environment — it’s basically the subject of public policy. So the job of policy is in large part to influence the paths we take, even if we don’t tend to think of it that way.

At the moment, for example, we know that technological progress is directed mostly by price signals set through free exchange in the market, recalibrated by the way the state regulates markets, and offset slightly by the way the state directly funds or subsidises certain types of research and development. We also know that there are various forms of bias involved in the technologies we do and don’t discover, because the people who work at the forefront of technology are unrepresentative of society, in that they’re disproportionately straight, white, able-bodied men.

So at the moment, as the body of technology moves forward, and as new technologies are discovered, it’s not chance that builds up over time, it’s the combined effect of these incentives and biases.

All of which takes us to an answer to the problem we started with. What might be a more helpful way to think about path dependency than the metaphor of a path or a fork in the road?

I think it might be helpful to visualise the body of technology as a warped tree, branching ever more finely as it grows toward an off-centre light.

The light that draws the tree on, and that, in a sense, powers (or incentivises) its growth, isn’t straight up above the tree. i.e. it’s not an objective north star of human needs that makes the tree of technology grow unproblematically toward a better future for all.

The light is off to the side, which reflects the extent to which it doesn’t quite line up with human needs, but rather flows from something called the profit motive, as interpreted by a fairly small and unrepresentative elite.[2]

And of course new branches of the tree can only grow where there’s already a branch to grow from, so the whole process is recursive. And as the tree warps toward the light, whole new futures open up as freshly possible, or they close off as impossible, sometimes forever.

A fractal image of a warped tree, leaning off to the side.
A distorted tree, with apologies for inept design abilities.
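If you want to play with the image yourself, here’s a rough generative sketch in Python using matplotlib — again, purely illustrative and entirely my own toy, not a model of anything. Branches can only sprout from existing branches, and each new branch bends a little toward an off-centre light.

```python
import math
import random
import matplotlib.pyplot as plt

# A toy rendering of the warped tree. Branches only grow from the tips of
# existing branches (the recursive bit), and each new branch's direction is a
# compromise between its parent's direction and an off-centre 'light'.

LIGHT_ANGLE = math.radians(55)  # the light is off to the side, not straight up (90 degrees)
BIAS = 0.35                     # how strongly each new branch bends toward the light

def grow(ax, x, y, angle, length, depth, rng):
    if depth == 0:
        return
    # Bend toward the light, plus a little chance.
    angle = (1 - BIAS) * angle + BIAS * LIGHT_ANGLE + rng.uniform(-0.25, 0.25)
    x2, y2 = x + length * math.cos(angle), y + length * math.sin(angle)
    ax.plot([x, x2], [y, y2], color="saddlebrown", linewidth=0.4 * depth)
    for spread in (-0.5, 0.5):  # every branch forks into two finer branches
        grow(ax, x2, y2, angle + spread, length * 0.7, depth - 1, rng)

fig, ax = plt.subplots(figsize=(5, 5))
grow(ax, x=0.0, y=0.0, angle=math.radians(90), length=1.0, depth=9, rng=random.Random(1))
ax.set_aspect("equal")
ax.axis("off")
plt.show()
```

Nudge BIAS or LIGHT_ANGLE and you get a different tree, which is more or less the policy point of the rest of this post.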

A thick branch

Let me finish by touching on the clickbait promise of my title. Let’s talk about Web3.

I find Web3 an explosively irritating phrase but, as so often with technology, it also hides some important substance beneath the layers of nonsense.

The substance at the heart of Web3 is that some quite important new infrastructural technologies are now coming to maturity, and these are the kinds of technologies that constitute thick branches in the tree of technology. i.e. they create lots of potential for new branches, so they do a lot to determine our future direction.

Some of these infrastructural technologies, like blockchains, are technologies as narrowly defined. Others are technologies in a softer or more human/institutional sense, like the habits and systems of thinking that are starting to form around the metaverse.

In both cases there’s lots of hype for sure. But it’s also clear that these technologies are onto something big. i.e. they have a really broad field of potential application, even if we don’t yet know which of these applications will be enduringly useful/profitable and which ones will turn out to be just people overpaying for pictures of apes.[3]

The point I’m making here is that, at a moment like this, path dependency really matters. And it matters especially if we think technology grows like the warped tree. i.e. if we conceive of technological progress as a process in which we irreversibly open up and close off certain possible futures forever, guided by the incentives provided by our current institutional settlement. And when we put it like that, we see that this isn’t just an academic thought experiment. It’s all quite urgent.

In essence, what we have here is a timely reminder that if we mess up the transition to Web3, we’ll be stuck with the outcome forever and we’ll probably never even know what we lost.

Policy implications

What lessons can we draw from all of this for public policy? For starters, it feels clear that our standard approach to regulation doesn’t work very well for the world of the warped tree.

The standard approach to economic regulation (and I know this is unfair and reductive, but this post is running way too long) is to see if something becomes a problem and then do something about it if it does. In the world of the warped tree, this is a bit like testing if a planet is safe to live on by flying there on a one-way rocket.

So what do we do instead? This is complicated territory, but let me just dangle three ideas I find interesting.

One, if we live in the world of the warped tree, we should probably work harder to keep technology open, permissive, and diverse, i.e. to keep a range of possible futures open. We could develop a suite of policies to do this. To give specific examples, we could take a much stronger approach to interoperability, like the one proposed here; or be bolder on open data; or take more permissive approaches to intellectual property as discussed here.

Two, we could consider putting diversity at the heart of economic and regulatory policy. In the world of the warped tree, diversity of both ideas and people is really important, particularly at the frontier of technological change. Since this is where we’re opening up and closing off possible futures, in irreversible ways, it’s vital to limit the extent of problematic bias in the way this plays out.

From a policy perspective, this could mean rethinking the way research is funded or subsidised; it might be useful, for example, for the state to fund ideas that the market thinks don’t have much potential, to keep possibilities alive. We could also do more to boost the diversity of people working in technology, science, and engineering, not just in the pursuit of equity but also because it’s good economics.

Three, at a more general level, we could envisage a whole strand of public policy that is specifically about how we exit from paths we regret. What are the means by which societies can identify and break out of unhappy equilibria? How do we spot qwertyness? And, because qwertyness is often unknown — i.e. once the tree looks a certain way, it’s hard to imagine how things could be different — we might want to think harder about how we identify alternate possibilities. This could bring together an interesting mix of disciplines, like behavioural science and social imagination.

So I guess the basic point is really just the one I keep banging on about. If we’re going to make a success of the digital age — or, more to the point, if we’re going to avoid really screwing it up — we need fresh ways of thinking.

If you want to read along as I unpack the implications of this, you can follow me for free on Medium here. Or to support this project for £3 a month (and get a free book), you can subscribe on Substack here.

This is post #5 in a year-long series exploring how we govern the future. Here are posts 1, 2, 3 and 4.

For the big story behind all of this, from Victorian sewers to digital dragons, you can buy my book End State (now freshly algo-optimised at just £11.99.)

Footnotes

  1. It might seem like there’s a counter-argument here based on fundamental science. Surely the society in question could still use theoretical science to discover the potential for these distant technologies, and use this as a kind of torch to guide its way towards them. As Arthur points out, however, science is itself dependent on technology for the machines that make it possible to run advanced experiments. So in these ‘failed’ worlds we can assume that science itself has been limited in its reach by a lack of technology.
  2. I wrote about this misalignment between profit and human needs in a long read here.
  3. If you’re a total Web3 sceptic and think the whole thing is hot air, I still think you’d have to accept that we’re living in an exponential age. i.e. that the tree is growing really fast right now, because the light that’s shining on the tree — the perception that there’s loads of profit to be made — is shining really bright. So a lot of the same implications follow regardless of whether or not we think Web3 marks a particularly decisive moment of technological history.
