The Explosion of New Architectures Is Fundamentally Changing Computing

Killing Von Neumann

Thomas Smith
Feb 10, 2020 · 11 min read
Photo by Michael Dziedzic on Unsplash

Computers have changed very little since 1945.

I know, at first, that sounds insane. Clearly, computers as physical objects have changed dramatically. Back in the 1940s, a “supercomputer” filled an entire room and was good for performing trigonometric functions and not much else.

Today, I wear a vastly more powerful computer (my Fitbit smartwatch) into the shower and while I sleep. And I carry another one in my pocket a distressing amount of the time — my Galaxy S10 is several orders of magnitude more powerful than the best computers of the 1940s.

But at their core, the OG mainframe computers that filled a room, my uber-powerful coding machine at home, the tiny computer on my wrist, and the giant cloud servers that tie them all together are almost exactly the same.

Basically all computers of the last 75 years have run on almost the exact same architecture, described in a landmark 1945 paper by a man named John Von Neumann. Sure, they’ve gotten vastly more powerful and capable. And smart coders have done all kinds of new things with their software.

But at their core, today’s computers are just scaled-up versions of the original computer that Von Neumann described three-quarters of a century ago. Some simple computers (like the one in your coffee maker or calculator) are a slight variation on Von Neumann’s design, but they’re still substantively the same.

In the last decade, though, all that has begun to change. We’ve seen an explosion of new computing architectures, which will continue and accelerate. These will finally move the world beyond Von Neumann, and fundamentally change computing as we know it.

But before we get there, let’s take a look back at the great man himself.

John Von Neumann was a Hungarian-American mathematician of the early 20th century, born in Budapest in 1903. He started his life as a child prodigy in mathematics, and after completing his education in Europe, was lured to America from Hungary by Princeton University (yay, immigration!).

For the rest of his career, he pioneered mathematical, scientific and political theories almost too numerous to count — and in many cases too technical to understand. Among these are a slew of specific theories around various number systems, as well as the entire field of Game Theory, much of Set Theory, and the political doctrine of Mutually Assured Destruction.

Oh, and again, he founded modern computing as we know it. This particular feat was achieved in a shockingly dry-sounding paper titled First Draft of a Report on the EDVAC, published in 1945.

In it, Von Neumann and his colleagues described a computing architecture with several basic components — a processing unit, volatile memory, and a set of stored instructions that the computer would follow serially. Everything would be in binary. It drew on the work of Alan Turing, whose Turing Machine provided some of the theoretical backing behind Von Neumann’s practical architecture.

Crucially, the data and instructions in a Von Neumann machine would be stored in the same way, in the same memory — so programs could be read, generated, and modified just like any other data.
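
The stored-program idea is easiest to see in miniature. Below is a deliberately tiny sketch in Python of a machine whose instructions and data live in one shared memory and whose processing unit steps through them one at a time; the instruction names and memory layout are invented for this illustration, not taken from any real machine.

```python
# A toy illustration of the stored-program idea: instructions and data
# share one memory, and a processing unit executes instructions serially.
# (A deliberately simplified sketch, not a model of any real CPU.)

memory = [
    ("LOAD", 7),       # cell 0: put the number 7 in the accumulator
    ("ADD", 5),        # cell 1: add 5 to it
    ("STORE", 9),      # cell 2: write the result into memory cell 9
    ("HALT", None),    # cell 3: stop
    0, 0, 0, 0, 0, 0,  # cells 4-9: plain data, living in the same memory
]

accumulator = 0
pc = 0  # program counter: which memory cell to execute next

while True:
    op, arg = memory[pc]           # fetch the next instruction from memory
    if op == "LOAD":
        accumulator = arg
    elif op == "ADD":
        accumulator += arg
    elif op == "STORE":
        memory[arg] = accumulator  # results land in the same memory as the code
    elif op == "HALT":
        break
    pc += 1                        # move serially to the next instruction

print(memory[9])  # -> 12
```

Because the program is just more data sitting in memory, programs can in principle read, generate, or rewrite other programs, which is exactly what compilers and loaders do.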

If this sounds a lot like RAM, CPUs, hard drives and high-level programming languages, that’s because it’s the blueprint they all descend from. Basically every computer you’ve ever used has largely relied on the Von Neumann architecture, from your work PC to your iPhone. The one competing system — the Harvard architecture — differs in small ways (it stores instructions and data separately), but is fundamentally similar to Von Neumann’s design.

Why has Von Neumann’s architecture proven so successful? For one thing, it’s relatively easy to implement in hardware. Binary numbers, stateful memory, and more are all readily implemented in silicon transistors, magnetic drives, and the other paraphernalia of the modern computing world.

But another big reason is that Von Neumann machines are deterministic and introspectable. You can fully describe them mathematically, and understand every step in their computing process. And if you put in one set of inputs, you can rely on them to always generate the same output.

So what are the drawbacks of the Von Neumann architecture?

There are some technical drawbacks that are very specific and mathematical. The best known is the “Von Neumann bottleneck”: because instructions and data travel over the same pathway between the processor and memory, the processor can spend much of its time waiting when it works on large amounts of data pulled from memory. But these limitations can be worked around with good hardware design, to a degree.

The biggest challenge with Von Neumann machines, though, is that they’re very difficult to code. Instructions for the computer have to be translated into a language (many languages, actually) with math and formal logic at its core.

This has led to the growth of an entire discipline — computer programming — that specializes in taking real-world problems and “explaining” them to Von Neumann machines so they can work their magic. When you’re writing a software program, you’re basically taking some algorithm and reducing it to the formal instructions that a Von Neumann machine can follow.
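
To make that concrete, here is one small, hypothetical example of the kind of translation programmers do all day: the everyday request “tell me the biggest number in this list” reduced to formal steps a Von Neumann machine can follow.

```python
# The everyday request "tell me the biggest number in this list",
# reduced to explicit, formal instructions. (An illustrative example;
# any similar task would make the same point.)

def largest(numbers):
    best = numbers[0]          # assume the first value is the biggest so far
    for value in numbers[1:]:  # walk through the rest, one item at a time
        if value > best:       # a formal, logical comparison
            best = value       # update the running answer
    return best

print(largest([3, 41, 7, 12]))  # -> 41
```

Every step has to be spelled out explicitly; the machine infers nothing on its own.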

The challenge is that not all problems are so easy to reduce. And likewise, not all people have the right combination of a computational mindset and a deep love of logical, structured processes to do the translating. This leaves whole domains of problems unsolved and shuts whole categories of thinkers out of the field of computer science.

After 75 years of nearly unchallenged authority, the world is finally starting to move beyond the Von Neumann architecture.

Around 2013, a combination of factors converged in what’s become known as the Deep Learning Revolution. Seemingly all at once, Deep Learning systems exploded onto the scene, rapidly transforming fields ranging from audio transcription to optical character recognition to natural language processing. Deep Learning even created whole new industry segments--like self-driving cars and smart speakers--essentially from nothing.

Deep Learning relies primarily on neural networks. These complex systems are loosely modeled on the workings of the human brain. They take in rich inputs, and through hundreds of millions of interconnections, return output that can be remarkably useful.

If there’s a pattern anywhere in your data, a well-designed neural network will find it.

And here’s the kicker--you don’t even need to tell it what you’re looking for. Using a Deep Learning system is less like programming a computer, and more like teaching a child.

You show the system a variety of inputs and point out the relevant parts. Just as you might show a child a variety of trucks and tell them what type of truck they’re seeing, you give a Deep Learning system a set of examples (visual or otherwise) and explain what they represent. You also give counterexamples (“This photo of an emu is not a truck”), and feedback on the system’s progress.

Over time, the system learns the pattern and can identify trucks (or cancer cells, or molecules) all on its own. You don’t need to code in a bunch of rules for what makes a truck a truck--like your four-year-old, it learns the relevant pattern just from reviewing lots of input and receiving continuous feedback about its choices.
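
As a rough illustration of this learn-from-feedback loop, here is a sketch of a single artificial neuron (the simplest connectionist unit; real Deep Learning systems stack millions of them) learning a toy “truck vs. not-truck” rule. The features and numbers are invented purely for the example.

```python
import numpy as np

# Each "image" is reduced to two made-up numeric features; the label is
# 1.0 for "truck" and 0.0 for "not a truck". (Purely illustrative data.)
features = np.array([[0.9, 0.8], [0.8, 0.9], [0.1, 0.2], [0.2, 0.1]])
labels   = np.array([1.0, 1.0, 0.0, 0.0])

weights, bias = np.zeros(2), 0.0   # the "connections" start out knowing nothing

for _ in range(1000):              # show the same examples over and over
    guesses = 1 / (1 + np.exp(-(features @ weights + bias)))  # current guesses
    error = guesses - labels       # the feedback: how wrong was each guess?
    weights -= 0.1 * (features.T @ error)   # nudge the connections accordingly
    bias    -= 0.1 * error.sum()

# After training, the system classifies a new example on its own:
new_example = np.array([0.85, 0.95])
print(1 / (1 + np.exp(-(new_example @ weights + bias))))  # close to 1: "truck"
```

No rule for what makes a truck a truck is ever written down; the connection weights simply drift toward whatever pattern fits the examples and the feedback.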

What deep learning systems do is clearly computation. They’re taking in data, and returning useful output--often much more useful than traditional rules-based, coded systems for whole domains of problems.

But their activities don’t look much like computation, with its usual mix of formal logic, code, and rules-based evaluation of conditions.

They’re also in many cases totally opaque--unlike with traditional computers, the designers themselves may have no idea how their deep learning system actually solves a problem. In fact, it might be mathematically impossible to ever know.

What are deep learning systems, then? They’re an entirely new architecture.

Like a Von Neumann machine, they solve problems. But unlike a Von Neumann machine, they don’t do this with memory, data storage, and formal, mathematical instructions. And this can make them strange beasts indeed.

In many ways, the contrast between Von Neumann machines and Deep Learning systems mirrors a debate that has been at the core of Cognitive Science for decades: the symbolists versus the connectionists.

At their core, symbolism and connectionism are two competing models for how thinking works. The symbolist model looks at thought as a series of formal, computational steps performed on logical symbols. It’s all very clean and orderly, and very mathematical. In fact, it looks a lot like the workings of a Von Neumann machine. Symbolism’s mecca is MIT, and it’s associated with East Coast institutions.

Connectionism, on the other hand, holds that thought is an emergent property of complex webs of connected nodes--be those neurons or something else (like the virtual nodes in a Deep Learning system). Thought flows not from formal logic, but from the millions of interactions and connections within a system.

The dominant metaphor here is a cloud, which is itself an emergent property of the interactions of billions of tiny water droplets that all affect each other. Connectionism’s locus is Stanford and Berkeley, and it’s associated with the West Coast.

My own Alma Mater, Johns Hopkins, falls somewhere in the middle. One of my advisers, Paul Smolensky, won the Rumelhart Prize for a theory which essentially unifies connectionism and symbolism, by proposing that symbolist processes run on a connectionist virtual machine.

Given the academic underpinnings, you’d think that the debate between symbolists and connectionists would take the form of a friendly dialog between esteemed colleagues.

And you’d be wrong. It’s more like a bloody bar fight.

In 1990, rival cognitive scientist Jerry Fodor published a paper about Smolensky’s theories. The title was “Why Smolensky’s solution doesn’t work”. Nearly a decade later, he published a follow-on paper.

Its title? “Why Smolensky’s solution still doesn’t work.”

Offline, hostilities between the two camps get even more heated. In an infamous lecture in 1988, Fodor sought to debunk the idea “that I think connectionism is the lousiest new idea in cognitive science.”

His reasoning? Connectionism was actually “two lousy ideas, one about mental processes and one about learning”. And it wasn’t new.

He went on to call it “primitive”, and offer the Gandalfian prophecy that “it will pass”.

As it turns out, connectionists seem to be having the last laugh.

To their credit, working computer scientists and engineers are much more pragmatic than the academics. They tend to choose whichever solution solves a problem most efficiently. And in many cases — especially in the last 7 or so years — that has been connectionist-inspired Deep Learning systems.

Driven forward by their practical power and relative ease of implementation, Deep Learning systems built on connectionist architectures are poised to take over the world — or at least massively impact multiple industries.

IBM’s Watson system famously bested human champions on Jeopardy! in 2011, and its modern, Deep Learning-driven incarnation now solves practical problems in manufacturing, sports, and photography, among many other fields. Deep Learning is also changing fields including healthcare, the law, voice assistants, robotics, and scores more.

All of this change is subtly, inexorably — and for the first time in over 75 years — pushing the world away from the Von Neumann architecture.

So far, most Deep Learning systems have been implemented and simulated in software, on top of computing systems that still run on Von Neumann’s principles. But that’s starting to change, too.

In late 2019, Intel launched its first-ever computer chips designed specifically to run neural networks, the basic building blocks of Deep Learning systems. These dedicated chips aren’t traditional computers. They’re purpose-built devices designed to perform computation under a totally new paradigm.

Moving beyond a Von Neumann setup, they implement connectionist-friendly Deep Learning systems directly in silicon. Intel expects them to generate billions in revenue. They’re already reportedly being deployed within Facebook’s data centers.

And in the future, they’ll likely bring Deep Learning to a whole new range of products. Simcam AI, for example, is wrapping a home security camera around Intel’s chip. This allows it to perform complex Artificial Intelligence processes like facial recognition and intruder detection right on the device itself, rather than relying on an off-site cloud server.

Simcam sent me a device to test, and while they’re still working out the bugs, the idea of on-device AI has massive potential, both for the capabilities of our devices and for the privacy of our data.

With a little help from the connectionists and companies like Intel, Deep Learning is finally pushing the world beyond the Von Neumann architecture, and into new models and paradigms of what computing can look like.

But Deep Learning and neural networks are far from the only new architectures. In 2019, Google claimed it had reached the holy grail of modern computing — creating a quantum computer that demonstrated quantum supremacy.

Quantum computing dispenses not only with the basic processes and hardware of the Von Neumann machine but with the underlying assumptions and principles of its mathematics — and our own conception of reality.

In a quantum computer, the basic unit of information, known as a qubit, doesn’t have to exist as a binary 1 or 0, but can actually be a blend of both at the same time — a concept known as quantum superposition. If that sounds hard to fathom, it’s because our brains are fundamentally not set up to understand quantum processes.
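
For the mathematically curious, the superposition idea can at least be simulated classically. The sketch below uses plain NumPy to represent one qubit as a pair of amplitudes and put it into an equal blend of 0 and 1; real quantum hardware does not work this way internally, this is just the bookkeeping.

```python
import numpy as np

# A classical simulation of a single qubit, represented as two complex
# amplitudes. (Real quantum hardware doesn't compute this way; this just
# shows the math of superposition.)
zero = np.array([1, 0], dtype=complex)   # the definite state "0"

hadamard = np.array([[1,  1],
                     [1, -1]]) / np.sqrt(2)

qubit = hadamard @ zero   # put the qubit into an equal superposition
print(qubit)              # amplitudes ~[0.707, 0.707]: both 0 and 1 at once

# Measuring collapses the blend: each outcome appears with probability
# equal to the squared magnitude of its amplitude.
print(np.abs(qubit) ** 2)  # [0.5, 0.5]
```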

It’s like if a day could be both Wednesday and Thursday at the same time. Or if you could be both alive and dead at the same time — like the cat in Schrödinger’s famous thought experiment, which was designed to show just how strange quantum superposition is.

Quantum computing introduces a whole new set of beyond-Von-Neumann architectures. Like Deep Learning, it also has the potential to create massive disruption and change across the field of computing.

Quantum computers make whole domains of problems — many of which are completely intractable for classical computing — much easier to solve. The most significant application is likely in cryptography.

Much of the technology that keeps your online activity secure is premised on the idea that certain kinds of math problems, like factoring very large numbers, are very hard to solve. For quantum computers, though, these problems aren’t nearly as hard. That means much of the data we currently believe is secure could ultimately be accessed by a company or government employing a quantum computer.
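
Here is a toy sketch of the asymmetry that schemes like RSA depend on: multiplying two primes together is trivial, while recovering them from the product is believed to be very hard for classical machines once the primes are hundreds of digits long. Shor’s algorithm, running on a large enough quantum computer, would make that factoring step tractable. (The numbers below are tiny and purely illustrative; this is not real cryptography.)

```python
# Toy illustration (not real cryptography): RSA-style security leans on the
# gap between easy multiplication and hard factoring.

p, q = 61, 53   # real keys use primes hundreds of digits long
n = p * q       # easy direction: 3233, computed instantly

def factor(n):
    # Hard direction: given only n, recover p and q. Trial division is fine
    # for a toy number, but the cost explodes as the primes get longer.
    for candidate in range(2, int(n ** 0.5) + 1):
        if n % candidate == 0:
            return candidate, n // candidate

print(factor(3233))  # -> (53, 61)
```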

Beyond cybersecurity, quantum computers have applications in protein folding, business process optimization, drug development, and artificial intelligence.

Like Deep Learning, they have the potential to cause massive leaps of understanding in these fields, not just the incremental improvements that come from adding more power to the same old Von Neumann machines.

Some of Google’s results are still disputed. But the industry is marching on. Amazon is already providing quantum computing as a service, and other big companies will almost certainly follow suit.

Ironically, as the world moves away from the Von Neumann architecture and towards Deep Learning and ultimately quantum computing, it brings us right back to…Von Neumann.

Guess who co-authored the landmark 1936 paper that introduced quantum logic, the logical system on which a quantum computer would run?

That’s right — the great man himself.

Von Neumann originally introduced quantum logic in a 1932 book and developed it more systematically in a 1936 paper with mathematician Garrett Birkhoff. Their ideas largely sat unused for decades, waiting for technology to advance to the point where they could actually be applied.

Today, that point is finally being reached. Quantum computers — and new architectures like Deep Learning, which will ultimately draw on the power of quantum mechanics — are radically remaking computing, and moving the field ahead in fundamental ways.

For the last 75+ years, Von Neumann’s ideas have provided the underpinning for the entire field of computing. Computing, in turn, has radically remade our world, impacting every aspect of business, government, finance, medicine, culture, and daily life.

The fact that Von Neumann’s ideas have driven these radical changes for three-quarters of a century — and now, through quantum computing, stand poised to lead computing for another century or more — is a remarkable testament to the great man’s brilliance and impact on the world.

Though little-known outside computer science, Von Neumann is one of the seminal figures of the 20th century. As his ideas continue to be applied, he may yet prove to be a seminal figure of the 21st as well.
