The Bandwidth++ Scenario

Trent McConaghy
Aug 18, 2016 · 4 min read


The Bandwidth++ scenario is, in my view, the most pragmatic response to the threat of AIs controlling all of humanity’s resources. It puts us on equal competitive footing with silicon, and it can happen incrementally using today’s market forces.

In this scenario, the bandwidth between our selves and external computation rises to match the bandwidth within our selves. And then it keeps going. Eventually, we can unplug.

Near-Term Market Drivers

Imagine that we get the form factor right for Google Glass, and it becomes ubiquitous.

But inputs will be a problem. Hand gestures and typing could be too slow or too annoying, and voice could be too slow or too loud. Imagine if you could type by thinking about it, using EEG-based brain-computer interfaces (BCIs) or eye tracking (not literally “thinking,” but it feels like it). BCI-based typing goes back to at least the 1980s, and has been critical for people with locked-in syndrome. It continues to get better; for example, 2011 research at Tsinghua University achieved 10 words per minute. Combine this with eye tracking and word prediction, and we could quickly get BCI-based typing that’s competitive with traditional hand typing. BCIs can also drive mouse cursors and other input modalities.
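To see why word prediction matters so much for a slow character-by-character channel like BCI typing, here is a toy sketch. The vocabulary, frequencies, and function names are made up for illustration; real predictive keyboards use far richer language models.

```python
# Toy word prediction over a slow input channel: the user enters a few
# characters, the system proposes likely completions. Vocabulary and
# frequency counts here are invented for the example.

WORD_FREQS = {
    "the": 500, "they": 120, "there": 90, "think": 60,
    "thinking": 30, "type": 20, "typing": 15,
}

def predict(prefix, freqs=WORD_FREQS, k=3):
    """Return the k most frequent vocabulary words starting with prefix."""
    matches = [w for w in freqs if w.startswith(prefix)]
    return sorted(matches, key=lambda w: freqs[w], reverse=True)[:k]

def chars_saved(word, prefix):
    """Characters the user avoids entering by accepting a prediction."""
    return len(word) - len(prefix)
```

Accepting `"thinking"` after entering only `"th"` saves six character entries, which is exactly the kind of multiplier that could make a 10-words-per-minute raw channel competitive with hand typing.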

We can call this sming — silent messaging — after Vernor Vinge’s 2006 novel Rainbows End.

Medium-Term Market Drivers

These inputs will bring benefits beyond typing, benefits that feel almost magical. These include:

  • Perfect memory. If your glasses record everything you see & hear all day long, you can retrieve any moment simply by running a BCI-typed text search over your recent memories.
  • Communicating in pictures and videos. We all appreciate that “a picture is worth a thousand words,” but that has never helped much in conversation. Now consider that you could share your recent pictures and videos with others simply by thinking about it. It’s a huge increase in bandwidth compared to mere words.
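The “perfect memory” idea above can be sketched as a text query over timestamped transcripts of the day’s recordings. The data model, field names, and sample clips below are all hypothetical; a real system would search speech-to-text output and visual annotations at much larger scale.

```python
# Minimal sketch of lifelog retrieval: find recorded clips whose
# transcript contains every word of a text query. Data is invented.

from dataclasses import dataclass

@dataclass
class Clip:
    timestamp: str    # when the clip was recorded
    transcript: str   # speech-to-text of the audio track

def search_memory(clips, query):
    """Return clips whose transcript contains every query word."""
    words = query.lower().split()
    return [c for c in clips
            if all(w in c.transcript.lower() for w in words)]

clips = [
    Clip("2016-08-18 09:14", "meeting about the glass form factor"),
    Clip("2016-08-18 12:30", "lunch, talked about BCI typing speeds"),
    Clip("2016-08-18 15:02", "whiteboard session on eye tracking"),
]
hits = search_memory(clips, "bci typing")  # the lunch clip matches
```

The point is not the search algorithm — it’s that once your day is recorded and transcribed, retrieval reduces to the same text queries you could already sming.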

And more, as illustrated below. (These are from a talk I gave to a room full of neuroscientists in 2012.)

Magical-feeling new capabilities like these will drive BCI- and eye-based inputs in glasses to become ubiquitous.

The Market Drivers Continue

As with the current smartphone market, there will be market pressure to make these devices continually better. We’ve come a long way from the first BlackBerry, to the first iPhone, to now. There will be pressure to improve accuracy and to simplify search.

One way will be to think of an image and use it as the query into your memories. This sounds like science fiction, but there are already compelling results by Mary Lou Jepsen and others. When you think of an image, it lights up your visual cortex, a roughly 2D sheet near the back of your head. Sensors can pick up the patterns of activity there, and AI techniques can then match those patterns against your recorded images and videos to find the most similar ones.
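The retrieval step here is, at its core, a nearest-neighbor match: a feature vector decoded from cortical activity is compared against the feature vectors of stored images. The decoding itself is the hard, open research problem; the sketch below assumes it and shows only the matching, with invented three-dimensional vectors standing in for real high-dimensional features.

```python
# Nearest-neighbor image retrieval by cosine similarity. The "decoded"
# vector stands in for a pattern read from the visual cortex; the
# library vectors stand in for features of recorded images. All values
# are invented for illustration.

import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def most_similar(query_vec, library):
    """library: dict of image_id -> feature vector. Return best match."""
    return max(library, key=lambda img: cosine(query_vec, library[img]))

library = {
    "beach.jpg":  [0.9, 0.1, 0.0],
    "forest.jpg": [0.1, 0.9, 0.2],
    "city.jpg":   [0.2, 0.2, 0.9],
}
decoded = [0.85, 0.15, 0.05]   # stand-in for a decoded cortical pattern
best = most_similar(decoded, library)  # the beach image
```

Real pipelines would use learned embeddings and approximate nearest-neighbor indexes, but the shape of the problem — decode, embed, match — is the same.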

One key to these devices getting continually better is improving the quality of BCI scanning. There will be market pressure to read the brain at higher frequencies, at higher resolution, and more deeply. This is far more market pressure than the main current motivation for improving BCI technology: helping the sick.

As these devices keep improving, the communication bandwidth between our brains and computers will increase, and in turn the communication bandwidth between each other. We will learn to send off computation jobs by thinking about them, and to receive the results the same way. Non-brain computation will grow tremendously, alongside non-brain storage. As the bandwidth between computers and our brains reaches parity with the brain’s internal communication, it will become difficult to distinguish what’s in the brain and what’s not.

Endgame

Perhaps after ten years of this, the “brain” part will house only 1% or 0.1% of the computation and storage, and will feel pretty obsolete. Our “selves” will have been ported, slowly but surely, to the silicon side.

And one day, we just might disconnect the brain part. The transformation will be complete. We’ll be living in silicon.

And we’ll be competitive with other silicon intelligence. If you can’t beat the machines, join ’em.

Appendix: Related Talks

  • T. McConaghy, “Reflections of a Recovering Bio-Narcissist on the Singularity”, riseof.ai, Berlin, Aug 18, 2016 [slides][video]
  • T. McConaghy, “AI, Blockchain, and Humanity: The Next 10 Billion Years of Human Civilization”, Convoco 3.0 — AI and the Common Good, Berlin, Apr 1, 2017 [slides][video]
  • T. McConaghy, “A Solarpunk future from Today’s AI, Blockchain, and Brain Technologies”, NASA Ecosystem Futures webinar, Oct 18, 2023 [slides — PDF][GSlides]
  • T. McConaghy, “A Solarpunk future from Today’s AI, Blockchain, and Brain Technologies”, Foresight Institute Vision Weekend, France, Nov 18, 2023 [video][slides — PDF][GSlides]
