The Unfinished Revolution

Personal Computers and the Web were intended to make us all clearer thinkers and problem solvers. Despite remarkable progress, we’re still very far from done.

Philip Grabenhorst
Mar 7, 2023
A pen-sketched outline of Eugène Delacroix’s “Liberty Leading the People”, where Liberty’s flag features an Apple Macintosh.
Credit: Apple Insider

“And why then do you think there is a last revolution? There is no last revolution, their number is infinite…. The ‘last one’ is a children’s story. Children are afraid of the infinite, and it is necessary that children should not be frightened, so that they may sleep through the night.” — We, by Yevgeny Zamyatin

We have a problem. It’s a problem of a path not taken, of forgotten convictions, and slumbering ambitions. I don’t want to sound like an alarmist, but with each passing year, it becomes more and more pronounced and more difficult to undo. It has to do with information — yours and mine. I’m not talking about information privacy, though that is a problem. I’m not even talking about how much of it we have to contend with on a daily basis, though that might be considered a prominent sub-problem. No, I’m talking about the Information Revolution itself.

As far as Revolutions go, Zamyatin’s We provides some useful context. In We, a revolution that started out with ostensibly good motives has morphed into an uninhabitable shadow of its intended glory, handicapping the citizens it claimed to help. When we meet the protagonist, it’s almost as if he’s called it quits. “Our Revolution was the last one,” he says to his foil and lover, “no other Revolutions may occur … everybody knows that.” When presented with the notion of a “final revolution”, though, she quickly retorts that “there is no last one, revolutions are infinite.”

We didn’t end so well for its protagonists. However, like the lead characters, we don’t have to sit idly by. Not all of us live under the kind of authoritarianism described in We, but we are all stakeholders in this larger project — the Information Revolution. We have the ability to continue the Revolution and keep things going in the right direction. However, to understand how to do that, we have to understand the goals of the revolution, as well as where we are and how we got here. That requires a little bit of history.

Systems, Cybernetics, and Human Beings

The history of computation and the modern computer is a long one. Instead of reproducing it in detail, I will mention only those developments that bear directly on our current predicament. The modern personal computer was largely shaped by work done in the San Francisco Bay Area in the 1960s and 1970s. Common histories often include near-mythical references to the Xerox PARC research center and its influence on entrepreneurs in the personal computer space, such as Steve Jobs and Bill Gates. The work at PARC, in turn, was heavily influenced by a lesser-known lab, the Augmentation Research Center (ARC). Part of SRI International (then known simply as the Stanford Research Institute), this lab produced a famous piece of work known as “The Mother of All Demos,” in 1968. This was a proof-of-concept demonstration wherein Douglas Engelbart and several other researchers introduced technologies that we take for granted today — the mouse, graphical user interfaces, hypertext, word processing, video conferencing, and so much more. It’s rightfully famous, and its work set the course of consumer electronics for decades.

Things get even more interesting when you look at the research that inspired Engelbart and his colleagues. The demonstration the world saw in 1968 was the product of roughly a decade of work. The goal of this work was stated in a 1962 report, published by Engelbart and prepared for the U.S. Air Force Office of Scientific Research. In the report, entitled Augmenting Human Intellect: A Conceptual Framework, Engelbart writes about many of the technologies the lab would later include in the “Mother of All Demos.” These technologies weren’t ends in themselves, however. Engelbart and his fellow researchers were engaged in a profound research project. Their stated goal was to “[increase] the capability of man to approach complex problem situation[s] to gain comprehension to suit his particular needs and to derive solutions to problems.” In the report, computers are mentioned as only one potential, though promising, route by which this might be accomplished.

To understand Engelbart’s report, one needs to understand the scientific landscape of the 1940s, 50s, and 60s. In this period, there was a popular trend known as “Systems Theory” or the “Systems Sciences.” Essentially, it claimed to be a novel approach to the understanding of complex systems. It was contrasted with the predominant theme of science up to that point, referred to as “reductionism.” In the “reductionist” approach, it was said, people focused on the building blocks of things more than on their relationships. Ludwig von Bertalanffy, sometimes credited as the originator of General Systems Theory (GST), published many papers on the topic and compiled them as a book in 1968. It promised to unify disparate scientific disciplines and create a language to talk about and solve some of humanity’s biggest problems. These developments overlapped with another trend known as Cybernetics. Slightly more specific than GST, the Cybernetics discipline of the time didn’t look anything like the “cybernetics” of the common vernacular. (No Borg to be found here.) Technically speaking, a cybernetic system is a goal-seeking system. The goal can be external, as in the case of a targeting system, or internal, as in the human body’s maintenance of homeostasis. There was much overlap between the two fields. For instance, W. Ross Ashby’s 1956 book, An Introduction to Cybernetics, is considered foundational in both Cybernetics and General Systems Theory.

When Engelbart wrote about “increasing the capability of man … to derive solutions to problems,” he had both of these frameworks in mind. To Engelbart, the goal was simple, though generalized: “solve the problem.” The Cybernetics approach brought a lot to the table, and it helped in answering questions like this one: given our goal, what is the best (most efficient) way of reaching it? The generalized Systems approach gave Engelbart a framework for describing the thing that solves the problem. Engelbart abstracted this thing, allowing us to realize that it could take whatever form we needed, as long as it solved the problem. He called it an “H-LAM/T.” This mind-numbing acronym becomes even more dull when you spell it out: “a Human, with Language, Artifacts, and Methodologies, plus Training.” For people who don’t speak whatever language the ARC folks were speaking back in those days, this just means that we humans are one part of a larger system that works to solve a problem. Because these problem/solution pairs are often defined and guided by human intelligence, Engelbart and his predecessors referred to this process as intelligence augmentation — the “augmenting human intellect” of the report’s title.

At this juncture, it is important to emphasize that the pieces of Engelbart’s problem-solving systems could be anything. A human swinging a hammer with the goal of fastening a picture to the wall could just as easily be considered an “H-LAM/T.” In the nomenclature of his framework, you’ve got the human. You might have some language — such as a drawing of where you want the picture to hang. You definitely have an artifact — the hammer/nail tool pair. Lastly, you have the methodology. All of this requires at least a little bit of training. However, when we think of augmenting human intelligence, we might think of modern “knowledge workers.” Indeed, during the “Mother of All Demos”, Engelbart specifically focuses on people who do “intellectual work”, realizing that this realm was ripe for enhancement. This doesn’t automatically have to mean computers, though. In his paper, Engelbart quotes at length, word for word, from an article written in 1945 by Vannevar Bush. This is another famous piece of computer-history lore, but it doesn’t involve a computer. It just happens to describe many of the things we use computers for today — the representation of information using links and associations that mirror the human thought process. Bush envisioned it as a device he called the memex (relying on microfilm, mind you) that would act as an extension of our memory, helping us to recall the things we consume, the concepts they represent, and the means by which we might apply them.

Fast forward a few years, past Bush and Engelbart, and we can trace this crusade closer to our modern day. Tim Berners-Lee, often credited with inventing the modern web, mentions Bush and Engelbart as inspirations. In his book, Weaving the Web, he describes the early days of the idea at CERN. Its origins? A humble personal project. Because Berners-Lee had come into CERN as an outside contractor, he needed to get up to speed quickly on many different projects. He built an early precursor to the Web, a program named Enquire, as a way to do so. It used an associative medium, just as Bush and Engelbart had described. Throughout the rest of the book, Berners-Lee describes the evolution of a system that aspires to “connect everything, potentially, to anything” and satisfy Bush’s vision of giving everyone access to the “inherited knowledge of the ages.”

Chronologically, though, this is where things start to go sideways. Thus far, we’ve seen a trail of thought starting with the earliest computer pioneers. Their goal was to empower us, to extend our memories, to make us clearer thinkers, and, at the end of the day, better problem solvers — by whatever means necessary.

Humans, Being Humans …

A recurring subject in Tim Berners-Lee’s book is what is missing from the web. The last two chapters of the book are a wish list of everything TBL would still like to see added. Many are common-sense additions — things that we “sort of mostly” do now, with the web in its current state, but that could be improved. Others are more fundamental, more far-reaching, and can only happen with a great many changes to how we interact with the web. One of these changes isn’t so much an add-on as a throwback. In the original version of the web and in early web browsers, the browser was used both to consume information in the form of hypertext documents and to create them. Yes, we can use certain services to add to the web today, such as Wix, WordPress, or any other site-building service, but this is fundamentally different from what TBL envisioned. The original version of the web would have had web pages behave more like word-processor documents do now — if you own it, you can open it and change it in one pass. This might sound like a “nice feature”, but there’s more to it than that. The gist here is that the intention of the web was that we would both create and consume.

Tim Berners-Lee published Weaving the Web back in 1999. Our technology has changed for the better since then … hasn’t it?

For sure, there have been a lot of changes. Our personal technology has become more stable, faster, smaller, and all-around more pleasing to work with in the past 20+ years. There have also been fundamental changes that Weaving the Web could not have foreseen. Google has largely succeeded at its goal of “organizing the world’s information”, and continues to improve our search results year after year. It is easier than ever to discover new information. The iPhone and the suite of mobile devices occupying our pockets have brought this information closer to us than ever. From this perspective, the Information Revolution has largely succeeded. An important distinction must be made at this juncture, though. The capacity to access data is not the same thing as solving problems. Furthermore, there is a difference between data and information. But if we’re not acquiring information and solving our problems … what are we doing?

According to one measure, we use 70% of this capability on TikTok, Snapchat, YouTube, and Facebook. In the last couple of years, public attention has focused on the kind of information these channels feed us, from politically polarizing echo chambers to toxic masculinity in British schools. Even when we’re not passively consuming information that we would normally avoid, our personal computers are killing our productivity. We spend too much time getting distracted by notifications and pieces of information that don’t contribute to the problems we want to solve. Cal Newport, in his 2016 book Deep Work, argues that this culture of distraction has made focused problem-solving a rare skill. In response, we see a small but vocal minority moving back to devices with fewer features.

But hold on. The devices that we built to make us better problem solvers … are making us worse at solving problems. Let that sink in for a moment … WHAT GIVES? 🤬

The Holy Grail

The problem isn’t going away. Last week, Meta announced its hardware development plans for the next several years. The concepts will make any technologist giddy. Through a slew of iterative devices, the company hopes to move toward what Mark Zuckerberg modestly refers to as “the holy grail”: a set of glasses that seamlessly connect our virtual and physical lives. We won’t have to carry phones anymore, because the glasses will be powerful enough to function untethered, in contrast to many current headsets. Information and displays will be projected onto the scenery. Oh, and we’ll control it all with neural interfaces. How does Meta plan to make any money off of all of this expensive, difficult-to-manufacture hardware? Alex Himel, Meta’s vice president of augmented reality, said that “We should be able to run a very good ads business.” That’s right, the holy grail will interrupt your reality to bring you … Samuel L. Jackson and your new Capital One credit card.

Last month, on February 7th, Microsoft’s Yusuf Mehdi took the stage and pulled off the coup of the decade. Through its partnership with OpenAI, Microsoft is actively integrating natural language interfaces into many of its products, starting with Bing. The announcement and its associated demo have taken the IT world by storm, not least because they put Bing — long the industry underdog — in a position to challenge Google’s decades-old dominance of search. What does it do? Well, it’s a conversational AI, trained on an internet’s worth of data. You can ask it questions just like you would a friend. What do we do with it? Well, if Mehdi’s demo is anything to go by, we’ll be using it for buying TVs…

These are revolutionary technologies. They fundamentally change the interfaces we use to interact with computers. They will make it that much easier to access the wealth of knowledge across the web whenever we want. It’s almost as if we’ve taken the convenience of the iPhone, wrapped it up in natural language, and tied an AR bow on top. But what will we use them for? More of the same. Both Google and Facebook rely on advertising revenue for the overwhelming majority of their profits. This gives both companies a financial motivation to expose you to other people’s information, and as much of it as possible. Our distraction has been incentivized. Tim Wu, of Columbia University, refers to these sorts of business models as “attention merchants,” and it’s remarkable just how many of them there are.

But ads aren’t the only problem here. Apple and Microsoft have made their money through other means — hardware sales, recurring service sales, and licenses. Even these more traditional models present a problem. Whether it be the latest version of Windows or the newest iPhone, making sales means differentiating yourself from competitors. Sometimes this means adding a killer feature. For Apple, at least, this has meant providing a world of potential killer features through the App Store. Unfortunately, this panoply of potential workflows — which Apple is incentivized to promote — makes it more difficult for us to focus on the workflows we really need. In their book Simple, Alan Siegel and Irene Etzkorn lay into this “everything and the kitchen sink” situation repeatedly. This universe of endless choices produces a scenario where we end up “fusing the noble goal of self-determination with the difficult realities of decision making,” such that we “end up with a situation where people are overwhelmed and feel inadequate if they admit confusion.” A wide variety of choices is good for sales, but it’s bad for problem-solving.

Endless options. Endless streams of new information to discover. Endless assortments of new products some algorithm somewhere thinks we might want to buy. Will it always be like this?

Time for a Change

Public opinion is shifting. I don’t know about you, but my phone exists in a perpetual silent mode. Most of my peers do the same, and one of them actually bought the new Palm phone. Two years ago, with iOS 15, Apple introduced Focus, a setting that silences every notification you don’t explicitly allow. People are tuning into the idea that their attention spans and capacity for focus have been negatively impacted by their devices. But does this go far enough? Is it enough to ameliorate the negative impacts of technology? Shouldn’t we simultaneously be working to take full advantage of its promise?

It would be easy to conclude that this sort of work ended with Vannevar Bush, Douglas Engelbart, and the like. It’s convenient to rely on platitudes, such as “they just don’t make ’em like they used to”, and conclude that today’s problem solvers will forever be a step behind their far-sighted predecessors. I don’t think this is the case, not least because there are still a lot of smart, dedicated people working on the problem. Tim Berners-Lee is still around and working to weave the web we all know and love. Howard Rheingold traced much of this same history in his 1985 book, Tools for Thought. More recently, Michael Nielsen and Andy Matuschak published a fantastic essay on the subject in 2019 — complete with an experimental demo for enhancing cognition in a particularly gnarly topic, quantum computing. Personally, I am a great fan of Linus Lee’s work on personalized databases, search engines, and the Mental GPS. There are still a lot of people working very hard to make us better problem solvers, using whatever tools they have at their disposal. Not only do these hard-working pioneers have the investments of their predecessors to build on, but we’re also just better equipped now than we were before. In the 1960s, the theory and practice of software engineering were in their infancy. We didn’t have the empirical data that decades of UX research have gifted us. On top of that, we simply know more about the most important part of Engelbart’s “H-LAM/T” system: the human mind. We can and should continue to build systems that make us better problem solvers. What might these systems look like?

Show, Don’t Tell

I think it’s easier to show you what I’m thinking than to describe it. Let’s say you’re visiting a friend who is quite the early adopter. She’s very tech-savvy, but only as a user, and probably hasn’t written a line of code in her life. She works remotely, but not always from home. She’s some kind of a journalist … maybe? It’s one of those things where you’ve asked her a couple of times but never quite remember the answer afterward. Anyway, she’s working from home this morning, clicking, clacking, and typing away at her keyboard. You walk up from behind and see that she’s reading an article from another news outlet.

“What are you working on?” you ask.

“I’m working on a report.” She spins around, smiling. “Want to see?”

“Sure!”

She spins back around and closes the article, revealing the notes she’s compiled thus far. It’s a sparsely laid out sheet of text. Some words and word groups are gently emphasized and others are highlighted, obviously linking somewhere or other. It looks vaguely like a regular notes file, at first. She mouses over one of the phrases and a network structure appears, linking article titles and previews in some kind of web. The links are larger for some articles than others, and she clicks on the first, most emphasized one. She appears to be setting the scene, somehow. She then closes the note file and opens her voice assistant. “Computer, can you open the article Helen sent me this morning?” The computer dings in affirmation and the site appears on-screen. It’s obviously been heavily annotated, with notes scrawled in the margins. Some notes are linked, yet again, by directional lines running gently between them. They grow into the foreground as your friend mouses over them, but then diminish as she moves away, back toward the top of the article.

“You named your copilot Computer?” you ask, with an air of accusation.

“I just haven’t figured out how to give it Majel Barrett’s voice, yet,” she smirks back, selecting the header image. “Here’s the main event. An earthquake in Port Moresby resulted in several injuries, last week. An apartment building collapsed, but it was only a couple of years old. Thankfully, there were no deaths, but we’ve been asked to investigate the building owners and construction company to find any evidence of negligence. My job is to interview the victims, see if they have any immediate needs, and make a recommendation to my bosses about how we should approach the building owners.”

“Big job,” you say. Glancing over the screen, you have trouble making sense of some of it. “What’s going on with these lines?” You point to the highlights on the screen and the arrows between them.

“Well, one of the hardest things to keep track of in these cases is who did or said what, when. This article mentions each victim between the time of the earthquake and their arrival at the hospital. It’s from a small local outlet, so it’s not as well written as I would have liked, but it has a lot of information. It’s just tough to parse. What I do is, I add a note — called an assertion — on anything that I know for sure, like when Mrs. X arrived at the hospital. Then I find out that they rode with ambulance crew B, and that supports their story for arrival time. Because the first assertion depends on this detail, I add another note on this detail and link it to Mrs. X’s arrival at the hospital. How did she get in the ambulance? Well, her neighbors said they found her collapsed on the sidewalk after escaping the building. I’ve linked her hospital ride to depend on the statements from the driver, which links to the names of the neighbors. See? I can piece together the whole story in a simple, easy-to-follow chain of verifiable events, even though it’s scattered around the article.”

“Wow,” you say, “what do you do with it?”

“Well, I’ll use it as a reference when I make my report. Probably, I’ll review it before each interview using Facts. I can create links that indicate dependencies across notes and documents, so the note that I use for the interview can build on this chain. I’ll be curious to add on what happened after they got to the hospital. However, I can also do this with it.” She clicks on the assertion and hits a key command. Instantly, the document fades away, and the dependencies rearrange themselves into a timeline. “Because I have times associated with these events and statements, I can compare events chronologically. So, if one victim says ‘I remember seeing such-and-such neighbor,’ I can check here and see whether or not that neighbor was still there — and their memory is accurate — or if, in fact, that neighbor had already been carted off. I used to use bullet lists for this sort of thing, but it’s so hard to hold two or three of those side by side and line them up — it takes just a little bit too much mental math for me.”

“Cool. When’s the interview?” you ask.

She opens her project planner, which shows calendar events, to-do lists, and her pinned notes in an easy-to-follow dashboard. “It’s at lunch, today. Want to come?”

“Sure.” Though it seems she’s being a little cavalier with her journalistic practices, lunch sounds great. As she opens up the source article again, you point to the bold, blue-highlighted terms. “What do those do?” you ask.

“Oh, those. Those are the facts I mentioned, earlier.”

“Well, duh, those are facts.”

“No, I mean I marked them as things that I need to remember.” Looking again, you see it’s the name of a victim’s cat. “When I meet with them, later, I want to ask them about things that matter to them. I don’t want to use an outline or anything because I hate feeling stilted and impersonal. So, what I did was I highlighted the cat’s name and added it as a Fact. As I’m reading the article, the Facts will show up — usually between paragraphs — and ask me to try to remember them. I can also review them after I’m done with the article, to keep them fresh. It’s kind of like using Duolingo, but for the things you read. Haven’t you ever had that thing happen when you get to the end of an article, and you can’t remember anything you just read?”

“Yeah.”

“Well, this fixes that. I also like doing this between paragraphs.” She mouses over the red, faded text between one paragraph and the next, and it springs to life. “These are my notes, but they’re just comments — I haven’t linked them to anything special like I have the assertions. I use these to ask questions about the things I read. If I go back through on a second pass, they guide my research questions, like so. I’ll usually add more assertions on those articles and link them to the question I’ve answered … ad infinitum … but if I’m just reading something from beginning to end, I can just use them to rant about what I’m reading.”

You point to a set of ellipses next to one of the rants. “What does that mean?”

“Oh, those are a set of comments that Computer made and I asked it to save them for me. You see, Computer reads the same article, while I’m reading it. It even keeps track of where I am in the article when I’m reading something, so when I ask it a question, it’s almost like I’m talking with you about the article. Sometimes it doesn’t have anything helpful to say — just general stuff, or ‘I’m not sure if we can ascertain that from this’ — but it helps me. I’ve found one of the best ways to prepare for a presentation is to talk through the details out loud. Sometimes, real people just aren’t available, but the Computer does okay because it will ask me questions about what I’ve read, including my assertions, and force me to defend my point of view.”

“Wow, that’s great practice,” you say. Your friend closes the article and moves back to her note-taking app. As she mouses along, the cursor happens to move past the name of one of the victims. In the list of article previews below it, you see the article your friend had just closed. “Why did you link back to the article, if you already have it linked in your sources?”

Your friend shakes her head. “Oh no, that’s just a temporary link.”

“A what?”

“I didn’t link it; it’s something that the notes app generates. It works like Google, but it uses the note that I’m writing as the query. So, if it sees something that I’ve thought about before, it will tell me so. It just happens that the victim’s name shows up in an article that is relevant to this one. Watch this.” She starts writing nonsense about the victim’s cat. It immediately shades over, and when she hovers over the shaded reference, a link appears with a reference to the Fact about the cat’s name. “See? Pretty cool.”

“Doesn’t that get annoying, though, if you’re trying to read something?”

“Oh, for sure. Sometimes I just have to read my old notes from beginning to end, or I’m writing something and don’t want the distraction.” She hits a series of command combinations, and each link type fades out, one by one. She hits one last key command, and all of the text in the note disappears, save for the sentence she is writing. The paragraph around it is still present, but it fades out to the point where you can only focus on the sentence in view. “Here’s another thing I like doing — especially for books — it blocks out everything but the sentence that you’re reading, so you don’t try to jump around too much.”

You can tell that it might take some getting used to — maybe some adjustments in reading or note-taking habits — but it’s pretty cool. Besides, it seems to work for her. “How long have you been working on this case?” you ask.

“Oh, just this morning is all.”

You roll your eyes, wishing you could get that much out of a morning’s reading.

“Oh, we should get going,” she says, looking at the time. She closes her laptop and picks up her tablet, panning up to see all her open apps. The notes app, active on her laptop a moment ago, has moved over, allowing her to continue running it on her tablet. “Do you mind driving while I read?”

“Sure …”

After the Interview

The interview went off wonderfully. Your lunch partner, a man in his mid-forties, had been more than happy to discuss the accident, how he had come to move into the apartment, his current plans, and so on. At one point, the subject of his cat came up and you both offered condolences. Towards the end, the victim showed you both a document, a letter he had received from the building owner. You couldn’t help but notice that your friend didn’t bring any notes with her, and she didn’t take any either. On the drive back, you decided to ask about it. “What’s up with that?”

She tapped the rims of her glasses. “It all showed up here.” She pointed to her bracelet. “This thing tracks my hand movements, so I could take notes as we talked. I didn’t have to do all that much of it, though, since these things can transcribe what I’m hearing as text. All I have to do is mark which pieces of text I want to save. Even that document at the end was easy. My glasses picked up the text content and all I had to do was select the range I wanted, copy it, and save it. I might have to go back through and organize it a bit, that’s all.”

“And what about the notes you were reviewing, earlier? I thought you would have brought your tablet.”

She rolled her eyes. “It doesn’t matter what I bring, it’s all here. I was able to remember most of the important stuff — their self-reported chronology of the accident, for instance — because Facts take advantage of how our brain stores declarative information. The only thing I almost forgot was the cat’s name … but because we were discussing the cat, my glasses recommended the note to me based on the context. I don’t keep those kinds of suggestions on because they can be distracting, but I toggled it when I realized I didn’t have the cat’s name. After that, it was only a wrist-monitored tap away, in my peripheral vision.”

“What do you mean, it doesn’t matter what I bring? You can’t possibly store anything in those glasses.”

She shrugged. “Actually, I think that it has quite a bit of onboard storage. But that’s kind of a silly question these days, because my documents are distributed across all of my devices. I’m never sure from day to day what something is ‘physically’ on because it just looks like one big file system to me. If something’s not physically on the glasses, it will automatically go and get it from my laptop, my tablet, or wherever it happens to be.”

She seemed to be growing tired of technical questions. Leaning back in her seat, she spoke: “Computer, could you please play Vivaldi’s Spring on the car radio?” The sound of strings filled the car as you made your way back to the apartment.

Demo Summary

This saga demonstrates several key features. Firstly, as users, we’ve organized our information. The note-taking app our imaginary friend uses is inspired by Linus Lee’s Hypertext Maximalism, but each of the interfaces we see is enabled by something like a personal search engine. The engine indexes our notes, the articles and books we read, our calendar, our contact list, and anything else that is ours. All of this data gets stored and distributed across our devices as part of a single, ubiquitous file system. The important part here is how we use our search capabilities once we have them. As in Linus’ note-taking demo, it prompts us to make associative connections to information we’ve already vetted as valuable. We might make some of these connections on our own. However, the system might also prompt a connection that we would otherwise have missed. On top of this, the conversational companion from our demo, “Computer”, has access to this index as well. Likely, its language models are extended using all of these notes and their linked contents — prioritizing relationships that we’ve marked as valuable. As our personal, conversational copilot evolves and digests content that is obviously important to us, it will get better and better at helping us solve problems and reach the goals we set for ourselves. We choose what is important and what isn’t. One way we do this is by selecting facts we want to remember and using spaced repetition and other established memory techniques to internalize them. In addition to just adding stuff, we should exercise some kind of “information hygiene” and delete things that are no longer useful to us.
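To make the personal-index idea a little more concrete, here is a minimal sketch in Python. Everything in it is hypothetical — the Item and PersonalIndex names, the token-overlap scoring, and the sample entries are illustrative stand-ins, not a description of any existing product — but it shows the basic shape of a system that indexes what is ours and surfaces “temporary links” as we write.

```python
# A hypothetical sketch of a "personal index": everything here (the Item and
# PersonalIndex names, the token-overlap scoring, the sample entries) is an
# illustrative stand-in, not a real product's API.
from dataclasses import dataclass, field
import re

def tokenize(text: str) -> set[str]:
    """Lowercase word tokens; a real system would use stemming or embeddings."""
    return set(re.findall(r"[a-z']+", text.lower()))

@dataclass
class Item:
    kind: str   # "note", "article", "fact", "event", ...
    title: str
    text: str
    tokens: set[str] = field(init=False)

    def __post_init__(self) -> None:
        self.tokens = tokenize(self.title + " " + self.text)

class PersonalIndex:
    """Indexes whatever is ours and suggests associative, 'temporary' links."""

    def __init__(self) -> None:
        self.items: list[Item] = []

    def add(self, item: Item) -> None:
        self.items.append(item)

    def suggest(self, sentence: str, limit: int = 3) -> list[Item]:
        """Rank stored items by crude token overlap with the sentence
        currently being written -- the demo's unprompted 'temporary links'."""
        query = tokenize(sentence)
        scored = [(len(query & it.tokens), it) for it in self.items]
        scored = [(score, it) for score, it in scored if score > 0]
        scored.sort(key=lambda pair: pair[0], reverse=True)
        return [it for _, it in scored[:limit]]

# Usage: writing about the victim's cat surfaces the Fact saved earlier.
# ("Whiskers" is a made-up name for the sake of the example.)
index = PersonalIndex()
index.add(Item("fact", "Victim's cat", "The victim's cat is named Whiskers."))
index.add(Item("article", "Port Moresby earthquake", "An apartment building collapsed last week."))
for hit in index.suggest("Ask how the cat, Whiskers, has been doing"):
    print(hit.kind, "->", hit.title)
```

The same index is what would let the glasses surface the cat’s name mid-conversation — the suggestion logic doesn’t care which device is asking.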

The devices themselves don’t matter — it’s the personal web of information that we create that matters. For any given problem that we choose to solve, this web, or a subset of it, represents the problem context. The devices in this demo simply give us ways to shape it, work within it, and direct it. For instance, it didn’t matter where our journalist opened her notes app — the information was right there. Here, I’m imagining something akin to Apple’s Handoff, but more in-depth, where a list of running applications is spread across all of your devices, and any one of them can present its interface on whatever magic slab of glass you happen to have handy. Again, the devices don’t matter. The same can be said of the glasses in this example. They might be futuristic and cool, sure, but they only provide an extra access point to the web of information our protagonist is crafting. This is one of my favorite examples. I would love to be able to use this in real time on real books, interacting with them and taking searchable notes as I go — as opposed to scrawling pencil marks in the margins.
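One way to picture that Handoff-like behavior — again a hypothetical sketch under my own assumptions, not Apple’s actual protocol — is to treat each running app as a small, device-neutral session record in the user’s synced store, which any signed-in device can fetch and render:

```python
# A hypothetical model of device-agnostic app sessions (not Apple's Handoff).
# Sessions are plain records in the user's synced store; whichever device the
# user picks up simply renders the most recent state.
from dataclasses import dataclass
from typing import Optional
import time

@dataclass
class AppSession:
    app: str            # e.g. "notes"
    state: dict         # open document, cursor position, view mode, ...
    updated_at: float

class SessionStore:
    """Stands in for the cross-device storage layer from the demo."""

    def __init__(self) -> None:
        self._sessions: dict[str, AppSession] = {}

    def publish(self, session: AppSession) -> None:
        self._sessions[session.app] = session

    def resume(self, app: str) -> Optional[AppSession]:
        return self._sessions.get(app)

# The laptop publishes its state; the tablet resumes exactly where it left off.
store = SessionStore()
store.publish(AppSession("notes", {"doc": "earthquake-report", "cursor": 1042}, time.time()))
session = store.resume("notes")
if session is not None:
    print(session.app, session.state)
```

The point of keeping the record device-neutral is that the tablet never resumes “the laptop’s notes app”; it resumes the user’s notes session, wherever that session happens to be rendered next.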

Build memories, build webs of associative understandings. Walk this web often. Practice the workflows and methods this web enables or encompasses. In doing so, we put the power of recommendations and reinforcement to use in our daily lives.

Implementation

Some parts of this demonstration already exist. Most of them don’t. Besides the technological interfaces that we’re waiting to see develop, we also need methodologies, training, and helpful defaults. We can start working on these, now.

Of all of the companies that I’ve listed thus far, Apple is in the best position to make these kinds of systems. Firstly, it has a long history of focusing on empowering tools, not exploitative ones. The Xerox PARC demos mentioned at the outset are in its DNA — they inspired the first Mac. The HyperCard program, circa 1987, not only presaged the Web but focused on many of the memory and associative enhancements we’ve discussed. Furthermore, while most personal computers or smartphones have a “search” feature, few are as long-lived or as deeply baked into the system as Spotlight. Apple devices already share a common storage format, APFS, and ubiquitous access to information via iCloud. By leading with Focus a couple of years ago, Apple has demonstrated a willingness to take productivity killers like notifications head-on. It is free to do so because its revenue model is based almost entirely on hardware sales, not advertisements. If it goes ahead with introducing a hardware subscription plan, we’ll be even further along, weaning ourselves off of a device- or feature-dependent mentality and moving towards a problem-solving mentality.

Realistically, though, this may be prohibitively expensive for some. Apple devices are not cheap, in part because their distribution is only marginally subsidized by other revenue streams, such as services. I can imagine other means by which we could bring the cost of high-quality hardware down. For instance, instead of selling a user’s information or selling their attention to advertisers, why not re-sell the electricity and internet access they already buy? Most of us plug our smartphones in overnight, leaving them powered on and inactive for any number of valuable hours during the night. What if an OS team that just happened to also control the application runtime layer (looking at you, Google) put those idle hours to good use? We’ve already demonstrated the efficacy of highly distributed, heterogeneous computing clusters through projects such as Folding@Home. Why couldn’t an enterprising software company rent out this kind of scale and pass the cost savings on to consumers? This approach would be improved further if we moved away from centralized data storage models. Tim Berners-Lee is currently working on a new company and set of protocols (called Solid) to put users’ data front and center. It stores that data in things called pods. Why couldn’t we host our pods at home? Could we include web pages in pods, and reclaim the entire architecture of the now-centralized web? If everyone had a pod-approved router running at home and maintaining their data, couldn’t this powerful device work on enterprise workloads during lulls, too?
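Here is a rough illustration of that idle-hours idea. None of this corresponds to a real Android, Solid, or Folding@Home API — the device checks and the job coordinator are invented for the example — but it captures the guardrails such a scheme would need: only borrow cycles when the device is charging, idle, and on an unmetered connection.

```python
# An illustrative sketch of renting out a device's idle hours. The interfaces
# here are invented; a real implementation would hook into the OS job
# scheduler rather than polling, and work units would need to be verifiable.
from dataclasses import dataclass

@dataclass
class DeviceStatus:
    plugged_in: bool
    screen_off: bool
    battery_pct: int
    on_unmetered_network: bool

def eligible_for_work(status: DeviceStatus) -> bool:
    """Only borrow cycles when the user will never notice."""
    return (status.plugged_in
            and status.screen_off
            and status.battery_pct >= 80
            and status.on_unmetered_network)

def run_idle_cycle(status: DeviceStatus, fetch_job, submit_result) -> None:
    """Pull one work unit from a Folding@Home-style coordinator, compute it,
    and send the result back; do nothing if the device is in use."""
    if not eligible_for_work(status):
        return
    job = fetch_job()
    if job is None:
        return
    submit_result(job["id"], job["compute"]())

# Example: overnight, plugged in, on Wi-Fi -- the phone earns its keep.
overnight = DeviceStatus(plugged_in=True, screen_off=True,
                         battery_pct=100, on_unmetered_network=True)
run_idle_cycle(
    overnight,
    fetch_job=lambda: {"id": "unit-42", "compute": lambda: sum(range(10**6))},
    submit_result=lambda job_id, result: print(job_id, result),
)
```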

Conclusion

We won’t succeed in reclaiming our Information Revolution overnight. This is a big project. Furthermore, as Zamyatin said, Revolutions are infinite. Even if we succeed in building powerful systems and becoming great problem solvers ourselves, the job may never be done. We might, at best, contribute to the legacy of problem-solving and leave an example for the next generation to build on, as Bush, Ashby, Engelbart, Berners-Lee, and others have done before us. It might be that the best we can hope for is to get things back on track … for a little while. But we can do it, and we can start today. We can take control of our personal information library and organize it, tending it as one does a garden. We can vote with our wallets — pressuring consumer electronics companies to provide systems that empower us instead of exploiting our attention. We can be intentional in selecting and crafting the workflows we use in everyday life. Lastly, we can dream big. Let’s do it. Let’s get to work.
