The information we publish on the web pales in comparison to the totality of information inside our heads. Companies like Jelly and Quora have jumped on the opportunity to expose that knowledge in a people-powered search engine, but if we take a step back, there’s a more fundamental question to be answered: what about making tools that help us better harness the stuff of our own minds? Can we use technology to think and express ourselves fundamentally more powerfully, and not just to make our lives marginally more convenient (one pizza at a time)?
The rise of software and the web has opened up many more potential dimensions for idea conception and expression. Yet the narrow digital channels we have today — whether self-expiring video or 140-character snippets of text — barely scratch the surface of what’s possible. As far as pushing the limits of what web-connected software can do (for us), we’re still splashing about in the kiddie section of the pool.
This is not to deny the profound emotional significance in the act of experiencing the world through another person’s eyes, or in the value of realtime access to global events. But it would be comically absurd to replace high school literature and writing with training courses for how to express oneself on Twitter, Facebook or Snapchat — because we’d be forgoing opportunities to grow the depth of our thinking. Yet this sort of deprivation is exactly what happens when we only use software to think about and communicate simple things faster and further. In your pocket and on your desk sit devices already brimming with the latent potential for interactive, multidimensional idea expression, but we’ve instead turned them into buckets full of single-purpose, push-button appliances.
Rewind a few hundred years, and the hot technology of the day was the printing press. Like the Internet, it magnified information’s reach by leaps and bounds. But it did more for humanity than spread our existing knowledge further. It also enabled mass literacy, and as communication theorist Harold Innis — a colleague and forebear of Marshall McLuhan — put it:
The art of writing provided [humans] with a transpersonal memory. [Humans] were given an artificially extended and verifiable memory of objects and events not present to sight or recollection. Individuals applied their minds to symbols rather than things and went beyond the world of concrete experience into the world of conceptual relations created within an enlarged time and space universe . . . Writing enormously enhanced a capacity for abstract thinking . . . [Human] activities and powers were roughly extended in proportion to the increased use and perfection of written records.
In other words, the externalization of thought into written form allowed us, as content creators, to think more ambitiously, not just more conveniently. Alan Kay notes in his essay “The Real Computer Revolution Hasn’t Happened Yet”:
The press was first thought to be a less expensive automation of hand written documents, but by the 17th century its several special properties had gradually changed the way important ideas were thought about to the extent that most of the important ideas that followed had not even existed when the press was invented.
Even the more powerful tools in our daily use, like word processors, spreadsheets, and slideshows, are not truly platforms born of the digital age, but incremental evolutions of their analog predecessors: paper, accounting ledgers, and projector slides, respectively. And other, more specialized tools like AutoCAD or Photoshop elude mainstream adoption with their vocational specificity and steep barriers to learning. Just as the early years of film were essentially theater performances staged in front of a stationary camera, so too are we merely recreating pre-digital forms of expression, albeit with enormous differences in speed and reach.
The key to our greater potential is in our roots: humans are innately skilled at creating and using tools. More so than what we eat, or how we dress, or maybe even the company we keep, we are what we create. What we need are better general-purpose software toolkits that delicately pair the highly versatile capabilities of technology with the equally dynamic capabilities of the human mind. Given a set of building blocks with transparent, easily learned, and predictable behavior, we’re able to assemble them in unique and clever ways to achieve results with emergent complexity (often to the astonishment of the kit’s creators). Any such kit of building blocks comes with a unique space of possibilities, and given an environment friendly to experimentation (i.e. when mistakes are painless, they’re no longer mistakes), we will explore its greater depths.
For an illustrative example, one might look to HyperCard. Preceding (and inspiring) the web, HyperCard enabled anyone to create richly interactive programs, ranging from best-selling games to business inventory management systems to the creative explorations of elementary school children.
Rather than seeing a false dichotomy between complexity and accessibility, HyperCard recognized that complexity is not a bad thing. A real guitar is more complex than Guitar Hero, but try telling Hendrix to trade in his Stratocaster, or Basquiat to draw exclusively via Etch A Sketch. When complexity is at your disposal as a means to express your desires, needs, emotions, and ideas, it is sophistication. And the best systems are designed to expose their complexity in easily digestible layers — like an onion, not a knot. They must be simple to start, but grow with you instead of imposing an artificial ceiling in the pursuit of simplicity.
It’s no accident that HyperCard was designed by one of the greatest such system designers — Bill Atkinson, who co-created the Macintosh GUI by iteratively crafting visual metaphors for each and every complex facet of computing. Many of these metaphors are so intuitive as to feel almost obvious in hindsight, but this is a reflection of the elegance, not inevitability, of their design. The result brought the personal computer experience out of the folds of hobbyists and hackers to the mainstream.
Sadly, HyperCard itself rests in a digital grave somewhere in Santa Clara. Despite enjoying an intensely passionate and wide-reaching fanbase, it suffered from bad timing — launching shortly after Steve Jobs’ forced departure in 1985 and, under John Sculley’s reign, being spun off into a confused subsidiary company where it slowly withered away in the decade before Jobs’ return. Under different circumstances, it might have thrived and become one of the most widely used and transformative pieces of software.
The lack of open-ended flexibility like that found in HyperCard is a major contributing factor to the app fatigue that has now undeniably beset us. It’s much easier for developers to create inflexible, single-purpose apps than a Lego kit for individuals to shape around their idiosyncratic needs. Instead of solving this underlying meta-problem, we’ve been trying to compensate with more of them.
Users must be able to tailor a system to their wants. Anything less would be as absurd as requiring essays to be formed out of paragraphs that have already been written.
– Alan Kay, “Computer Software” (1984)
To build an open-ended creative kit is a colossal undertaking — the magnum opus of software engineering and design (not to mention the challenges of marketing horizontal products). It requires more than the deterministic application of rules about colors, shapes, or typography. The designer must sift through the infinitely dimensioned space of possibilities and determine which dimensions should be hidden away from the end user and which metaphorized into user-controllable degrees of freedom.
In many ways, this process is antithetical to the approach taken in recent years with big data and deep learning. Those disciplines focus on enabling machines to operate in a pure sense — independent of human cognitive processes, and therefore also of human idiosyncrasies and biases. If big data and deep learning take inspiration from psychology’s behaviorism, building a creative kit has more in common with the work of the neurologist Oliver Sacks.
Sacks, who sadly passed away last year, studied and relished the unique humanity of each of his neurological patients, rather than eschewing their individual narratives in favor of more generalizable lab data. He saw the mind not as a simple stimulus-response mechanism, but as “an interactive, adaptive, and endlessly innovative participant in the creation of our world.”
The design of creative kits depends on the designer’s ability to see humans not as the squishy, error-prone counterparts to the shiny metal stars of the show, but as a source of incredible, if quirky and non-linear, creative potential in their own right. The design of these systems must fully take into account the peculiar affordances of human cognition, perception, and preconception.
Information designer Edward Tufte taught us the ways we can leverage humans’ uncanny ability to absorb, process, and pattern-match a high density of information — if presented in a form carefully crafted to take full advantage of our millions of years of visual cortex development. A creative kit designer must apply the same design process to an even more multifaceted problem space of interaction possibilities. (To be discussed in more detail in a future post!)
In its earliest days, literacy was exclusively the domain of the elite, who used it to command authority and influence. As we well know, its proliferation to the masses would result in the Enlightenment, spreading reason, thought, and human rights beyond the narrow purview of the few. (It would also, of course, result in the replacement of feudal systems with governments founded purely on the written word.)
In today’s world, AI and robotics continue their steady march on human jobs, leaving economists to scratch their heads about what our future jobs will look like, or whether they’ll exist at all. The economic solution is rather simple, at least in the abstract: increase labor productivity for all. We can shape technology to look more like democratized literacy — amplifying the abilities of the general public, rather than having its full potential occluded from all but the technologists.
Tim O’Reilly put it succinctly: “Don’t replace people. Augment them.”
Big data, deep learning, and burrito-delivering drones have seized the tech sphere’s attention for the moment. But the creative intellect inside every person’s head is incredible in ways that current forms of AI will not come close to matching anytime soon. In our fervor to build machines that can improve themselves, let us not forget about the immense and possibly greater opportunity to create machines that improve us.
These systems aren’t entirely antithetical to human-computer symbiosis. For instance, though the trained internal weights of a deep learning system are not meant for human comprehension, we can make a deliberate effort to build more affordances for human interaction at its outer layers.
Garry Kasparov notes that in the years following his highly publicized defeat by Deep Blue, amateur humans assisted by weak computers were able to dominate purpose-built supercomputers in chess:
In 2005, the online chess-playing site Playchess.com hosted what it called a “freestyle” chess tournament in which anyone could compete in teams with other players or computers…
Lured by the substantial prize money, several groups of strong grandmasters working with several computers at the same time entered the competition. At first, the results seemed predictable. The teams of human plus machine dominated even the strongest computers. The chess machine Hydra, which is a chess-specific supercomputer like Deep Blue, was no match for a strong human player using a relatively weak laptop. Human strategic guidance combined with the tactical acuity of a computer was overwhelming.
The surprise came at the conclusion of the event. The winner was revealed to be not a grandmaster with a state-of-the-art PC but a pair of amateur American chess players using three computers at the same time. Their skill at manipulating and “coaching” their computers to look very deeply into positions effectively counteracted the superior chess understanding of their grandmaster opponents and the greater computational power of other participants. Weak human + machine + better process was superior to a strong computer alone and, more remarkably, superior to a strong human + machine + inferior process.
Further reading for the interested:
Bonnie Nardi’s A Small Matter of Programming
Howard Rheingold’s Tools for Thought
Brenda Laurel’s Utopian Entrepreneur
Bret Victor on the future of programming
Personal computing pioneer JCR Licklider on human-computer symbiosis
In no particular order, thanks to Howard Rheingold, Brian Christian, Shani C. Taylor, Andrea Coravos, Kasra Kyanzadeh, Patricia Li, Matt Bush, kim holleman, Stephanie Engle, Katherine Duh, Zoelle Egner, Kevin Mahaffey (and anyone else I’m forgetting) for their feedback on this piece!