The Mother of All Demos 1968–2018 : Boosting our collective capability in coping with complex, urgent problems
The text below mirrors a live demonstration of else given by Ted Hunt at Somerset House Studios on 12/Dec/2018, the 50th anniversary of The Mother of All Demos. The 30-minute demonstration was split into a triptych of acts: Yesterday, Today, and Tomorrow. The core else Demo begins in the section Tomorrow below. Yesterday acts as a brief history lesson and contextualisation; Today acts as a critical perspective on our current computational paradigm.
This demo will be complex, deliberately and knowingly complex. So complex that even, as its author, I don’t fully grasp what I am attempting to argue. It is complex because the world is complex. And it is complex because it is critical that we begin to understand, and stand humble in front of, the difference between complication and complexity.
History tends to be written by those who prevail. Technological history is no different. In tracing the popular history of modern computing we are often guilty of omitting key moments and individuals.
A Brief History of Technological Tomorrow
If we consider Douglas Engelbart’s 1968 public presentation as The Mother of All Demos then we should consider Ada Lovelace as The Mother of The Mother of All Demos. For it was Lovelace who envisaged that Charles Babbage’s mechanical general-purpose computer, the ‘Analytical Engine’, might be capable of far more than pure calculation, that it could be programmable, and that as such a computer “might act upon other things besides numbers.” From these roots the gradual progression of computing began. Like all technologies, computation was not a purely benign tool: during World War II the use of computers by the Manhattan Project ultimately resulted in the atomic bombings of Hiroshima and Nagasaki, killing 129,000–226,000 people. It was this act that led Vannevar Bush, head of the U.S. Office of Scientific Research and Development (OSRD) during WWII, to call for technology for understanding rather than destruction in an impassioned article published in July and September 1945 entitled As We May Think. And it was this exact article that was read by a young Douglas Engelbart in 1945 while he was serving as a Navy radar technician on a small Philippine island, in a tiny hut on stilts. Dr. Bush’s call for ‘a new relationship between thinking man and the sum of our knowledge’ significantly influenced Engelbart’s world-view, and on returning to America after the war Engelbart authored his guiding philosophy, reasoning through four steps that ‘computers could be the vehicle for dramatically improving’ his own ability, and our collective ability, to ‘make the world a better place’. A key observation here is that computers would be a means to a specifically egalitarian and nonviolent end, rather than merely a means of ‘progress for the sake of progress’ or ‘progress for economic growth’.
From this guiding philosophy, written in 1950, Engelbart authored a pragmatic vision of computing, published in the seminal 1962 paper ‘Augmenting Human Intellect: A Conceptual Framework’. From this paper he turned to realising the vision through a single integrated computational system embodying his ambition for a computer that would “boost mankind’s capability for coping with complex, urgent problems”. It was this computational system that was demo’d on 9 December 1968, at The Mother of All Demos, a 90-minute presentation that demonstrated almost all the fundamental elements of modern personal computing.
It is the subsequent events from this demo that initiated modern computing as we know it. The demo was attended and enthusiastically received by Butler Lampson, who went on to help found Xerox PARC in 1970, where many of the hardware and software inventions were adapted and iterated. It was Xerox PARC that Steve Jobs visited in 1979, immediately re-appropriating the fundamental concepts into Apple’s Lisa and Macintosh computers, launched in 1983/84. Incidentally it was from The Whole Earth Catalog, published by early Engelbart collaborator Stewart Brand, that Jobs re-appropriated Apple’s pragmatic brand image, along with his own personal mantra to ‘Stay hungry. Stay foolish.’ A year later, in 1985, Microsoft launched Windows, again a direct re-appropriation of Engelbart’s innovations, affording it the ability to dominate 90% of the personal computer market. Then in 2008 Google launched the smartphone OS Android, which by 2011 had become the dominant global device OS. And now, in late 2018, we live in a world of over 5 billion ‘users’, where the very interpretation of what a user is has mutated dramatically since 1968.
What Ada Lovelace, Vannevar Bush, Douglas Engelbart and Stewart Brand all had in common was an appreciation of the difference between complication and complexity. Yet it was complication that won out in the progression of computation. And it is the limitations of linear complication that still define our understanding of our most urgent problems.
If we consider The Mother of All Demos as an acorn that grew into a multi-branched tree then we would find a strong trunk and many healthy branches. The fundamental hardware and software first introduced by Engelbart formed the basis for what are now ubiquitous household items. The affordances and adaptations of computation have saturated almost all industries, activities and media formats. The branch that saw the least progress and consideration, however, would be the very vision that drove Engelbart’s work: the ability for computers to boost our collective capability for coping with complex, urgent problems. We have made many advances in computation, but if we look objectively at the problems humanity faces, the current solutions we have in place, and the overwhelming urgency of those problems, then computation seems yet to have delivered on Engelbart’s post-atomic-bomb vision for a better world and an increased humanity.
And so it is this branch, and its associated complexities, that I now specifically wish to revisit.
If we were to return to Douglas Engelbart’s original vision to ‘boost our collective capability for coping with complex, urgent problems’, what comparative and equivalent computational attributes might we imagine? Here I will outline my own interpretation of five emerging and speculative fields of computation that might steer us back towards, rather than away from, Engelbart’s original vision.
1. Intelligence Augmentation (IA)
Today, in the technologically developed world, we have seemingly accepted that Artificial Intelligence (AI) is no longer just an ambition or an imaginary; it is an inevitability and an actuality. As the media, industry and politicians rush to hype and accelerate the coming age of AI, a key inversion still stands in juxtaposition to the rise of AI. That inversion is IA, or Intelligence Augmentation, as originally defined by Engelbart.
The pyramid of knowledge that has enabled our adaptation and survival for millennia has now seemingly been replaced with a single notion that simplifies the complex interdependencies of how we obtain collective knowledge and wisdom. That notion is the tech colloquialism of ‘smart’. If something is smart it is now deemed to encapsulate all elements of the pyramid with little question. We are creating smart technology, smart systems, smart cities, and hiring smart people to work for smart companies. Smart has gone from being a dress-code to a universal description of what it means to be human in the space of just a few years.
We now inhabit a society where we don’t even question the paradox of ‘teaching machines how to learn and children how to code’. A state-of-the-art quantum computer of 2,000 qubits retails at $15,000,000, yet we are each born with an embodied quantum computer of unknown qubit capacity. The human brain is arguably the single most sophisticated technology we know of in the universe, yet we seem to be giving up on its potential before we have even begun to realise it.
The critical question we need to ask ourselves, in my view, is this: is technology making machine more like man, or man more like machine? Or, put another way, are we in a utopian ascent to a world where AI exceeds our own limited intelligence, or a dystopian descent into a kind of ‘mean intelligence’? And if so, might an urgent pivot in how we consider, create and use technology allow for the symbiotic coevolution of man and machine that Engelbart envisaged?
else is an alternative search engine platform built on the belief that our technological tomorrow lies in yesterday’s received wisdom. Here that is exemplified by the received wisdom of the Ancient Greek philosopher Socrates, paired with Engelbart’s conceptual framework for Augmenting Human Intellect.
The affordances of this means of augmented web search can be illustrated through the differing perspectives given on Google’s most-asked ‘what is’ question of 2015: ‘what is love?’
2. Dialectic Design
As Scotty shows us, human-computer interactions have already come a long way since the initial input devices: keyboard, then mouse, then touchscreen, and now voice command.
Professor Richard Buchanan identifies Dialectic Design as the 4th Order of design and a direct interface with our thoughts. Technologists and designers across the world are now working to advance the field of dialectic design.
The current paradigm of dialectic design has arguably become stunted in ‘call and response’ interactions. Technology’s current interpretation of our questions favours resolution far more than discussion.
Put simply, an information retrieval model such as a web search engine transforms a query into a resolution. The question is: what drives this transformation? Completing a Google image search on the term ‘beauty’ immediately highlights the critical limitations of query-resolution dialogues. The search renders endless results of a very specific interpretation of a particular type of beauty, that is: human + female + young + mostly caucasian + of a specific body type + contained to the face + mostly exhibiting the explicit application of beauty products.
The enduring reality of dialogue is that it has never been a matter of simple call and response. Human dialogue has always manifested and flourished as an exploratory and mutual exchange. And so we should reflect this in our definition of dialectic design and of dialogue-based exchanges with technology. Rather than allow prevailing paradigms to define singular resolutions (such as a definition of ‘beauty’ by the global cosmetic industry) we should allow for plural discussion and conversation. We should question our questions, and question our own, and others’, answers. We should even remain sceptical that there is such a thing as a singular objective ‘answer’.
And so every search query made upon else first returns an opening discourse, and then related extended discourse queries intended to directly begin the process of exploratory discussion. As such the search engine acts as a means of searching for meaning rather than simply a means of searching for answers.
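The shape of such a discourse-first exchange can be sketched in code. This is purely an illustrative sketch: the function name, the response structure, and the question templates are my own assumptions for the purpose of the argument, not part of else’s actual implementation.

```python
# Hypothetical sketch of a discourse-first search: the query is met with an
# opening discourse plus extended discourse queries, not a list of answers.
# Every name here is an illustrative assumption, not else's actual API.

def discourse_search(query, extended_templates):
    """Return an opening discourse and follow-up queries rather than answers."""
    opening = f"What do we each mean when we ask '{query}'?"
    # Extended discourse queries invite further exploration rather than closure.
    follow_ups = [template.format(query=query) for template in extended_templates]
    return {"opening_discourse": opening, "extended_queries": follow_ups}

result = discourse_search(
    "what is love",
    extended_templates=[
        "Who is asking '{query}', and why now?",
        "What would Socrates ask in response to '{query}'?",
        "What does '{query}' assume that might itself be questioned?",
    ],
)
```

The design choice the sketch illustrates is simply that the return value is itself a set of questions: the dialogue stays open rather than resolving.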
3. Distributed Access Modes
The age of centralised systems is now being challenged. A decentralised web, or DWeb, is currently being built as an alternative to the monopolies of Silicon Valley. And peer-to-peer networks and blockchain technologies are driving entirely new distributed infrastructures. What if the same distributed notions were applied directly to human-computer interfaces and access modes?
else is a platform rather than a technology: a platform that enables open and limitless interpretations of how web search queries are transformed. Each such interpretation forms part of a growing spectrum of Modes of Thinking.
Examples of such Modes of Thinking might draw upon existing mental models, belief systems, problem solving methodologies, proverbs or even quotes.
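One way to imagine such a platform is as an open registry of pluggable query transformations, one per Mode of Thinking, to which anyone may contribute. The sketch below is hypothetical: the registry pattern and the two example modes are my own assumptions for illustration, not else’s actual architecture.

```python
# Hypothetical sketch: Modes of Thinking as pluggable query transformations.
# The registry pattern and mode names are illustrative assumptions only.

MODES_OF_THINKING = {}

def register_mode(name):
    """Register a transformation function as a named Mode of Thinking."""
    def decorator(func):
        MODES_OF_THINKING[name] = func
        return func
    return decorator

@register_mode("socratic")
def socratic(query):
    # A mode drawn from received philosophical wisdom.
    return f"Before answering '{query}', what do we already assume it means?"

@register_mode("no rose without a thorn")
def sceptical(query):
    # A mode drawn from proverbial received wisdom.
    return f"What is the thorn hidden within any answer to '{query}'?"

def search(query, mode):
    # The chosen mode, not a single prevailing paradigm, shapes the response.
    return MODES_OF_THINKING[mode](query)
```

Because the registry is open, the spectrum of modes can grow in exactly the distributed, contributor-driven way the decentralised web proposes.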
The intended augmentation of this affordance would be to allow individuals to increasingly transcend mental states, with particular focus on breaching the limitations of the dominant W.E.I.R.D (Western, Educated, Industrialised, Rich, Democratic) mental state discussed by Yuval Noah Harari in Homo Deus.
One such example would be a search engine based upon the ancient Afghan proverb ‘No rose is without a thorn’. The proverb has been passed down through generations to encourage a level of appropriate scepticism. Returning to the earlier pyramid of knowledge, it can be argued that such proverbs sit near the top, acting as received wisdom rather than mere information. As such, transferable lessons can be taken across cultures and across definitions of what a technology is. The West’s struggle to adapt to and cope with Fake News, Conspiracy Theories, and Media Bias is now beginning to be seen as a failure to popularly apply the very same appropriate scepticism embedded within this ancient proverb.
4. Transparent Transformation
Computation, like a search engine, is driven by its ability to transform an input into an output. Since the dawn of personal computing most individuals have been completely unaware of what drives such transformations. In an age when we increasingly depend on computation for ‘decision making’, knowing what is driving the information it gives us is going to be not only desirable but essential.
As such, else enables individuals to access and amend the transformation methodologies that drive each Mode of Thinking. else doesn’t propose to be an unbiased technology, for every technology is biased; it simply proposes to transparently wear its biases on its sleeve, and further to allow individuals agency over deciding what those biases should and could be.
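In code, a transparently biased transformation might hold its biases as plain, inspectable data that the individual can read and amend, rather than as hidden logic. The following sketch is hypothetical; the class, the bias tags and the weights are all assumed for illustration and are not drawn from else itself.

```python
# Hypothetical sketch of a transparently biased transformation: the weights
# that shape results are inspectable and amendable by the individual.

class TransparentTransformation:
    def __init__(self, biases):
        # Biases are held as plain, readable data rather than hidden logic.
        self.biases = dict(biases)

    def explain(self):
        """Wear the biases on the sleeve: report what drives the ranking."""
        return dict(self.biases)

    def amend(self, name, weight):
        """Give the individual agency over what the biases should be."""
        self.biases[name] = weight

    def rank(self, results):
        # Score each result by the declared biases its tags match.
        def score(result):
            return sum(w for tag, w in self.biases.items() if tag in result["tags"])
        return sorted(results, key=score, reverse=True)

# An individual inspects the default (commercially weighted) biases,
# then amends them to prefer plural perspectives instead.
t = TransparentTransformation({"commercial": 0.9, "plural": 0.1})
t.amend("commercial", 0.0)
t.amend("plural", 1.0)
ranked = t.rank([
    {"title": "Cosmetics advert", "tags": ["commercial"]},
    {"title": "Beauty across cultures", "tags": ["plural"]},
])
```

The point of the sketch is only that the transformation is semi-transparent by construction: its biases are data, and amending them visibly changes what prevails.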
The result of such a revolution in computational logic might initiate the beginnings of an age of transparent transformation, or at least of semi-transparent rather than entirely opaque transformation.
5. Nonlinear Paradigms
Finally, and most critically, we might begin to envisage departing from our current linear paradigm. Within mathematics linearity equates to a straight line, usually drawn through a mapped set of data points. Nonlinearity, by extension, is anything but a straight line.
Our current paradigm has consistently fallen into the interpretation of binary oppositions: things are either one thing or another. In structuralism a binary opposition is seen as a fundamental organiser of human philosophy, culture and language. Media, politics, technology, entertainment and consumerism all rely upon binary oppositions to justify a linear paradigm.
To decouple binary paradigms we might begin to fragment, juxtapose and mirror opposing ideas through a “third order” that mediates between opposites. Here we unpack a third order of the spectrums between what is traditionally seen as the known and the unknown, into a matrix rather than a binary.
Such a model might then logically lead to a search engine based upon the Johari Window, a model popularised by former United States Secretary of Defense Donald Rumsfeld in his use of the phrase “there are known knowns”. When applied to a search upon the term ‘Brexit’ we might now be able to find new meaning and reasoning in the Open Arena (Known Knowns), Hidden Facades (Known Unknowns), Blind Spots (Unknown Knowns), and a conscious appreciation of the Unknown (Unknown Unknowns).
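Such a Johari Window result matrix could be sketched as a simple four-quadrant classification, returning a matrix of results rather than one linear list. The quadrant labels follow the mapping above; the `known_to_self` / `known_to_others` inputs, and the example results, are illustrative assumptions about how a result might be classified.

```python
# Hypothetical sketch of a Johari Window result matrix: each result is placed
# in one of four quadrants instead of a single ranked list. The inputs used
# to classify results are assumed for illustration.

def johari_matrix(results):
    """Sort search results into the four quadrants of the Johari Window."""
    quadrants = {
        "Open Arena (Known Knowns)": [],
        "Hidden Facades (Known Unknowns)": [],
        "Blind Spots (Unknown Knowns)": [],
        "Unknown (Unknown Unknowns)": [],
    }
    for result in results:
        if result["known_to_self"] and result["known_to_others"]:
            key = "Open Arena (Known Knowns)"
        elif result["known_to_self"]:
            key = "Hidden Facades (Known Unknowns)"
        elif result["known_to_others"]:
            key = "Blind Spots (Unknown Knowns)"
        else:
            key = "Unknown (Unknown Unknowns)"
        quadrants[key].append(result["title"])
    return quadrants
```

Note that the fourth quadrant is, by definition, always empty of retrieved results; its presence in the matrix is what keeps the Unknown Unknowns consciously in view.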
Finally let us return to the appreciation of complexity over and above complication, with the help of contemporary philosopher Slavoj Zizek deconstructing Rumsfeld’s use of the Johari Window as such: “what you DON’T KNOW that you KNOW controls you but you don’t control it”. It is here, in our blindspots, that we may eventually find a means to come to understand and cope with complexity. Or indeed it is here, in our blindspots, that we have fallen prey to the cynical manipulation of current technology, employed to industrialise the extraction of data generated by our human-computer interactions purely for financial profit.
A Post Script
Where all this leaves us, other than in complex confusion, is in many ways where we started, with Douglas Engelbart’s 1968 Mother of All Demos. Reflecting on why Engelbart’s vision never came to fruition, early collaborator Alan Kay summarised: “Engelbart, for better or for worse, was trying to make a violin…most people don’t want to learn the violin.” Our aspirations to make, learn and play the technological equivalents of violins shouldn’t be curtailed by the consistent simplification of complexity to complication that we currently live within. Writer and artist James Bridle recently captured the juncture we now stand at with computers, a juncture that will be decided in the difference between asking questions and receiving answers.
According to the Smithsonian Institution, early humans began using tools 2.6 million years ago in the form of unprocessed stone hammers and sharp stone flakes. It took a full 1 million years (some 30,000 generations) for humans to realise that they could sharpen the basic stone hammer into a hand axe, and another 1.4 million years before we thought to put a handle on the hand axe. Once we had paired the stone axe with an axe handle it was then merely a few hundred thousand years before we stood in front of the steam engine.
The question now remains how long will it take us to get a handle on the full and true potential of computers?