The Science of Being Wrong

Lorand Kedves
10 min read · Nov 9, 2019


IT today is full of contradictions. AI is both a source of hope and a source of fear; programmers are modern magicians, yet can be trained in 3 months; “the future is the IoT”, but hordes of hacked webcams can threaten the Internet; despite the thousands of platforms, languages and solutions, deadline and quality problems are permanent. Hundreds of different augmented text editors are used for development, yet the only global, flexible information management platform is the spreadsheet editor.

It seems that IT focuses on the ever-changing coding tools instead of the real aim: a higher-level, independent and dynamic knowledge representation. But before offering a solution, it is important to ask: has any previous effort been made in this abstract direction?

Following my personal meta-programming experiments and PhD research, I have found the information scientists and engineers of the early decades. These mostly forgotten pioneers, although they worked after WWII, in the shadow of the Apollo program and the Cold War, had greater flexibility and scientific precision; they were not limited by industrial standards, patents or profit targets. They founded today’s IT, and they predicted our information-related individual and social problems too.

For-Profit Science

What would the world look like if we called mathematics “Calculation Technology”, or CT?

If a few global monopolies created incompatible representations for all elements, from numbers and operators to geometry and differential equations? If CT students were taught by introducing them to one or more CPs (Calculation Platforms) instead of the beautiful, generic network behind them all? If their job was to safely (sometimes mindlessly) solve the same problems over and over with constantly changing tools and platforms?

Maybe, every second year, a new CP would appear, promising to be the best, and winning the loudest critics of the previous one as its fans. Maybe, there would be

  • efforts to create “Cross-Platform” tools (which are in fact platforms themselves, narrowed to a specific area and competing with others of the same kind);
  • cults following different CMs (Calculation Methodologies), each promising to solve the overall communication, quality and management problems;
  • an increasing need for hastily trained “calculators”, whose experience would be worthless because of the quick changes in the current platforms;
  • an increasing “technical debt”: errors (or even intentional fake results) lurking in the basement, making the whole field of CT less solid every year.

Maybe, it would become a kind of religion: a promise that a super-calculator will emerge and solve all problems; and a fear of… a super-calculator that could solve all problems, but what would it really do?

This article does not answer the question, because
1) the answer should be obvious to the reader: it is demonstrated by the fate of IT (Information Technology) today, and
2) it has been thoroughly analyzed by one of those unsung heroes, Douglas Engelbart. Distorting our symbol-manipulating tools this way drastically decreases our ability to describe, understand and deal with problems beyond the basic level (raising the question whether it is worth working on them at all). In his brilliantly simple experiment, it is like trying to work in an organization where you can only write with a pencil tied to a brick. [1]

Unsung Heroes

The following section gives a short summary of “information science” history through the aims, contributions and predictions of outstanding researchers. When analyzing our present and inventing our future, their results should be considered at least as much as those of the far better known, successful figures of information technology and business.

Vannevar Bush truly had an outstanding role in our history: he was the head of the Allied scientists during World War II, when many technologies we use today were initiated. In July 1945 (just after the war ended), he outlined the next major task ahead of mankind in the article As We May Think [2]. The aim is to improve how we collect, organize and interact with our knowledge, because scientific experiments and results appear in such volume that even fellow researchers cannot keep up with the progress. That puts a fundamental limit on all research activities and can lead to losing our ability to improve. He coined the name “Memex” (memory extension): a system that enables direct personal interaction with the collected knowledge of humanity. As information scientists, we could think of this article as our constitution: it starts with a great, abstract need, provides an answer that was science fiction at the time, and shows how realistic improvements can lead to it if we keep the aim in sight.

J. C. R. Licklider led a US national research project, conducted in 1961–1964, on how this “Memex” would operate in the year 2000, both influencing the development of information technology and using its achievements. Originally, Licklider was a psychologist who examined human-machine interaction. He was chosen for his earlier pioneering research, and he knew how to motivate the more conservative engineers out of their comfort zone and into groundbreaking inventions. For example, setting the aim (in 1963!) of creating an “Intergalactic Computer Network” paved the way to our global Internet.

The result is a book, Libraries of the Future [3]: a thorough analysis ranging from the structure and amount of “solid knowledge” (down to the estimated number of characters) to the required speed and features of the supporting systems and human interfaces, including the Internet and all elements of a modern GUI.

Douglas Engelbart dedicated his life to a single purpose: to collect and invent all the technologies required to materialize the information system supporting human intellect and cooperation. His aims were set out in Augmenting Human Intellect [1], an outstanding scientific article. In the first part, he presents an abstract model of individual and cooperative human reasoning (quite like the von Neumann architecture for computers): its components, its improvement, the effect of conscious modifications, and the use of computers as an augmentation tool. Then he gives a theoretical demo and a multi-level research plan.

During the following years, his team created the NLS (oN-Line System), the subject of the “Mother of All Demos” [4] in 1968. It is like time travel: a live videoconference over shared editing of the same document, where they smoothly switched among specifications, high-level architecture and direct machine code. He also explained how, in their experiments with the computer mouse he had just invented, the human mind adapted to controlling the “bug” (mouse pointer) by following the view instead of the position or movement of the hand.

His vision was the A-B-C development process: A, the actual operation of any organization, is supported by an on-line information system, which allows B, the continuous analysis and improvement of that operation based on real data. The elements of B are basically the same regardless of A (solving communication issues, managing organizational hierarchies, supply chains), therefore there is a C level above, where such global “improvements of improvements” can be made. Details can be found in many lectures, like the Boosting Collective IQ presentation [5].
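
One way to read the model is that a B-level improvement is an operation over A-level activities, and the C level operates on the B-level improvements themselves. The following Python sketch is only my hypothetical illustration of that layering; the names and the trivial record-keeping example are mine, not Engelbart’s:

    from typing import Callable

    # A: the organization's actual work, whatever it is.
    Activity = Callable[[str], str]

    # B: an improvement takes an activity and returns a better activity.
    Improvement = Callable[[Activity], Activity]

    def record_keeping(activity: Activity) -> Activity:
        """A B-level improvement: wrap any A-level activity so its real
        data is captured on-line for later analysis. Note that it does
        not depend on what the activity actually does."""
        def wrapped(task: str) -> str:
            result = activity(task)
            print(f"record: {task!r} -> {result!r}")  # the on-line record
            return result
        return wrapped

    # C: because B-level improvements are independent of any concrete A,
    # they can be collected, combined and improved themselves, and shared
    # across organizations ("improvements of improvements").
    def compose(*improvements: Improvement) -> Improvement:
        def improve(activity: Activity) -> Activity:
            for step in improvements:
                activity = step(activity)
            return activity
        return improve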

Theodore Nelson focused on personal knowledge management and presentation. His project, Xanadu, is a way to manage one’s own presentation, built on the insight that a presentation always reuses and reorganizes other (text or media) sources and adds some new content. Therefore, it is essential to be able to walk references in both directions, to see how an idea came about and where it led others. The approach also included copyright and micro-payment management; it came close to realization but finally lost to a simpler solution, the one we know today as the World Wide Web.
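
To make the two-way references concrete, here is a minimal Python sketch of such bidirectional reference management. Every name here is a hypothetical illustration; the real Xanadu design (xanalogical storage, tumbler addresses, royalty accounting) is far richer than this toy model:

    from dataclasses import dataclass, field

    @dataclass
    class Span:
        doc_id: str   # which source document is reused
        start: int    # character offset in the source
        length: int   # how much of it is transcluded

    @dataclass
    class Document:
        doc_id: str
        author: str
        new_content: str                                   # newly added text
        sources: list[Span] = field(default_factory=list)  # reused material

    class Registry:
        """Keeps every reference bidirectional, so a reader can walk back
        to the sources an idea came from, and forward to every document
        that reuses this one."""

        def __init__(self) -> None:
            self.docs: dict[str, Document] = {}
            self.reused_by: dict[str, set[str]] = {}

        def publish(self, doc: Document) -> None:
            self.docs[doc.doc_id] = doc
            for span in doc.sources:  # register the reverse links
                self.reused_by.setdefault(span.doc_id, set()).add(doc.doc_id)

        def sources_of(self, doc_id: str) -> set[str]:
            """How did this idea come about?"""
            return {span.doc_id for span in self.docs[doc_id].sources}

        def reusers_of(self, doc_id: str) -> set[str]:
            """Where did it lead others?"""
            return self.reused_by.get(doc_id, set())

The design point is that publishing a document registers the reverse links automatically, so neither direction of a reference can be lost; a copyright or micro-payment scheme could then be attached to each transcluded Span.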

He is a great presenter, as the many lectures available online prove, and a visionary scientist. His book Computer Lib [6] inspired the first generation of computer enthusiasts; his disruptive thinking is well illustrated by how he explained our modern computer interaction [7] in 1979, when the term “personal computer” was mere fiction due to the price, size and performance of actual machines.

Neil Postman focused on the changes in the human mind and in the operation of communities that every major technological improvement has created in the past and is creating in the present. First he analyzed the effects of mass media and television; later, in several books like Technopoly [8], he turned to how we use computers as a medium and how that changes our vision, cognition and interaction.

He was known as a “Luddite”, though he only proposed a conscious debate, with arguments focusing not only on the assumed benefits but also on the possible dangers of this radical step. Just like all human beings, information scientists can be blinded by their profession and power; therefore, it is very important to respect and listen to wise outsiders who ask critical questions. We should seriously consider his 6 (later 7) questions about integrating new technologies into our communities and individual lives; and our world might be a better place if more people knew his graduation speech about Athenians and Visigoths [9].

Being Wrong in a Smart Way

In the public opinion, a researcher is a perhaps slightly antisocial person who, after a few years of self-induced sleep deprivation, becomes a global phenomenon whom everyone will understand and admire. However, in a highly advanced technological civilization like ours, researchers are people obsessed with the idea that an “obvious truth” is wrong, who are open to devoting their whole lives to learning how and exactly why we think and act that way, and who, based on this knowledge, look for a better answer. Researchers accept the high chance that all their efforts will go in the wrong direction and give nothing more than experience to even smarter or luckier researchers. Or, more likely: their lifelong work will become a collection of misjudged partial results from which a better understanding and a loud success will come, for others. Most true research, regardless of the highest scientific precision, is wrong; yet it is the only way to give a breakthrough a chance.

In the public opinion, academia is the “source of truth”. But the academia that created our world today, and that would be essential to keeping our civilization running, is not the solid institution we like to imagine. It is an ambivalent organization that on one side gives precise answers, but immediately questions them, because today’s truth is quite likely wrong, just like all the previous best answers until now.

In the public opinion, IT is operated by immortal giant companies, yet it was invented in mysterious garages by a few famous individuals. In fact, Vannevar Bush realized that not only can our current answers be wrong, but so can the very way we store and improve them, and that this stops us from going further. J. C. R. Licklider gave a precise prediction of a global, personal, interactive information system that must replace our static, paper-like knowledge management (and of the possibility of misunderstanding it). Douglas Engelbart presented a thorough analysis of the tools and methods of intellectual augmentation, demonstrated how they have been affecting us for centuries, and showed how they can be improved exponentially by using computers as a thinking and communication medium. Ted Nelson showed how effective this can be, while Neil Postman explained that consciousness and a critical approach are essential when integrating any new medium, on both the individual and the community level. All of them, with solid academic backgrounds and in professional research labs, analyzed the many ways of being wrong and how to improve on them.

Summary

Whether or not we understand it, or even know about it, the very foundation of our information era is knowing and considering

  • the inevitable limits of our understanding;
  • the importance of the knowledge handling tools we create, use and improve;
  • our individual and social sensitivity to them.

Therefore we need, and our future depends on, informatics: the “science of being wrong”. It gives us hope and a way to gradually become better over time, instead of sinking into the strongly motivated illusion of being perfect now.

Most of the fundamental IT researchers focused on scientific and technical aspects and were quite skeptical about the human factor. Douglas Engelbart, who worked the most on the human side of intellectual augmentation, perhaps chose an idealistic approach and skipped some very human points:

  • To focus on improvement, we first have to accept that we were wrong. Today the first question is whom to blame, so it is better not to ask “bad” (that is: real) questions.
  • To fix an error, we first have to want to be good. Today you must make more profit than the competition, so any known error is OK until someone takes the risk and cost of solving it.

We are a civilization with a global attitude of “better be dead than look bad”, and that can easily become our fate if we follow people like…

Facebook’s first president, Sean Parker, has been sharply critical of the social network, accusing it of exploiting human “vulnerability.” “God only knows what it’s doing to our children’s brains,” he said. His comments are part of a wave of tech figures expressing disillusionment and concern about the products they helped build.

Business Insider, 2017 [10]

While making their fortunes, they did not have to know or consider that this effect had been precisely predicted by scientists like Neil Postman, and that the exploitation is evident from the results of Albert-Laszlo Barabasi. But we can also choose to reconstruct and continue the still available heritage of true researchers, who gave us predictions like the following (quotes highlighted by the author):

“… the “system” of man’s development and use of knowledge is regenerative. If a strong effort is made to improve that system, then the early results will facilitate subsequent phases of the effort, and so on, progressively, in an exponential crescendo. On the other hand, if intellectual processes and their technological bases are neglected, then goals that could have been achieved will remain remote, and proponents of their achievement will find it difficult to disprove charges of irresponsibility and autism.”

Libraries of the Future, 1965 [3]

References

[1] D. C. Engelbart, “Augmenting Human Intellect: A Conceptual Framework,” 1962. [Accessed 29 Nov. 2017].

[2] V. Bush, “As We May Think,” 1945.

[3] J. C. R. Licklider, Libraries of the Future, Cambridge, Massachusetts: MIT Press, 1965.

[4] D. C. Engelbart, “The Mother of All Demos,” 1968.

[5] D. C. Engelbart, “Boosting Collective IQ.”

[6] T. H. Nelson, Computer Lib: You Can and Must Understand Computers Now, 1974.

[7] T. H. Nelson, “Interview with Max Allen of CBC,” 1979.

[8] N. Postman, Technopoly: The Surrender of Culture to Technology, Vintage, 1993.

[9] N. Postman, “My Graduation Speech.”

[10] S. Parker, quoted in Business Insider, 2017.


Lorand Kedves

Born in 1973, started both programming and digging into philosophy around the age of 12. Has been working on “the real AI” since 2000. For more, see http://bit.ly/lkedves_Disclaimer