How to Humanize AI with Abstraction
Only the Social Singularity Will Enlighten the Darkness

“The most important problem for AI today is abstraction and reasoning.” — Francois Chollet, AI researcher at Google, quoted in Understanding the Limits of Deep Learning, April 2, 2017
Turing-test-passing Artificial Intelligence (AI) is already here in various forms, and it will continue to evolve and be a boon for the tech sector, but we still have a long way to go. The AI Roadmap Institute outlines 29 unsolved problems in AI, many of which could be addressed with better abstraction. AI is simultaneously being developed for purely profit-squeezing, exploitative purposes and weaponized for war and law enforcement (as has been the case since the introduction of the ignominious, oxymoronic “smart bombs”). A chorus of hype is drowning out the few critical voices that could help shape the peaceful manifestation of AI and develop its commercial and educational forms.
That’s where we — The Abs-Tract Organization — come in. Abstraction is the central concept in computer science. I generally define abstraction as “a conceptual process of complexity reduction that highlights the essential properties or first principles of a given object or idea,” while in computer science it takes more specific forms, referring to the nesting and ordering of information. Programmers will not shut up about it, but virtually all of that conversation is bounded by its own technical terminology and therefore cut off from philosophical abstraction. The core principle is the same, but there needs to be more consilience between abstraction in AI and the abstraction of critical thinking. We must bridge the conversation about abstraction between computer science and philosophy in order to humanize AI. Self-driving cars will save millions of dollars and lives, but do we know where we’re going?
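To make the computer-science sense of the word concrete, here is a minimal Python sketch of my own (the function name and numbers are purely illustrative): the caller thinks at the level of the concept, “average,” while the mechanics of summing and counting are hidden behind the function boundary. That hiding of inessential detail is complexity reduction in miniature.

```python
def average(numbers):
    """Abstract away the mechanics of summing and counting."""
    total = 0
    for n in numbers:
        total += n
    return total / len(numbers)

# The caller works with the essential idea, not the loop.
print(average([3, 4, 5]))  # 4.0
```

The same move, zooming out from mechanism to meaning, is what philosophical abstraction does with ideas instead of numbers.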
“I want to suggest that if a real artificial intelligence (AI) is going to be built, sociologists will have to play a major part in it.” — Randall Collins, Sociological Insight, 1992
The AI industry should emphasize and integrate existing knowledge from sociology in order to know what to teach and program AI to think and do. Simultaneously, programmers and policy makers alike need to learn how to abstract different types of content, not merely their own discourse, in order to come to the right consensus. As I’ve argued in The New Reproach of Abstraction, this process is blocked and forestalled by a rebuking of complexity thinking, which extends to reproaching sociology as well, and takes various anti-intellectual forms.
How can we create artificial intelligence if we haven’t even mastered intelligence? As we’ve already seen with Microsoft’s Hitler-loving chatbot Tay, if AI takes its cues from public discourse, it’s going to be evil. It needs to somehow be smarter than us, so that it is smart enough not to destroy us, as humans are prone to do. Imagine a conflict-resolution AI that could synthesize a debate between Noam Chomsky and Sam Harris, to such an extent that they would both concede points and find consensus. Deep Blue is feeble-minded compared to such an AI.
But we can hardly program AI to reconcile our ideological spats if we don’t even understand them ourselves. That’s why basic analogue abstraction has to be mastered first. Abstraction is not yet explicit enough in education and think tanks to catalyze the dramatic shift in perspective that it implies. The above quotes by Francois Chollet and Randall Collins indicate where the necessary innovation lies (particularly with abstraction and sociology), and the type of think tank The Abs-Tract Organization strives to be: one dedicated to understanding abstraction as a varied but universal cognitive problem-solving process, to help humanize AI and solve all social problems abstractly. The flow diagram below is a simplification of the higher efficiency of abstract problem solving.

Abstraction in Computer Science
“In AP Computer Science Principles, I learned the importance of “abstraction”, meaning to break a challenge into manageable pieces, and have applied this concept into various aspects of my studies and my life outside of the classroom.” — Adrian Avalos, from National City, California, [Source: the (Obama) White House blog]
When taught well, the concept of abstraction is very interdisciplinary. As the above quote suggests, teaching it in computer science (CS) can enhance learning in other domains. More than any other discipline, CS dominates the science of abstraction, so it’s important to reverse-engineer what we can back into philosophy and critical thinking. This is the essence of cross-training. Here’s a 27-lecture series on Programming Abstractions from Stanford to get you started. The more serious exports come when you break ‘abstraction’ down into its constituent concepts, such as heuristics, compression, ordering, simplification, and mapping. Some programming examples taken from the Stanford lectures include: ‘algorithms (sort/search/hash), dynamic data structures (lists, trees, heaps), data abstraction (stacks, queues, maps)’.
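Data abstraction, the last item on that list, is worth unpacking with a toy sketch (my own, not from the Stanford lectures): a stack exposes only push, pop, and peek, while the representation underneath stays hidden. Users reason about the abstract behavior, not the storage.

```python
class Stack:
    """Data abstraction: callers see push/pop/peek, not the list underneath."""

    def __init__(self):
        self._items = []  # hidden representation; could be swapped out freely

    def push(self, item):
        self._items.append(item)

    def pop(self):
        return self._items.pop()  # last in, first out

    def peek(self):
        return self._items[-1]

    def is_empty(self):
        return not self._items

s = Stack()
s.push(1)
s.push(2)
print(s.pop())   # 2
print(s.peek())  # 1
```

The pedagogical point is that the interface is the abstraction: you could replace the list with a linked structure and no caller would notice.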
The common thread between CS and philosophical abstraction is essentially ‘computational thinking,’ which can be thought of as higher-order critical thinking, although not a substitute for it. Recent literature indicates a new surge in computational thinking and a push for it to enter education curricula. In Reflective Abstraction in Computational Thinking (2017), Cetin et al. argue that abstraction is accepted as the central concept in computational thinking, but that there is still disagreement over what it is. They present a case study of ‘reflective abstraction’ (from Piaget) that improved student learning outcomes in math. In that paper they consider three types of abstraction: extraction (as in math), decontextualization (as opposed to real-world parallels), and essence (reflective). They also advocate APOS Theory in mathematics education (short for Action, Process, Object, and Schema), which looks like this:

Another paper, Computational Thinking is Critical Thinking: Connecting University Discourse, Goals, and Learning Outcomes (2016), points out that abstraction is common to both computational and critical thinking. The author (Kules) essentially makes the case for bridging the two types of thinking. The clear definition of computational thinking quoted below is helpful. Similarly, the Venn diagram below demonstrates the overlap between creative and critical thinking, which programming happens to inform; ‘abstraction & simplification’ sits right in the center.
“CompT can be defined as “a brain-based activity that enables problems to be resolved, situations better understood, and values better expressed through systematic application of abstraction, decomposition, algorithmic design, generalization, and evaluation in the production of an automation implementable by a digital or human computing device”.” (Selby & Woollard, 2014), cited in Computational Thinking is Critical Thinking

I’ll be the first to admit that “abstraction” might seem impotent if you can’t appreciate its complexity. That’s why I’m outlining its variability and utility. It’s better, as I’ve been advocating, to think of abstraction as a toolbox and broad methodology rather than a one-dimensional concept. The whole point is that abstraction is ubiquitous and something that absolutely everyone on the planet does; it’s part and parcel of thinking itself. But most people do it poorly and intuitively, taking shortcuts in thinking. Abstraction, made explicit, is a technique for zooming out to the big picture. Computer scientists, sociologists, and the public all still have a lot to learn from each other in this regard.
Then there is the “abstraction principle” itself from computer programming: ‘a basic dictum that aims to reduce duplication of information in a program… whenever practical by making use of abstractions.’ It is also known as ‘the “don’t repeat yourself” principle.’ This can be useful outside computer science in obvious ways. How much redundancy is there in the work world? How much of market diversity is just superficial and derivative products and services? How many photographers and microphones do you really need at a press conference? Likewise, it would be great to have one universal health care system rather than a race to the bottom among subpar providers. The principle of ‘downsizing’ is maximized in business as a matter of policy, yet we still see so much wasteful expenditure, redundant labour, and no jobs where they are needed. The principle of ‘parsimony’ is also integral to the definition of science, yet so much research is bloated with hot air and frivolous empiricism. All failures to abstract.
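In code, the “don’t repeat yourself” principle looks like this (a hypothetical sketch with made-up labels and figures): three near-identical lines collapse into one abstraction, so a change to the format is made once instead of three times.

```python
# Before: the same formatting logic written out three times.
#   print("Sales: " + str(round(1234.5, 2)) + " USD")
#   print("Costs: " + str(round(987.65, 2)) + " USD")
#   print("Profit: " + str(round(246.85, 2)) + " USD")

# After: the duplicated information is abstracted into one function.
def report_line(label, amount, currency="USD"):
    return f"{label}: {round(amount, 2)} {currency}"

for label, amount in [("Sales", 1234.5), ("Costs", 987.65), ("Profit", 246.85)]:
    print(report_line(label, amount))
```

The payoff is exactly the one the dictum promises: the knowledge of how a report line is formatted now lives in a single place.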
This is leading up to the challenge of AI — the ‘black box’ effect, in which we are creating something we don’t understand — and the pressing need for a social singularity to understand it. To bridge the discourses, at minimum we have to highlight the common concepts and themes and use analogies to strengthen the understanding of each discipline. The MIT Technology Review argues that the problem is getting out of hand, but also hints at how abstraction can map it:
“The many layers in a deep network enable it to recognize things at different levels of abstraction.” — MIT Technology Review
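That stacking of levels can be cartooned in a few lines of plain Python (entirely my own toy, not a real neural network): each “layer” re-describes its input at a higher level of abstraction, from raw intensities to edge-like features to a single abstract category.

```python
# A cartoon of layered abstraction (illustrative only, not a real model).
def layer_edges(pixels):
    # Lowest level: raw intensities -> crude "edge" features
    return [abs(a - b) for a, b in zip(pixels, pixels[1:])]

def layer_shapes(edges):
    # Middle level: edge features -> a single "shape" score
    return sum(edges)

def layer_label(shape_score, threshold=10):
    # Top level: numeric summary -> an abstract category
    return "object" if shape_score > threshold else "background"

pixels = [0, 0, 9, 9, 0, 0]
print(layer_label(layer_shapes(layer_edges(pixels))))  # object
```

Real deep networks learn their intermediate representations rather than having them hand-written, but the shape of the idea is the same: meaning emerges by composing levels of abstraction.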
Abstraction and AI
“Abstraction is not only interesting per se, but we believe it is at the basis of other forms of reasoning, for instance analogy.” — Abstraction in Artificial Intelligence and Complex Systems (2013)

I am by no means a computer scientist or an expert in AI, but I do otherwise consider myself a specialist of abstraction and sociology, so it is fortuitous that this book already exists to inform our approach: Abstraction in Artificial Intelligence and Complex Systems (2013). Saitta and Zucker devote the first 47 pages of the book to discussing abstraction in different fields before diving into its relevance for AI. It is nice to see such space dedicated to the diversity of the concept, whereas most computer science articles or blogs will dive right in with no reference to any abstraction outside the context of code. Unless you are a logician, content like this will appear as meaningless as it is scary. Ideally, programmers can work with us on complex social problems in their language.
“Abstraction is an elusive and multi-faceted concept, difficult to pin down and formalize.” — Abstraction in Artificial Intelligence and Complex Systems (2013)
The book is thick and rich in concepts, diagrams, and formulae. Pure abstraction is, after all, the condensed symbolic representation of information. I reckon this could be a sort of bible for programmers, for lack of a better word. What is important is the role abstraction plays in reasoning, and therefore in everyday affairs, as suggested by the first quote of this section and the one at the top of this post. This book and the AI/CS discourse activate the many vital sub-concepts of abstraction, such as generalization and approximation, so we can take a multi-faceted approach to the abstract deconstruction of a problem, and ultimately reason better.
Think tanks dedicated strictly to ‘reason’ are missing the mark, because reason has become beset and marred by moneyed interests and human delusion (or fallibility, if you will). In the closing sections of the book, the authors highlight “The Need for an Operational Theory of Abstraction,” which is the direction in which our journey continues. Ultimately, the deep question still unanswered is ‘what abstraction operators do we use for what tasks?’ It requires yet more layers of abstraction to solve.
“[I]f abstraction operators are to be routinely used in real-world domains, they must cope with the wealth of details that this implies.” — Abstraction in Artificial Intelligence and Complex Systems (2013)
Luckily, what little knowledge we have about educating AI can be supplemented by that we know of early childhood education, and can strengthen education in general as such. The opportunity to humanize AI is also a window to humanize ourselves and social institutions. The technological singularity is also a market opportunity for a social one. By using abstraction to map and define critical thought, we can begin to resolve old political debates at the same time we make technological breakthroughs. Tools like Root Cause Analysis (RCA), Critical Discourse Analysis (CDA), Abductive Logic Programming (ALP), and Knowledge Representation (KR) are exactly the kind of abstract methodologies that bridge this gap, and are where AI funding should go first — to The Abs-Tract Organization.
“Certainly, studying abstraction both per se and in applications is one of the most challenging directions of research in Artificial Intelligence and Complex Systems.” — Abstraction in Artificial Intelligence and Complex Systems (2013)
For what it’s worth, I should add that ‘artificial intelligence’ is a bit of a misnomer, for it now refers to a field far greater than what is implied simply by AI, including automation, anthropomorphization, augmented reality, and much more. Nevertheless, we are path-dependent on certain terms, until we can invent better ones (working on it). And while machine learning can by definition teach itself things, it needs a lot of guidance. When it comes to human concerns, AI will only know what we teach it to learn, will mimic what we do and say, and will take different forms customized by each user. And if we are going to teach computers to think, we need to think like computers; hence, computational thinking. AI outcomes will also depend on who creates it. Imagine a robot with a blank slate that could then be programmed and trained to assassinate. If it is possible, this will invariably happen. While some fantasize about robot armies, in truth, they imply the obsolescence of soldiers altogether.

The Singularity
“The use of the term “Singularity” comes from physics, where it describes a point of collapsed space-time, typically at the center of a black hole; the underlying claim is that all of our knowledge of how the universe works is irrelevant within a Singularity. This is the root of the metaphorical use of the term–after a Singularity event, everything we know will change in ways we can’t now understand.” — The Singularity and Society, 2009
Of course, the most famous proponent of the singularity is inventor Ray Kurzweil, but here is some other background knowledge to prepare us for a social singularity. In The Singularity and Society (2009), Jamais Cascio writes, “There’s too little discussion of how the social, cultural and political choices we make would shape the onset or even the possibility of a Singularity.” Here is professional skeptic Dr. Michael Shermer giving a talk on the Social Singularity: Why Things Are Getting Progressively Better (2011). And here, blogger David Bensen describes a ‘cultural singularity’ (2012), attributing it to information overload and the capacity for ‘frictionless sharing.’
In sociology, views of the singularity are split. On one hand, here is the world’s most eminent sociologist — Anthony Giddens — highlighting the technological singularity (2013). On the other hand, here is Noam Chomsky essentially saying the singularity is science fiction (2013); but, to steelman Chomsky’s argument, he is just saying that the hype is overblown and that it doesn’t solve our real problems. And here, Carl Mahoney explores the consequences of AI from a humanist perspective.
Metaphorical is the operative word in the above quote. The singularity does not have to be a specific event to be meaningful. In this sense, the singularity is inevitable, but we must be able to solve social paradoxes before ‘everything changes’ and some of the worst extant social policies become grandfathered into AI culture. There is some vague assumption that the latter won’t happen, but now Bluto is in the White House, and tech magnate Peter Thiel helped him get there. Ergo, Robocop is going to kill and imprison just as many black folk if people like Jeff Sessions are interpreting the law.
The question of sentience aside, technology has evolved a life of its own and is rapidly emancipating some sectors of society while running roughshod over others. A good portion of AI funding should go to tackling these issues, especially vis-a-vis sociology and abstraction. The paper The forthcoming Artificial Intelligence (AI) revolution: Its impact on society and firms (2017) suggests dramatic implications for society. The paper concludes:
“The greatest challenge facing societies and firms would be utilizing the benefits of availing AI technologies, providing vast opportunities for both new products/services and immense productivity improvements while avoiding the dangers and disadvantages in terms of increased unemployment and greater wealth inequalities.” — The forthcoming Artificial Intelligence (AI) revolution
I emphasize attending to the latter part of that sentence, regarding the dangers of unemployment and inequality. This is a social problem that the tech singularity sells itself as fixing, but does not actually address. A concrete solution, and a defining feature of a social singularity, would be Universal Basic Income: one principle in a set of features that would define such a singularity, a moment of equalization and harmonization. But as anyone with open eyes can see, there is far too much facile dispute and political gridlock for sociological evolution to keep pace with the technological singularity.
My short answer is always that this is due to socio-political stupidity, corruption, and generally a failure to abstract. As we can see in the following two charts, the summarization, schematization, and predictive power of abstraction is immense. The first depicts the distinct phases of technological revolution, including the current AI phase we are passing through. The second shows the proportional increase in service jobs as manufacturing and agriculture were replaced by robotization. What’s present and upcoming? The obsolescence and automation of service jobs by AI, and with that, massive economic upheaval and political instability. Point being: abstract.


I would argue that the social or sociological singularity represents the practico-political implications of the paradigm shift of metamodernism, insofar as ‘everything we know will change in ways we can’t now understand.’ In this blog I attempt to articulate exactly what those changes are in terms of abstraction and metamodernism. We are speeding towards the event horizon of global inclusion made possible by the internet and technology. Everything already has changed — but not for everyone yet.
Concepts like meta-governance are in the realm of AI now, and we need investment in sociological imagination and abstraction to humanize and socialize our technocratic overlords. Policies like Universal Basic Income are absolute imperatives, not only to redress past and current income inequality, systemic inefficiencies, and bureaucratic redundancies, but to anticipate and counteract future turbulence. This is nothing less than a revolution and paradigm shift, with which a whole host of positive policies will be bundled.
Conclusion
“Write a paper promising salvation, make it a ‘structured’ something or a ‘virtual’ something, or ‘abstract’, ‘distributed’ or ‘higher-order’ or ‘applicative’ and you can almost be certain of having started a new cult.” — Dijkstra, My hopes of computing science (1979)
The Dijkstra quote is funny, in that I claim abstraction is truly emancipatory and revolutionary, especially in the manifesto/film The Abs•Tract, which valorizes abstraction while invoking and parodying the cult aesthetic. Behind the joke, I do believe that salvation lies in the science and philosophy of abstraction, and I think Dijkstra did too. He may be mocking idealist manifestos, but that doesn’t alter the need for one. And it is with this in mind that AI developers and entrepreneurs need to level up their abstraction.
I could go deeper into each of the topics explored here, but all this abstraction must be parsed in manageable bits, for my own sake as well as the readers’. To synthesize: abstraction is the central concept in computational thinking and artificial intelligence, and yet it extends far beyond them, into philosophy, sociology, art, and other fields. The underlying principle is reason, which is nothing without humanism. The technological singularity is in part predicated on AI, so abstraction plays a key role in manifesting it, but only if it is guided by philosophical and sociological accounts of abstraction.
The current sociological crisis needs to be solved in concert with these other challenges, in the form of aiming for an explicit social singularity. We must consolidate and consult abstraction in order to make this happen, to minimize risk, and optimize progress. I invite computer scientists and AI entrepreneurs to consider the different perspectives we are providing and to contribute to our research and fundraising efforts.
“The effective exploitation of his powers of abstraction must be regarded as one of the most vital activities of a competent programmer.” — Dijkstra, The Humble Programmer (1972)
“Being abstract is something profoundly different from being vague … The purpose of abstraction is not to be vague, but to create a new semantic level in which one can be absolutely precise.” — Dijkstra
The Abs-Tract Organization (TATO) is a boutique research and media think tank, centered around the broad concept of “abstraction” and five other research streams.
If you appreciate the work we do, please support us on Patreon for $1.
Join and support our growing metamodern project at http://www.abs-tract.org and on twitter @TATO_tweets.


