Artifice Intelligence

Emma Beauxis-Aussalet
digitalsocietyschool
5 min read · Feb 27, 2019

On the Limitations of AI

Photo: E. Beauxis-Aussalet

Giving intelligence, even life, to inanimate matter is an old dream of humanity. Ancient mythologies tell of statues changed into humans: Galatea or Pandora in Greece, Ilmarinen’s bride of gold in Finland [1]. Less romantic myths include the mechanical creatures of Hephaestus, the Greek god of blacksmiths [2, 3], or the Golems of the Talmud and Eastern European folklore [4].

However informative of our fantasies and fate, these myths are nowhere close to describing today’s Artificial Intelligence (AI). Except the Golems perhaps.

Today’s AI is not intelligent per se. It functions either with static sets of rules (symbolic or model-driven AI), or with repetitive statistics and computations (machine learning or data-driven AI). Whether intelligence can be achieved with rules or computations, or both together, is not our topic here. Rather, we focus on two characteristics that impede both approaches: their static and repetitive nature. Can intelligence be achieved with static and repetitive representations of the world?

The numbers that AI computes are not static, e.g., they evolve as measurements of real-world situations are gathered over time. The conditions that trigger machines to execute man-made rules and prescribed actions are not static either. However, the frameworks that execute them are fundamentally static. The sets of variables are static, although the variables’ values may change. The man-made rules that can be executed are static. The set of conditions that can trigger the rules is predefined. Although the status of the conditions is dynamic, and can reflect specific circumstances, the set of potential conditions is finite. In essence, the set of metrics that AI uses to represent the world, and the possibilities within which its computerised models can evolve, are static.

The lines of code in computer programs, i.e., the framework that updates the metrics’ values or the status of computerised models, and executes prescribed actions as a form of intelligent response to given situations, are static too, and are thus applied repetitively. Can intelligence be achieved within such static and repetitive frameworks? What is not reflected in such frameworks, and can eventually yield unintelligent responses? AI systems are like the prisoners in Plato’s cave [5], or the victims of Magritte’s treachery of images [6].
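To make this concrete, here is a minimal, purely illustrative sketch of such a rule-based system. The variables, thresholds, and actions below are invented for the example; the point is that the set of variables, the conditions that can be tested, and the actions that can be prescribed are all fixed when the program is written, while only the incoming values change.

```python
# A hypothetical rule-based system: variables, conditions and actions are
# all fixed at design time; only the measured values flowing through change.

VARIABLES = ("temperature", "humidity")  # static set of variables

# Static, man-made rules: predefined conditions mapped to prescribed actions.
RULES = [
    (lambda obs: obs["temperature"] > 30.0, "turn_on_cooling"),
    (lambda obs: obs["humidity"] > 0.8, "turn_on_dehumidifier"),
]

def respond(observation):
    """Apply the same static rules, repetitively, to each new observation."""
    assert all(v in observation for v in VARIABLES), "unknown situation shape"
    return [action for condition, action in RULES if condition(observation)]

# The values evolve over time, but anything outside the predefined variables
# and conditions (a power outage, a broken sensor) is invisible to the system.
print(respond({"temperature": 35.2, "humidity": 0.4}))  # ['turn_on_cooling']
print(respond({"temperature": 22.0, "humidity": 0.9}))  # ['turn_on_dehumidifier']
```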

For sanity checks, computer scientists can rely on George Box’s maxim: “All models are wrong, but some are useful” [7, 8]. Models do not represent the whole truth, but valuable parts of the truth. The closer models come to the truth, the more complex they need to be, e.g., encoding the subtleties of reality would require countless parameters. William of Occam posited that omitting part of the world’s complexity is what confers strength to our models. Simpler models are easier to apply in practice. More importantly, preferring simpler models leads to laying down fundamental principles that can be tested across a larger range of cases. Principles that are more fundamental, and thus more widely applicable, and more testable, and thus more reliable, provide us with highly valuable, yet partial, truths.
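This trade-off can be sketched with a toy example (the data and models below are invented for illustration): a simple model with few parameters and a complex one with many, fitted to the same noisy measurements. The complex model matches the known measurements more closely, but the simple one typically remains useful on new, unseen cases.

```python
import numpy as np

rng = np.random.default_rng(0)

# Noisy measurements of a phenomenon assumed to be roughly linear.
x_train = np.linspace(0, 10, 15)
y_train = 2.0 * x_train + 1.0 + rng.normal(0, 2.0, x_train.size)

# New, unseen measurements of the same phenomenon.
x_test = np.linspace(0, 10, 50)
y_test = 2.0 * x_test + 1.0 + rng.normal(0, 2.0, x_test.size)

simple = np.polyfit(x_train, y_train, deg=1)    # few parameters
complex_ = np.polyfit(x_train, y_train, deg=9)  # many parameters

def mean_squared_error(coeffs, x, y):
    return np.mean((np.polyval(coeffs, x) - y) ** 2)

# The complex model is "less wrong" on the data it has already seen,
# but typically more wrong on data it has not seen.
print("train:", mean_squared_error(simple, x_train, y_train),
      mean_squared_error(complex_, x_train, y_train))
print("test: ", mean_squared_error(simple, x_test, y_test),
      mean_squared_error(complex_, x_test, y_test))
```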

George Box acknowledged the value and limits of preferring simpler models: “The practical question is how wrong do [models] have to be to not be useful” [7, 8]. There are cases for which models are wrong, and AI systems unintelligent. Our AI models are partial representations of the truth. The information they contain may be true, but it remains incomplete. Using partial representations confers on AI a certain range of applications… and a range of limitations too. AI’s intelligence is both enabled and limited by the measure of its models’ simplifications. Acknowledging the extent of AI’s limitations is a key condition for applying AI in the cases for which it provides intelligent responses.

Photo: E. Beauxis-Aussalet

AI is made of artifices and tactics. Its stratagems may be clever, but its nature remains that of forgery, a mimicry of intelligence. AI systems fall short of emulating reality or human intelligence. Their data is partial and incomplete; their programs are static and repetitive. Overall, the concepts and behaviours they integrate are static, repetitive, partial and incomplete. AI systems can be adequate for most situations they are intended for, and inadequate for situations they are not equipped to address. Hence we can adapt George Box’s maxim: All AI systems are wrong, but some are useful. How wrong do they have to be not to be useful? For now, AI systems are like Golems: obedient, and as right or wrong as the instructions they are given by their human creators.

We, users of AI systems, should develop our own intelligence about AI’s limitations. We need to acquire an understanding of AI’s artifices and mimicries. We need an artifice intelligence to use artificial intelligence wisely. Perhaps then, after developing our own artifice intelligence, we would be able to encode its basic principles in our artificial machines. Perhaps then, AI systems could acquire the intelligence of their own artifices. This could confer a form of self-awareness on AI systems.

When do AI systems lack the variables for encoding the right information? When do AI systems lack the instructions (rules or lines of code) to act intelligently? And what should these variables and instructions be? These are, for a start, the basic questions we need to ask to develop our artifice intelligence. AI systems could, one day, run such analyses of themselves. Yet it would remain very challenging for AI systems to acquire new variables and instructions, and to apply them meaningfully. By contrast, this process is more natural for humans: our mental models are flexible, and they integrate an awareness of the unknown.

After all, humans even consider intelligence itself as unknown. No universal definition of intelligence, and no universal test of intelligence, is available. This is perhaps due to the plural nature of intelligence. This lack of a definition reflects the flexibility of our understanding of the world. By leaving our mental concepts partly undefined, we allow space for a more accurate understanding of the world to emerge. We can consider this process as the construction of “justified beliefs”, a core concern of epistemology, and a practical framework for developing intelligence.

William of Occam recommended that “Plurality is not to be posited without necessity” [7]. Perhaps, for developing intelligence, plurality is a necessity. Intelligence may rely on embracing the plurality of the models that are potentially relevant for specific situations, but not relevant for all situations.

Photo: E. Beauxis-Aussalet

[1] https://en.wikipedia.org/wiki/Ilmarinen#Ilmarinen's_Bride_of_Gold
[2] https://en.wikipedia.org/wiki/Hephaestus
[3] https://en.wikipedia.org/wiki/Talos
[4] https://en.wikipedia.org/wiki/Golem
[5] https://en.wikipedia.org/wiki/Allegory_of_the_Cave
[6] https://en.wikipedia.org/wiki/The_Treachery_of_Images
[7] https://en.wikipedia.org/wiki/Occam%27s_razor
[8] Box, G. E. P.; Draper, N. R. (1987), Empirical Model-Building and Response Surfaces, John Wiley & Sons.
[9] Alan Baker (2010) [2004]. “Simplicity”. Stanford Encyclopedia of Philosophy. California: Stanford University. ISSN 1095–5054.

The Digital Society School is a growing community of learners, creators and designers who create meaningful impact on society and its global digital transformation. Check us out at digitalsocietyschool.org.
