Anthropology & Artificial General Intelligence

Sunil Manghani
Published in Electronic Life
8 min read · Sep 24, 2022

--

The Other Side of the Moon, Cover

During a recent stay in Tokyo, I was grateful to be alerted to a fascinating collection of essays on the subject of Japan, The Other Side of the Moon, by the revered anthropologist Claude Lévi-Strauss (1908–2009). The book’s running thread is a careful navigation of what is ‘special’ about Japanese culture, while also staying true to Lévi-Strauss’ structuralist principle of locating invariance across worldly mythological thinking. The topic might seem some way off from a consideration of contemporary artificial intelligence, but there are two rather neat passages worth considering. First, however, let us consider some current ‘myths’ of AI.

It seems every day there are new media stories relating to AI and big data, whether introducing new technologies and techniques, or debating their societal impacts and ethics. Chosen somewhat at random, I can recount three articles I came across in my newsfeed one morning. The articles offered the following titles, which ran in succession, so that to the glancing eye they appeared almost as a single paragraph: firstly, ‘Google Docs will now practically do your writing for you’; followed by ‘A Hybrid AI Just Beat Eight World Champions at Bridge – and Explained How It Did It’; and finally, and perhaps most strikingly, ‘The subtle art of language: why artificial general intelligence might be impossible’.

Here, in a nutshell, we have the fears, the aspirations, the possibilities and the seemingly insurmountable aspects of AI. In short, they are part of an enduring contemporary mythology (imagine what Roland Barthes might have had to say on the subject!). In the case of Google Docs, the clickbait headline led to a more modest, but nonetheless intriguing claim, drawn from a Google blogpost, that ‘the company is adding a number of new “assistive writing features” to the word processing software, including synonym and sentence structure suggestions’. The fact that Google are engaging increasingly with stylistics is not without significance. The new service will apparently flag any ‘inappropriate’ language (so putting in doubt, for example, the opening gesture of Barthes’ Writing Degree Zero, which turns on the formal value of expletives), and also looks for ‘instances in which the writer would be better served by using the active rather than passive voice’ (although it’s not clear if the reporter deliberately uses the passive voice here, and/or chose not to use Google Docs). There is also further use of autocomplete suggestions, and, we are told, capability in ‘summarizing the most salient information in any document, eliminating the need to wade through lengthy reports’. I must admit, this latter function did spark my interest (maybe out of desperation, with so much reading accumulating, across numerous formats and platforms). In each case, the byword is ‘productivity’: improving quality and reducing time.

The second article, on AI beating eight world champion bridge players, is perhaps more interesting, if less ‘useful’. Unlike chess, which ‘fell to number-crunching supercomputers long ago’, bridge is thought to be more complex, being ‘a game of incomplete information, cooperation, and sly communication’. For the tech-minded, a key point of this article is the ‘hybrid’ nature of the AI employed, which, in this case, essentially brings together Symbolic AI (whereby ‘software engineers hard code the rules the AI needs to know to succeed’), which enables AI to play a game successfully, but not necessarily win, and Deep Learning, which, as seen with AlphaGo beating a Go master, enables a computer to master a game, yet has the drawback of being based upon an unfathomable artificial neural network. Ironically, the AI ‘master’ – being a black box – can be very difficult to learn from. In the case of the bridge-playing software, NooK, the advantage is that it first learns the rules of the game, so using ‘background knowledge much in the way that we augment our own learning with information from books and previous experience’, which then makes it easier to look back over the coding that arises from Deep Learning. Seemingly without irony, the makers refer to this as a ‘white box’. However, despite the software performing to a high level, there is an important caveat. The article ends by explaining that the demonstration of the software avoided the bidding process and placed all players in the declarer role, so removing ‘challenging and nuanced parts of the game in which partners must communicate with each other and deceive their opponents’.

The shortfall of the bridge-playing AI leads to the final article, on the complexity of language. The byline of the article reads: ‘Until robots understand jokes and sarcasm, artificial general intelligence will remain in the realm of science fiction’. The article is concerned with the pursuit in AI of developing (artificial?) consciousness, which is referred to as ‘artificial general intelligence’. Work in this area covers many things, but seems to be united by one thing: to date it has failed. Referencing a survey of 72 globally independent research projects, the article notes that not a single one has ‘produced conscious robots. Rather, as it stands, we have super-intelligent AI that, on the whole, is very narrow in its abilities’. We are reminded, again and again, of what AI cannot do. At the heart of this problem is the abstractness, malleability and creativity of language (cf. Pinker, Words and Rules). The article concludes: ‘Our ability to think abstractly and creatively … is quite challenging to understand. And it is impossible to code for something we don’t understand. That is why novels and poems written by AI fail to create a coherent plot or are mostly nonsensical’. If artificial general intelligence is at all possible in the future, so the article contends, it will require ‘a full and comprehensive understanding of language and its countless nuances’, to which we might add, not just an understanding but a practice of language (unless, of course, we look elsewhere to define ‘intelligence’; but then it might just be a matter of being lost in translation).

With these AI myths in mind, let us return to Claude Lévi-Strauss’ book of essays, The Other Side of the Moon (2013). What if the above news articles represent (in brief) a form of anthropology – the idea being that AI is a pursuit less of something artificial, and more an enquiry into human culture and intelligence? Lévi-Strauss, in his opening essay, sets out both practical and theoretical reasons why he is not well placed to speak on Japanese culture (despite doing so anyway!), and glosses the anthropologist’s dilemma in ‘knowing’ a foreign culture:

For someone who was not born there, who did not grow up there, who was not educated there, a residue containing the most intimate essence of the culture will always remain inaccessible, even for someone who has mastered the language and all the other external means of approaching it. For cultures are by nature incommensurable. All the criteria we could use to characterize one of them come either from it, and are therefore lacking in objectivity, or from a different culture, and are by that very fact invalid. To make a valid judgment about the place of Japanese culture (or of any other culture) in the world, it would have to be possible to escape the magnetic attraction of every culture. Only on that unrealizable condition could we be assured that the judgment is not dependent either on the culture being examined or on the observer’s culture, from which the observer cannot consciously or unconsciously detach himself.

Is there any way out of this dilemma? Anthropology, by its very existence, believes there is, since all its work consists of describing and analyzing cultures chosen from among those most different from the observer’s, and of interpreting them in a language that, without misconstruing the originality and irreducibility of these cultures, nevertheless allows the reader to approach them. But on what conditions and at what cost?

Claude Lévi-Strauss, The Other Side of the Moon, pp.3–4.

The eloquent phrase ‘the magnetic attraction of every culture’ (and the idea of the need to ‘escape’ it to gain objectivity) might begin to evoke something of AI’s ability to position itself from afar – or at least the idea that, for AI to be successful, it cannot simply render a model of human intelligence; it needs to make that model work practically. We might think of Flaubert’s characters, Bouvard and Pécuchet, who tirelessly seek out an understanding of all manner of knowledge, only to find they largely bungle any practical pursuit. As one critic puts it, ‘if Flaubert has satirical intentions towards them, it is not because they are intellectually mediocre, but because they would put knowledge to use’ (Leo Bersani, ‘Flaubert’s Encyclopedism’, in Novel: A Forum on Fiction, 1988, Vol. 21, No. 2/3, pp.140–146). Lévi-Strauss’ response to this immediate problem of speaking on Japanese culture is to turn to his love of music in general, and specifically his new appreciation of Japanese musical composition. Noting his inability to understand the ‘marked differences of era, genre, and style’, but also remarking on the more general difficulty of fully knowing the sound of music from the distant past, he draws out the positive gains from a fragmentary state of knowledge – in short, from ‘error’. He does so by way of a vivid analogy to astronomy:

…it may also be that the inevitably fragmented state of knowledge of someone contemplating a culture from the outside, the gross errors in assessment he is liable to make, have their compensation. Condemned to look at things only from afar, incapable of perceiving their details, it may be owing to the anthropologist’s inadequacies that he is rendered sensitive to invariant characteristics that have persisted or become more prominent in several realms of the culture, and which are obscured by the very differences that escape him. In that respect, anthropology can be compared to astronomy at its very beginnings. Our ancestors contemplated the night sky without the use of telescopes and with no knowledge of cosmology. Under the names of constellations, they distinguished groups lacking all physical reality, each formed of stars that the eyes perceive as being on the same plane, even though they are located at fantastically different distances from earth. The error can be explained by the distance between the observer and the objects of observation. It is thanks to that error, however, that regularities in the apparent movement of the celestial bodies were identified early on. For millennia, human beings used them – and continue to use them – to predict the return of the seasons, to measure the passage of time at night, and to serve as guides on the oceans. Let us refrain from asking more from anthropology; however, in the absence of ever knowing a culture from the inside, a privilege reserved for natives, anthropology can at least propose an overall view – one reduced to a few schematic outlines, but which those indigenous to the culture would be incapable of attaining because they are located too close to it.

Claude Lévi-Strauss, The Other Side of the Moon, pp.6–7.

The fuller debates of anthropology’s positionality (and anthropology’s ‘guilt’) go beyond the purview of this entry, but, to remain on the terrain of analogy, our astronomical ‘reality’ (our ability to see only upon a single plane what is multiple) offers perhaps a fitting parallel for our current foray into artificial intelligence. At the moment, plotting ‘schematic outlines’ across a myriad of ones and zeros, our current ‘errors’ are nonetheless leading to significant regularities (for good or bad). The analogy is equally pertinent for our investigations into the brain. While the universe may have some 200 billion trillion stars, the brain nonetheless has up to 100 billion neurons ‘of many species’ and in ‘many fantastic shapes’, and, as we are just starting to understand, it is the complexity of their interactions (the fact that neurons do not just ‘compute’ but also communicate, or signal) that makes for such vastness (Sebastian Seung, Connectome). Artificial general intelligence is indeed a long, long way off, but its enquiry potentially takes us through the human terrain in ways we might never have quite seen for ourselves.


Sunil Manghani

Professor of Theory, Practice & Critique at the University of Southampton, Fellow of the Alan Turing Institute for AI, and managing editor of Theory, Culture & Society.