Artificial intelligence and culture — do we need computer whisperers?

Ivana Uspenski
Published in Computable Culture
Dec 5, 2014 · 7 min read


Around this time last year, I was preparing a paper presentation for an Artificial Intelligence conference. As an academic researcher, this ought to have gone according to the usual drill: I had presented at various conferences before and have done so since. Nevertheless, back then I was quite nervous and anxious. It was the first time my proposal had only barely made it through the selection process, having been seriously questioned by the review committee.

The night before the conference, I was making the changes the reviewers had required to what I was about to present. I was trying to “tone my paper down” after having been labelled a ‘technological determinist’ writing from the imaginary and unsubstantiated world of paranoid science fiction. My paper dealt with artificial intelligence in culture and linguistics, but, to be honest, most of it pointed out the less evident dangers of artificial intelligence’s speedy development. Artificial intelligence should be all about humanity’s improvement and benefit. Right?

Nevertheless, I did tone my paper down, presenting a more picturesque version of it, with light references to some of the more radical points I wanted to make. The presentation went quite well, even stimulating some moderate discussion. I began to think that perhaps I had been too radical in my conclusions, and after the conference closed I did not give the topic much thought. That is, until recently.

Three things made me stop and reconsider. The first was a statement Elon Musk made not so long ago, warning that artificial intelligence, if we are not really careful, might wipe out the whole of humanity. The second was James Barrat’s book Our Final Invention, which I read in one go, spending more than one sleepless night thinking it over. The third was a very recent BBC News article about Stephen Hawking’s similar warning that artificial intelligence could end mankind. It was clear: yes, artificial intelligence can and will make our lives easier, and in so many ways it is already doing so, but it is also a very dangerous toy to play with. If we are not careful, it might just be the last invention we ever devise.

The above-mentioned sources encouraged me to rethink that ‘deterministic’ paper of mine, and after giving it one more quick read I have to say that I regard it as science fiction no more than I did before. I think it deals with a point we have already reached in our cultural development, and that humankind’s autonomy as the dominant producer and consumer of cultural artefacts and texts is at stake.

Most of the sceptical considerations regarding artificial intelligence come from scientists and engineers, who deal with its problems and issues from a deeply technical point of view. This is quite understandable, as these are precisely the practices from which superior artificial intelligence is expected to emerge. On the other hand, there is very little, almost no, discussion of artificial intelligence and its impact on language, communication and culture, fields in which, I would go as far as to say, it has already taken its toll. Even though discussions in these disciplines are far more speculative, and informed by far less technical expertise, these are precisely the fields I find most intriguing.

Here, I need to go back to the early twentieth century and a Swiss theorist named Ferdinand de Saussure, a highly influential linguist who came up with the concept of langue. Langue was his name for language as a shared, abstract system, the most complex and encompassing communication system we have. Whatever we think, whatever message we try to convey, takes the shape of language. It is the metasystem of all communication. At least it was, until the birth of the computer, and especially until the spread of the Internet in the 1990s, which finally brought computers together and enabled them to communicate easily and efficiently.

We were so impressed with these new gadgets of ours, with how efficient and easy it was to email rather than write letters, to chat in chat rooms rather than meet in person, to send our photos as attachments to friends and relatives living so many miles away! We were so mesmerized that by now we are simply hooked. So much so that we are failing to notice one important thing: we have become dependent on this technology. What also tends to escape us is that in all of this computer-facilitated communication, everything we write, say or send gets (1) stored and saved, and (2) translated into binary code so that it can be computed, understood and shared by computers.

What does this really mean? If I want to send a quick note to my friend from my iPhone saying that I will be a couple of minutes late for our meeting, even though I type it in my own language and my friend receives it in the very same form, what actually happens is that what I write gets translated, within the weird system of transistors in my tiny iPhone, into sets of ones and zeroes. So the first to read my note is not really my angry, impatient friend but my little iPhone. Not only that: this little device is free to decide what it wants to do with my note, depending on who knows how many hidden processes running within it, across numerous apps, programmes and crawlers. Read it, store it, share it with other devices, analyse it to offer me my next purchase option or simply to work out where I will be at a certain point in time, what I will do and whom I will meet. Who knows when, and to whom, this information might become useful. Remembering such data is not a big problem for my iPhone. Unlike people, machines have the option to choose never to forget.

If we go back to my poor friend waiting impatiently for me, in order for her to be able to read my message at all, she would need an adequate piece of software and an interface to translate this computer-binary language that my iPhone had produced in a zillionth of a second back into our own human langue.
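To make this round trip a little more concrete, here is a minimal Python sketch of the translation my note goes through. It is purely illustrative: plain UTF-8 text encoding stands in for the many layers of encoding a real phone and network actually use, and the function names are my own.

```python
# Minimal sketch (illustrative only): a short note is encoded into the bits a
# device actually stores and transmits, then decoded back for a human reader.
# UTF-8 stands in for the many encoding layers a real phone and network use.

def to_bits(message: str) -> str:
    """Translate a human-readable note into ones and zeroes."""
    return " ".join(f"{byte:08b}" for byte in message.encode("utf-8"))

def from_bits(bits: str) -> str:
    """The 'software and interface' step: turn the binary back into human langue."""
    return bytes(int(chunk, 2) for chunk in bits.split()).decode("utf-8")

note = "Running five minutes late!"
wire_form = to_bits(note)       # what the device actually handles
print(wire_form)                # '01010010 01110101 01101110 ...'
print(from_bits(wire_form))     # what my friend finally gets to read
```

The particular encoding does not matter; what matters is that every step of the journey happens in machine-readable form first, and becomes human-readable again only at the very end, through software built for that purpose.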

The fact that for the last two decades or so we have been facilitating and speeding up our communication this way, ‘translating’ whenever possible all texts and communication into digital code for easier archiving and dissemination, has silently and discreetly established this code as the dominant code of our culture. If one wants to survive in today’s communication economy, one needs to rely on computers as communication mediators. We need to address the computers in their own binary language (luckily, we have software doing the translation for us here) if we want any of our communication to happen at all. Just imagine, however, if there were no software or interface out there to help us. Imagine trying to switch the tiny transistors within that iPhone on and off yourself, trying to guess the right order of zeroes and ones needed to send your email message. Even if it were somehow physically manageable, it would probably take you more than a day to produce a single sentence, and only then if you were highly proficient in coding and engineering. A computer’s processor performs this in a fraction of a second, and processors will only get faster.

Digital code, on which all software runs, has unquestionably become a new alphabet, and on it a new language, understood as langue, is based. This common digital language, or digilangue, to adapt de Saussure’s term, enables computers to receive information, compute it and, with the help of software and an interface, present it to people. At the same time, however, digilangue enables machines to communicate easily among themselves, and this part of the communication, without software to act as translator and interface, escapes us. It is beyond us. What I want to stress is that it is precisely on this level that artificial intelligence might come up with a plan that we humans might not like so much. We will not be able to tell whether it is happening. The computers will be whispering it among themselves in a language we will not be able to pick up. It will be the part of the algorithm not written by us, happening somewhere within the ‘black box’ of a computer’s programming code.

Still, at the risk of being labelled a ‘determinist’ and considered to be wandering in the realm of paranoid science fiction, I would conclude that by allowing the machines to read our data, memories, history, everything, we have allowed our human language, the dominant metalanguage of culture, to be read. By increasingly having to digitize all our texts, we are abandoning humankind’s unquestioned position of superiority in the cultural ‘food chain’. That is not even to mention placing the computers in command of more data and information than humanity has ever held. So much so that even the relatively simple algorithms of today, built on little more than big-data calculations, are able to simulate or even replace almost all aspects of human activity. Even writing novels, composing music, painting… Even creative work, which is still mostly considered one of the last refuges of autonomous human intelligence (just consider Jonas Lund’s work on algorithmic art).

There is much more to say here, much more that I tried to say in that paper of mine before I had to edit it. Even so, it will be published soon enough, and maybe some other ‘technological determinists’ out there will recognize the issues and advance the topic more elaborately and knowledgeably than I have managed to. For that, I keep my fingers crossed.


Media theorist, futurist, overzealous thinker. Offended if called a technological determinist.