AI is not a technology

Marco Gillies
Machine Learning for All
4 min read · Apr 24, 2018

If you’ve been following my writing on Virtual Reality you might have seen my posts about Jaron Lanier, one of the great pioneers of VR, who has recently published a new book, “Dawn of the New Everything: A Journey Through Virtual Reality”. The book is a memoir of his work on VR and a philosophy of VR that brings in many diverse themes, including Lanier’s views on AI and machine learning.

One theme in particular is very close to how I’ve been thinking about the idea of Artificial Intelligence, so I thought I would share it.

Wrapping Paper

Lanier sums up his view like this:

Many of my colleagues think of AI as something we build, while I think of it as wrapping paper that we put on what we build

What he is saying is that what makes something “AI” is not the technology. It isn’t, for example, the machine learning algorithms. It is how we think about that technology.

Believe in the AI overlord

What Lanier means by AI, in this context, is the idea of a computer or algorithm as an autonomous, intelligent being, very much like, or superior to, a human in its capabilities.

This is very much like the AI of science fiction, where the robots are their own characters, with personalities and motivations, ranging from helpful servants to ruthless AI overlords.

This is one of the dominant views of technology: that it will become our equal and eventually a superior being that will take over from us.

A Philosophy

The thing to bear in mind is that this viewpoint is very much at odds with the current state of technology. Even the most advanced deep learning algorithm is basically just a big statistical number cruncher. It adds up numbers and applies non-linear functions to them (and works as well as it does because it is adding up lots and lots and lots of numbers).
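To make that concrete, here is a minimal sketch of what a single neural network layer actually computes: a weighted sum followed by a non-linear function. (This is an illustration in plain Python with NumPy; the layer sizes, random weights and choice of ReLU are my own illustrative assumptions, not anything specific from Lanier or any particular system.)

```python
import numpy as np

def dense_layer(x, W, b):
    """One neural network layer: a weighted sum plus a non-linearity."""
    weighted_sum = W @ x + b               # just adding up lots of numbers
    return np.maximum(0.0, weighted_sum)   # ReLU: a simple non-linear function

# A "deep" network is only this step repeated, at vastly larger scale.
rng = np.random.default_rng(0)
x = rng.normal(size=4)                                     # input features
h = dense_layer(x, rng.normal(size=(8, 4)), np.zeros(8))   # hidden layer
y = dense_layer(h, rng.normal(size=(2, 8)), np.zeros(2))   # output layer
print(y)
```

Nothing in that computation is an “intelligent being”; the being is something we read into it.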

So this is a view of what the technology is, or what it could be. Machine learning is not innately a Robot Overlord; it is just us who think about it that way.

Less dramatically, AI is a philosophy of technology: the view that a technology is an autonomous, intelligent being. It is how we think about the technology; how we imagine it.

It is a philosophy that has social consequences: many of us have started to fear technologies that we label AI because they will “take over”. There are plenty of good reasons to fear a technology because of its real-world effects, but this particular fear is a consequence of how we think about the technology.

It is also a philosophy that has more practical consequences for the development of technology. A lot of Machine Learning researchers aim to eliminate all human input from the learning process, because they explicitly or implicitly believe it should be autonomous. This means downplaying the human tweaking that is key to machine learning, even though celebrating that human input could result in massive improvements to the technology.

There are other Philosophies

If AI is just a way of thinking about technology, Lanier continues, then we can think about it in other ways too.

One other philosophy of technology that we commonly use comes from Human-Computer Interaction: computers are tools, or parts of human activity. We can think of machine learning technology not as an autonomous entity, but as something more like a smartphone: a tool that we use in our everyday activities.

This encourages us to focus not only on the algorithms, but also on the user interface. As Lanier says:

… if there’s a goal aside from the fantasy [of an AI being], such as, say, to make analyzing medical records much more efficient, then I always argue that we should try to separate out the algorithms relevant to the task — analyzing medical records — and see if we can design a user interface [that makes] the results as clear as possible without bringing in imaginary beings. In my experience, when we do that, it’s slow, hard work, but results often improve.

So we can think of almost any technology in different philosophical terms. We don’t have to think of a smartphone as a tool; we could think of it as an “intelligent” assistant (a term that is often used). I’ve always thought it is interesting to think of the iPhone as an example of Alan Turing’s “Universal Machine”. But importantly, we can think of a technology like machine learning as a tool that integrates with our human processes.

Philosophies have practical implications

What is more, the way we think about a technology has an important effect on how we develop it. If we think of machine learning as an autonomous being, we will try to eliminate all human input from the process. If, on the other hand, we take an HCI philosophy, we will celebrate the human input and start designing interfaces that make interacting with machine learning easier, as in the sketch below.
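As a rough illustration of what that HCI philosophy can look like in code, here is a hedged sketch of a human-in-the-loop labelling loop: the model proposes, a person confirms or corrects, and every correction feeds straight back into training. The `ask_human` function and the deliberately simple nearest-centroid model are hypothetical stand-ins of my own, not anything from Lanier’s book or a specific system.

```python
import numpy as np

def train_centroids(X, y):
    """A deliberately simple model: one centroid per class."""
    return {label: X[y == label].mean(axis=0) for label in np.unique(y)}

def predict(centroids, x):
    """Assign x to the class with the nearest centroid."""
    return min(centroids, key=lambda label: np.linalg.norm(x - centroids[label]))

def ask_human(x, suggestion):
    """Hypothetical interface point: show the model's suggestion and let a
    person confirm or correct it. Stubbed out here; in a real system this
    is exactly the user interface we would design carefully."""
    return suggestion

# Human-in-the-loop: the model proposes, the person decides,
# and every decision becomes new training data.
X = np.array([[0.0, 0.0], [0.1, 0.2], [1.0, 1.0], [0.9, 1.1]])
y = np.array([0, 0, 1, 1])
for x_new in np.array([[0.2, 0.1], [0.8, 0.9]]):
    model = train_centroids(X, y)
    label = ask_human(x_new, suggestion=predict(model, x_new))
    X = np.vstack([X, x_new])
    y = np.append(y, label)
```

The interesting design work is all hidden inside `ask_human`: that is where an HCI philosophy puts the effort that an “autonomous being” philosophy tries to engineer away.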

The second approach is likely to result in significant improvements, because we are making more use of our human capabilities to help the technology. Not only that, more human involvement might mitigate some of the risks of fully autonomous AI. For example, I recently wrote about how Mark Hammond has argued that being more explicit about machine learning can help make its results more understandable.

What is more, this approach is more honest. Lucy Suchman has argued in her book “Human-Machine Reconfigurations” that human input is vital to what is made to appear as autonomous behaviour in technology, and that the feeling of autonomy is created by the way humans interact with computers.

If human guidance is essential to a technology like machine learning, let’s acknowledge it and build interfaces to improve that guidance. That is, in essence, a core idea of what we call “Human-Centered Machine Learning”.


Virtual Reality and AI researcher and educator at Goldsmiths, University of London and co-developer of the VR and ML for ALL MOOCs on Coursera.