James Bridle: Machine Learning in Practice

James Bridle is a writer, technologist and artist now based in Athens. We asked James to tell us about his recent work, Cloud Index, and how it relates to the history and implications of intelligent machines…

Cloud Index (2016), James Bridle

There’s an old story about machine learning that’s always amusing to tell. In fact, it’s been retold so many times in the literature of the subject, in so many ways, that it’s rarely attributed — it is, in all probability, apocryphal. It’s a myth, but then myths have always been how we tell ourselves grand stories about reality, and this story is no less useful for being one. It goes like this.

Once upon a time, the US army decided to develop a computer system for detecting camouflaged tanks. They built a neural network — a kind of artificial brain — and trained it on hundreds of photos of tanks hidden among trees, and hundreds of photos of trees without any tanks, until it could tell the difference between the two types of pictures. And they saved another few hundred images which the network hadn’t seen, in order to test it. When they showed it the second set of images, it performed perfectly: correctly separating the pictures with tanks in them from the ones without tanks. So the researchers sent in their network — and the army sent it straight back, claiming it was useless.

Upon further investigation, it turned out that the soldiers taking the photos had only had a tank to camouflage for a couple of days, when the weather had been great. After the tank was returned, the weather changed, and all the photos without a tank in them were taken under cloudy skies. As a result, the network had learned to discriminate not between tanks and empty woodland, but between weather conditions: it was very good at deciding whether the weather in the photograph was sunny or overcast, but not much else. The moral of the story is that machines are great at learning; it’s just very hard to know what it is that they’ve learned.
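To see how easily this happens, here is a minimal sketch in Python, with synthetic “photos” and plain logistic regression standing in for the army’s photographs and neural network. Every dataset, number and feature here is an illustrative assumption, not a reconstruction of the actual experiment. The classifier scores perfectly while the confound holds, and collapses the moment tanks appear under cloudy skies:

```python
# A toy version of the tank parable: the training labels are perfectly
# confounded with brightness, so the model learns the weather instead.
import numpy as np

rng = np.random.default_rng(0)

def make_photos(n, tank, sunny):
    """Return n fake 8x8 'photos' as flat 64-pixel vectors.

    Sunny scenes are bright overall; a tank adds a faint two-pixel blob.
    """
    base = (0.7 if sunny else 0.3) + 0.05 * rng.standard_normal((n, 64))
    if tank:
        base[:, 27:29] += 0.1  # the 'tank': a subtle bump in two pixels
    return base

# Training data reproduces the story's confound:
# tanks photographed in sunshine, empty woodland under cloud.
X_train = np.vstack([make_photos(200, tank=True,  sunny=True),
                     make_photos(200, tank=False, sunny=False)])
y_train = np.array([1] * 200 + [0] * 200)

# Plain logistic regression by gradient descent, standing in for the network.
w, b = np.zeros(64), 0.0
for _ in range(500):
    p = 1 / (1 + np.exp(-(X_train @ w + b)))       # predicted P(tank)
    w -= 0.5 * (X_train.T @ (p - y_train)) / len(y_train)
    b -= 0.5 * (p - y_train).mean()

def accuracy(X, y):
    return (((X @ w + b) > 0) == y).mean()

# Held-out photos with the SAME confound: looks perfect, as in the story.
X_same = np.vstack([make_photos(100, True, True), make_photos(100, False, False)])
print("confounded test: ", accuracy(X_same, np.array([1] * 100 + [0] * 100)))

# Break the correlation: tanks under cloud, empty scenes in sunshine.
X_flip = np.vstack([make_photos(100, True, False), make_photos(100, False, True)])
print("deconfounded test:", accuracy(X_flip, np.array([1] * 100 + [0] * 100)))
```

The model never saw “weather” as a concept; it simply found that overall brightness separated the two classes more easily than the faint signature of a tank.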

Now, one of the many simplifications of this story is that a neural network isn’t really like a brain at all. There’s nothing really like the brain, perhaps because we don’t really understand very much about the brain itself. The field of what used to be called Artificial Intelligence has always been hampered by its attempts to recreate human intelligence, which turns out to be a very slippery thing indeed. But in recent years, there’s been a huge growth in the use of machine learning, and that growth has been driven by the increasing inhumanity of the intelligence we’re developing.

Take the recent and much-celebrated contest between the Korean Go professional Lee Sedol, one of the highest-rated players in the world, and the computer program AlphaGo. In the second of five games, AlphaGo played a move which stunned Sedol and the spectators alike, placing one of its stones on the far side of the board, and seeming to abandon the battle in progress. “That’s a very strange move,” said one commentator. “I thought it was a mistake,” said the other. Fan Hui, the professional Go player who had been the first to lose to the machine six months earlier, said of it: “It’s not a human move. I’ve never seen a human play this move.” And he added: “So beautiful.” In the history of the 2,500-year-old game, nobody had ever played like this. AlphaGo went on to win the game, and the series.

AlphaGo was developed by feeding it millions of moves by expert Go players, and then getting it to play itself millions of times more, developing strategies that outstripped those of human players. But its own representation of those strategies is illegible: we can see the moves it made, but not how it decided to make them. The sophistication of the moves that must have been played in those games between the shards of AlphaGo is beyond imagination, too, but we are unlikely to ever see and appreciate them: there’s no way to quantify sophistication, only winning instinct. (The late and much lamented Iain M Banks, in his Culture novels, called the place where these moves occurred “Infinite Fun Space”, the realm of metamathematical possibility, accessible only to superhuman artificial intelligences.)
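AlphaGo’s actual pipeline (policy and value networks refined by Monte Carlo tree search) is far beyond a sketch, but the shape of the recipe, a player improving by playing against itself, can be shown in miniature. The toy below is an illustrative assumption from top to bottom, with no claim to resemble DeepMind’s machinery: tabular Q-learning teaching itself single-heap Nim through self-play.

```python
# Self-play in miniature: tabular Q-learning on single-heap Nim
# (players alternately remove 1-3 stones; whoever takes the last stone wins).
import random

ACTIONS = (1, 2, 3)   # stones a player may remove on a turn
Q = {}                # Q[(heap, action)] -> estimated value for the mover

def q(heap, a):
    return Q.get((heap, a), 0.0)

def choose(heap, epsilon):
    """Epsilon-greedy move for the player about to act."""
    legal = [a for a in ACTIONS if a <= heap]
    if random.random() < epsilon:
        return random.choice(legal)
    return max(legal, key=lambda a: q(heap, a))

def self_play_episode(start=21, epsilon=0.1, lr=0.2):
    heap = start
    while heap > 0:
        a = choose(heap, epsilon)
        nxt = heap - a
        if nxt == 0:
            target = 1.0    # taking the last stone wins
        else:
            # The opponent moves next, so our value is minus their best value.
            target = -max(q(nxt, b) for b in ACTIONS if b <= nxt)
        Q[(heap, a)] = q(heap, a) + lr * (target - q(heap, a))
        heap = nxt

for _ in range(20000):      # 'play itself millions of times', in miniature
    self_play_episode()

# Unlike AlphaGo's weights, this learned strategy is small enough to read.
for heap in range(1, 10):
    best = max((a for a in ACTIONS if a <= heap), key=lambda a: q(heap, a))
    print(f"heap {heap:2d}: take {best}, leaving {heap - best}")
```

Nim is small enough that the learned strategy stays legible: wherever a winning move exists, it leaves the opponent a multiple of four stones. Scale the state space up to Go’s, and that legibility vanishes.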

It’s been obvious for some time that, in the words of the engineer and investor Marc Andreessen, software is eating the world, in the privileged parts of the Global North at least. And this software is increasingly autonomous and intelligent, with everything from mortgage applications to drone kill lists being decided by trained networks. Machine learning is revolutionising the development of self-driving cars and financial strategies, and making big — if dubious — claims to creative activities such as music composition and language translation. It’s striking, and a little troubling, that this imminent ubiquity is paralleled by an ever-greater opacity. The more powerful our tools become, the less we understand them.

In my work Cloud Index (2016), I set out to explore some of the history and implications of machinic intelligence — that is, the intelligence of machines, and the intelligence of humans who believe they can think like machines. Taking almost 10 years of atmospheric cloud data, and 10 years of polling data, I developed a neural network for predicting the weather based on political events — or perhaps vice versa. By using such forecasts to direct weather modification technologies such as cloud seeding and solar radiation management, we might redirect political will.
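For the curious, the kind of model the work gestures at can be sketched in a few lines. Everything below is a placeholder: random numbers stand in for the polling and cloud datasets, and the small two-layer network is a guess rather than the architecture Cloud Index actually uses.

```python
# A hypothetical stand-in for Cloud Index's model: a tiny network mapping
# weekly polling figures to fractional cloud cover. The data is random noise.
import numpy as np

rng = np.random.default_rng(1)

polls = rng.random((520, 4))   # ~10 years of weekly polls, 4 options (made up)
cloud = rng.random(520)        # ~10 years of weekly cloud cover (made up)

# One hidden layer, trained by gradient descent on squared error.
W1 = rng.standard_normal((4, 16)) * 0.1
b1 = np.zeros(16)
W2 = rng.standard_normal(16) * 0.1
b2 = 0.0

for _ in range(2000):
    h = np.tanh(polls @ W1 + b1)     # hidden activations
    err = (h @ W2 + b2) - cloud      # prediction error
    # Backpropagation through the two layers.
    dh = np.outer(err, W2) * (1 - h ** 2)
    W2 -= 0.1 * (h.T @ err) / len(err)
    b2 -= 0.1 * err.mean()
    W1 -= 0.1 * (polls.T @ dh) / len(err)
    b1 -= 0.1 * dh.mean(axis=0)

# 'Forecast' the sky for a hypothetical poll result.
tomorrow = np.array([0.4, 0.3, 0.2, 0.1])
h = np.tanh(tomorrow @ W1 + b1)
print("predicted cloud cover:", float(h @ W2 + b2))
```

Trained on noise, the network will still hand back a confident number for any poll you give it, which is rather the point: the forecast’s precision says nothing about whether the relationship it has learned is real.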

While the work appears to take at face value the claims of big data and machine learning — that all inputs, suitably quantified, might be processed to predict and thus control future outcomes — the result it points to is an uncertain one. The cloud is cloudy, and no amount of impenetrable computation will render it clear. This is the nature of the world, which our most advanced technologies are not simplifying, but depicting ever more clearly, if only we would admit to the reality of their vision. The world in its totality is not something to be understood and controlled, but to be experienced in all its violent, beautiful complexity; to be struggled and reckoned with, but never to be mastered.

Seen in this light, machine intelligence is not a way of intervening in the world, but of learning from it: a new mode of storytelling and story-making, a new kind of reckoning. In order to live in uncertain times — and all times are uncertain — we need more stories that make sense of how to live.

Perhaps we need more stories like the tank one. Perhaps we need new myths.

Commissioned by Lucy Sollitt on behalf of the British Council; edited by Eleanor Turney.
