Making New Minds That Love Trees

Ian Ingram
Dec 11, 2018


August 25, 2018

Whither the trees?
We pointed a camera into the landscape of arctic Finland — full of lichen-covered rocks and twisted birch trees — and asked an AI to tell us what it saw there. It told us it saw snowmobiles.

There were none. In fact, while the human hand was probably manifest in that landscape in ways we could not perceive at a glance, there were no salient human artifacts in the AI’s view. It was hallucinating. It was hallucinating a landscape full of snowmobiles. Perhaps more strikingly, it didn’t see the trees.

“We” were Theun Karelse, Antti Tenetz, and myself, up at the Kilpisjärvi Biological Research station as part of the Ars Bioarctica artist residency and the Random Forests project that Theun had initiated. The tree-blind “AI” was the Inception Version 3 image classifier that ships with Google’s TensorFlow machine learning framework. It knows about one thousand things out of the 20,000 in the ImageNet database. These range from the banal — a plastic bag — to the unlikely — a pickelhaube — to things whose inclusion is perhaps a tad disturbing — a guillotine.

Inception also knows about a lot of animals: the nudibranch, the eft, the mongoose, and the rhinoceros beetle, to name a few. In fact, it knows 398 kinds of animal; that is, animals comprise just under 40% of the things it has been trained to detect. That is why in recent projects I have been using it in the perception systems of my robots, for which the presence of particular animals is often key. Instead of building my own image classifiers, as I had been doing since the late 2000s, I have been retraining the final layer of Inception’s convolutional neural network to detect the particular animals my robots are interested in. TensorFlow has made this easy.
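The retraining step is conceptually simple: the convolutional stack stays frozen as a feature extractor, and only a new final classification layer is fit to the new categories. The sketch below illustrates that idea in miniature with NumPy rather than the actual TensorFlow retraining script; the 2048-dimensional "bottleneck" features are synthetic stand-ins, and the class names (squirrel, rat, background) are invented for illustration.

```python
import numpy as np

# Schematic sketch of "retraining the final layer": the frozen network's
# bottleneck features are fixed, and we fit only a new softmax layer on
# top of them. Features and class names here are made up for illustration.
rng = np.random.default_rng(0)
n_features, n_classes = 2048, 3
class_names = ["squirrel", "rat", "background"]

# Pretend bottleneck features for 30 training images, 10 per class,
# each class clustered around its own random centroid.
centroids = rng.normal(size=(n_classes, n_features))
X = np.vstack([c + 0.1 * rng.normal(size=(10, n_features)) for c in centroids])
y = np.repeat(np.arange(n_classes), 10)

# The new final layer: weights W and bias b, trained by gradient descent
# on the softmax cross-entropy loss. The "frozen" features never change.
W = np.zeros((n_features, n_classes))
b = np.zeros(n_classes)
for _ in range(200):
    logits = X @ W + b
    logits -= logits.max(axis=1, keepdims=True)          # numerical stability
    probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
    grad = probs.copy()
    grad[np.arange(len(y)), y] -= 1.0                    # d(loss)/d(logits)
    W -= 0.01 * X.T @ grad / len(y)
    b -= 0.01 * grad.mean(axis=0)

def classify(features):
    return class_names[int(np.argmax(features @ W + b))]
```

The point of the design is economy: only `n_features × n_classes` parameters are learned, so a handful of new images per category is often enough.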

Inception even knows over a hundred dog breeds, the breed being a category of animal that very much shows the human hand at work, giving it a certain kinship with the aforementioned snowmobile and making it very useful for my robot that warns squirrels of incoming predators using their own tail-flick alarm signal. I have become used to pointing Inception at some animal, at the beginning of a project, before retraining it, and having it come back with a name that, if not spot-on, certainly showed it was getting the gist: telling me “hamster” when it was looking at a rat, telling me “grouse” when it was looking at a pigeon. But, surprisingly, when Inception looked out onto a landscape full of birches, it did not say “aspen,” or “willow,” or even “oak.” The trees were, to the last one, invisible to it.

From a technical, proximate perspective, this became less surprising when we had Inception spit out a list of the things it did know about and noted that none, indeed, were trees. Taking a few steps back, however, that trees were neglected in this AI’s training still raises the bigger question: how could they — and so many other aspects of the natural world, for that matter — remain so ignored by what is likely one of the most widely disseminated image classifiers in the world? It knows so many animals. It knows so many things that humans might wear, hold, ride in, and sit on: clothing, musical instruments, vehicles, kitchen utensils, furniture.
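That check is easy to reproduce: dump the classifier's label list and scan it for anything tree-like. The snippet below sketches the idea with a tiny, hand-picked subset of labels standing in for the real 1,000-entry ImageNet label file.

```python
# Sketch of the check we ran: scan the classifier's label list for
# anything tree-like. This handful of labels is an illustrative subset
# standing in for the real 1,000-entry ImageNet label file.
labels = [
    "snowmobile", "plastic bag", "pickelhaube", "guillotine",
    "mongoose", "rhinoceros beetle", "lakeside", "volcano", "geyser",
]

TREE_WORDS = ("tree", "birch", "aspen", "willow", "oak", "pine", "spruce")

def tree_like(label):
    return any(word in label.lower() for word in TREE_WORDS)

trees = [label for label in labels if tree_like(label)]
print(trees)  # → [] : no trees among the categories
```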

But no trees.

AIs have been outed as having blind spots before, some verging on closed-mindedness and bigotry. Perhaps the most well-known instance was the Google photo-tagging system (perhaps with a version of Inception at its core?) that labeled dark-skinned people as gorillas. There was also Microsoft’s chatbot, Tay, which was pumped full of data collected from tweets and thus supposed to have learned, like a baby, how to converse naturally by imitating its human interlocutors, and which quickly showed that what the internet was teaching it to say was polemical, divisive, and often prejudiced. The import of these AIs’ affronts to human dignity trumps Inception’s slight to tree dignity, but if our AIs continue to be blind to trees and the many other parts of ecosystems, we will find — as we have already found many times over — that turning a blind eye (or a blind AI) towards the dignity of nature ultimately has consequences for human dignity as well.

A tree versus this tree, a mountain versus Saana

So Theun, Antti, and I set about teaching the AI about trees, particularly the mountain birches that dotted the landscape, and also about the lichens, the mosses, and the other members of plantae and fungi surrounding the biological research center. We also included some representatives from animalia: the reindeer, the swan, and the capercaillie.

A fourth member of our team, Shah Selbe, hadn’t been able to make the trip and was working — like a Houston to our Apollo — back in LA in parallel with us. When we told him the direction we were beginning to take, he began to explore relevant work and discovered that iNaturalist had created an image classifier that used their citizen-scientist-collected dataset to recognize a whole host more animals and plants than Inception.

Turning a critical eye towards iNaturalist’s classifier work (perhaps excessively critical, as their project is mostly to be lauded), we saw that the things it focused on still exhibited a kind of selection bias: the images it has trained on have been collected by people concentrated in particular areas of the world, and in particular regions and ecosystems of those areas, and they showed a preference for the kinds of things in nature humans often attend to. The species represented extend well beyond the charismatic megafauna that are so often foregrounded but very much remain in the space that is salient to the casual human Umwelt, something that was still true of our own project as well.

The training images even show a link to that Umwelt in how they had been composed: not the root of a plant but its flower, not the underside of the flower but the side we turn towards us, not the anus of the fox but its supposedly sly face. Humans tend to frame their photos in consistent ways. This probably suits the purpose of a classifier meant to classify other images taken by humans, but it nonetheless reveals an anthropocentric bias.

The iNaturalist classifier also focuses exclusively on species. What about the things in the landscape, in the world, that don’t fit into that category: processes, geologic structures, symbioses, meteorological phenomena, hydrological systems, and even long- and short-term organizations of those very species, like herds and predatory relationships? We hadn’t gotten to many of those either, but their work made us begin to think that we should.

Inception does in fact know a small set of geographic features: the cliff, the valley, the alp, the volcano, the promontory, the sandbar, the coral reef, the lakeside, the seashore, and the geyser. Looking at the training data for these geographic features, we again saw a bias towards the human perspective, but more importantly, neither Inception nor iNaturalist knew about Saana, the distinctive mountain that looms over Kilpisjärvi. Neither knew about the particular herd of reindeer that we had seen on frequent mornings when we were at the station two years before (and which had yet to make an appearance this time). We began to see this over-generality, this non-specificity to locality, of Inception and of the iNaturalist classifier as what our project should attempt to address.

Our goal became not simply to teach an AI about trees but to teach it about its local trees, and also its local plants, its local animals, its local geography, even about hyper-local things, like Saana, that herd of reindeer, and individual lichens on particular rocks only twenty feet from where the laptop running it chugged away on the new images we provided it. We started to make an AI focused on a very particular locality, intimately entwined in the things in that locality. In this case, that was the specific little piece of arctic Finland in which we were operating, but we saw what we were making as a prototype for a host of AIs spread throughout the globe, each intimately aware of and tied to the landscape in its particular locale.

A Parliament of AIs

Our project began with an AI’s hallucination. The propensity of vision-based AIs to hallucinate objects in their view that are not there clearly presents interesting jumping-off points for thinking about machine Umwelts, and a fair amount of both playful and earnest exploration of that tendency has been done by others. Our project, however, is less about teasing an easily befuddled AI than about leading it gently away from delusion towards a clearer view, one perhaps more beautiful than its fantasies.

As of now our prototype remains a vision machine, gestated from TensorFlow’s Inception V3, but it is clear that a limitation to sight would be a gross constraint. Our ongoing plan is therefore to begin to link in other streams of information about the Kilpisjärvi landscape, particularly data collected through sensors at the observatory and from the scientists’ own observations, which might give the AI awareness of some of the more abstract classes mentioned above.

Inasmuch as the AI remains a classifier, however, it remains squarely in the space of the categorization of “things” that is exactly what Bruno Latour interrogates in “We Have Never Been Modern” and then extends into his proposal for a Parliament of Things. He left the implementation of the parliament up to others, and there certainly remains the question of whether a human can truly be an adequate representative of all the kinds of things in this parliament. Who or what can best vote in the interest of a birch, or, for that matter, of the air around it? Perhaps an AI with a more appropriate Umwelt might do a better job of truly perceiving the thing’s needs and “goals.”

A future version of our locally-aware AIs could thus be the representatives for the things that are ecosystems and their constituent parts, giving them voice, maybe even identifying them as present in the first place, especially for what might easily become under-represented remote localities that humans would be more apt to neglect. The AIs could almost double as census-takers: identifying and counting the things themselves that need representation in the parliament.

Thus, what we are proposing is that our system is a prototype for a system made of a vast number of AIs, each localized to a particular place, a particular ecosystem, each tuned into that ecosystem and its very local inhabitants, its very local ebbs and flows, its very local structures. Each would work on behalf of its local ecosystem so that none are neglected, representing each in a Parliament of AIs that do not merely love trees but love every last grown and non-anthropogenic thing in their ecological district and will fight on their behalf in a distributed way. Together they would form a sort of worldwide, Minsky-esque society of minds (ecologically-focused minds) that would prevent the de facto centralization of ecological decision-making, a centralization that favors the kinds of places and processes in the forefront of human awareness, especially the awareness of humans from developed places. Instead, they would give what we now begin to understand is a richly interconnected global play of systems and subsystems some protection against subjugation by our human systems.

Unschooled

There remains, though, in our project’s trained AIs thus far, a great sensitivity to human choice, to human categorization: a supervised learning algorithm, i.e., one that learns categories or relationships from training material that has been prepared and tagged by people, is very subject to the biases, malicious or benign, of those people. The shorthand for this phenomenon, in which a system underperforms due to deficiencies in its input data, is “Garbage In, Garbage Out.” Garbage — refuse, unwanted material, the discarded byproducts of industry, commerce, and just plain, quotidian modern living — of course plays a center-stage role in the problem of sustainability.

The concept of garbage is also a perfect example of the shortcomings of human bias. We have in the past miscategorized vital elements of ecosystems as garbage, notably clearing fallen trees from forests in the name of husbandry, only later understanding that those rotting trunks play an important role in the cycles of that place.

We are likely making new sorts of such mistakes now and will continue to do so. To allow our envisioned AIs to avoid this particular kind of garbage problem and other versions of Garbage In, Garbage Out, our project’s next step is therefore to break our AI out of the classroom — where its schooling has been prescribed by a curriculum humans designed — and into a world where the categories are not predetermined, where it can continue its education, unsupervised. Perhaps it will chart a new path through the forest of our understanding of forests, one that, like the snowmobiles, we couldn’t see but, unlike them, is actually there.
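A minimal sketch of what “unsupervised” means in practice: in a clustering algorithm like k-means, no human supplies category labels; groupings emerge from the data alone. The two-dimensional “observations” below are synthetic stand-ins for image features, and k-means is just one illustrative choice of unsupervised method, not necessarily the one our project will adopt.

```python
import numpy as np

# Two well-separated blobs of synthetic "observations". Nothing tells
# the algorithm there are categories here, let alone what they mean.
rng = np.random.default_rng(1)
a = rng.normal(loc=(0.0, 0.0), scale=0.3, size=(50, 2))
b = rng.normal(loc=(5.0, 5.0), scale=0.3, size=(50, 2))
points = np.vstack([a, b])

# k-means with k=2: alternately assign each point to its nearest center,
# then move each center to the mean of its assigned points.
k = 2
centers = points[[0, -1]].copy()          # deterministic init for the sketch
for _ in range(10):
    dists = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
    assign = dists.argmin(axis=1)
    centers = np.array([points[assign == j].mean(axis=0) for j in range(k)])
```

After a few iterations the two blobs fall into two clusters, categories the algorithm invented for itself; what they *mean* is then a separate, human (or AI) interpretive question.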

Random Forests, the namesake of the initiative this project is part of, is actually itself a well-known, once-dominant algorithm often used for classification. Its forests are random collections of a different, digital, arboreal entity — the decision tree — digitally grown and pruned to suck up input at its roots and sort it down its branches until a leaf is reached, a leaf which has written on it the category the tree says fits the input.
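For readers unfamiliar with the algorithm, a decision tree and a small forest of them can be sketched in a few lines. Everything here is invented for illustration: the features (`height_m`, `has_needles`), the thresholds, and the category names; a real random forest learns each of its trees from randomized subsamples of data rather than taking them hand-built.

```python
import random
from collections import Counter

# An illustrative decision tree: a nested structure that routes an input
# from the root down its branches until a leaf names a category.
def make_tree(threshold):
    return {
        "feature": "height_m", "threshold": threshold,
        "low":  {"leaf": "shrub"},
        "high": {"feature": "has_needles", "threshold": 0.5,
                 "low":  {"leaf": "birch"},
                 "high": {"leaf": "pine"}},
    }

def classify(tree, sample):
    # Walk from the root to a leaf, branching on one feature at a time.
    while "leaf" not in tree:
        branch = "low" if sample[tree["feature"]] <= tree["threshold"] else "high"
        tree = tree[branch]
    return tree["leaf"]

# A "random forest" is then many such trees, each grown with some
# randomness (here, just a jittered threshold), voting by majority.
random.seed(0)
forest = [make_tree(2.0 + random.uniform(-0.5, 0.5)) for _ in range(25)]

def forest_classify(forest, sample):
    votes = Counter(classify(tree, sample) for tree in forest)
    return votes.most_common(1)[0][0]
```

The ensemble's randomness is the point of the design: individually noisy trees, averaged by vote, generalize better than any single tree.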

There would be a certain poetry if we were using decision trees and random forests, instead of neural networks, to learn about the trees in the original forests, a beautiful symmetry between algorithm and subject. The dendritic shape of neurons, however, is probably morphologically enough like that of a tree to make a decent psychosculptural linkage. In any case, we do hope to focus a lens on the randomness, or lack thereof, of the forests and other ecosystems that are the subject of our inquiry, on where order, entropy, stochastic processes, and emergent pattern each play their role in the web of activity and material that is the resultant ecosystem. That is, let’s let the AI tell us whether the forest is random after all.

George Orwell emphasized the power of language to shape thought and the corollary risk of linguistic restriction’s keeping thought deliberately circumscribed. As our minds increasingly rely on artificial ones to be receptacles and auxiliaries of our individual and collective thinking, remembering, perceiving, and apperceiving, it behooves us to be careful about what we make those new minds perceive and attend to.

The human Umwelt has been expanded by our technology, allowing us to see hidden things in the heavens and in the earth, to know about and use ways of seeing and hearing that before had been the purview of other beings, to peer deep into time, and sometimes to predict the future. Inception’s myopia — or better, its penchant for having apparitions of the artificial — evidences an alarming countervailing trend in some of our recent technology: making us see less, curtailing our expanding Umwelt, circling our senses back inward towards our own categories, our own output, towards the built and made and away from the grown and that which unfolds without us.

We have always found ways of changing the materials in our environment into our kinds of stuff: chunky, amorphous iron ore into prismatic steel beams, black goo oozing from a ragged seep into crystal-clear, radially-symmetric vessels, the flickering flame of oxidation into the precision explosion of the internal combustion engine. But in Kilpisjärvi we were dealing with the perception of a world full of trees before they are planks, rocks before they are gravel, water before it is Evian, and the AI we pointed at it had already mutated it into our things, as if it were not merely making a mistake in its efforts to see the present but instead accurately seeing a future where all those things are indeed gone, everything converted into our kinds of stuff, where that landscape is indeed littered with snowmobiles. And devoid of trees.

--

Ian Ingram

Artist who builds robots that try to live in our stories about animals and to commune and communicate with wild ones || http://www.ianingram.org.