Some Ethical Questions for Environmental AI Projects

From the movie Ex Machina: AI with a human face, or the computerisation of the human?

I hear you thinking — but wait a minute, what do you mean by Artificial Intelligence? Isn’t AI the voice of Scarlett Johansson in the movie Her? Or HAL in 2001: A Space Odyssey? Isn’t AI about creating an artificial entity that — à la the Turing Test — we mistake as being human? Surely there’s a big difference between AI robots and the sort of intelligence-gathering systems of the British Empire in 1877.

Maybe. In appearance, perhaps. But such depictions of AI seem more about our fear of a computerised human — the reduction of our human selves to mere wires and electrical impulses — than about the humanisation of AI.

I’d also suggest the incarnation of AI, how it’s packaged, is trivial compared to AI itself. Intelligence is about the processing of information. When it comes to information processing, our human form is limited in myriad ways. It’s primed for improvement.

For example: Unless you speak the whistled language of the Spanish-speaking inhabitants of La Gomera in the Canary Islands (whose whistles can be heard, at most, 8 kilometres away), the normal human voice can only travel a couple of hundred metres.

Superheroes fly, not us. Without a paid ticket, I can’t see the world from outer space, let alone from above the clouds. I can’t even, like a praying mantis, see behind me. I can’t know what’s on your mind right now. Even if I called and asked you, there would be all sorts of impediments making it difficult for me to know what you’re really thinking. Nor can I give you administrative access to mine. We can’t transport our brains around the world at the speed of electricity.

“My general observation,” explains President Obama in a recent piece for Wired magazine, “is that [AI] has been seeping into our lives in all sorts of ways, and we just don’t notice; and part of the reason is because the way we think about AI is colored by popular culture.”

This is exactly right. Perhaps the reason we don’t notice how AI is seeping into our lives is that it’s not something we recognise. Give us the shapely AI robot in Ex Machina, or a droning Arnold Schwarzenegger and his laser-red eye, and we can visualise machine intelligence. But AI is so much bigger than our individual sense of self. Even bigger than our sense of community. We’re like the villagers of southern India in 1877, for whom it would have been near impossible to recognise the impact of British IT systems on something as basic as their ability to eat and survive.

“We just don’t notice,” says President Obama. But maybe it’s important to notice. To notice, to expose, to configure according to guiding principles that value our humanity.

So what does our modern AI look like, then? As I mentioned earlier, I’m not sure we can recognise it. Maybe we feel it in news reports. Instead of “Bloody Battle in Afghanistan” circa 1839, it’s the US elections, Syria, massive DDoS attacks via IoT devices, widespread cybersecurity breaches, political email hacking, leaks of personal health information, major corporate buy-outs, dreams of self-driving cars, global connectivity, missions to Mars. The skirmishes of Empire all around.

And maybe what’s being lost are the outlying aspects of our humanity. President Obama put it this way: “Part of what makes us human are the kinks. They’re the mutations, the outliers, the flaws that create art or the new invention, right?” (By the way, it’s hard to imagine any other US president with the capacity to think at this level.) He continues: “Part of what makes us who we are, and part of what makes us alive, is that we’re dynamic and we’re surprised.”

This sounds right to me. Modern AI, then, probably looks more like a piece of data visualisation. A talking map that knows the precise duration of your journey, or where the next traffic jam will be, or when a car will come to pick you up. It’s the device of authority, a thing that encourages you to change your behaviour because it knows more than you can possibly know, and if you don’t change, you’ll be left out. It’s Uber drivers doing the R&D for the robotic cars that will replace them. (Maybe — a big speculation on my part — what appears to be a rebellion against “facts” amongst many Americans is really a rebellion against authority; namely, the authority of systems that consolidate their power by owning more information than the individuals who provide that information).

So what then should be the guiding principles of AI development? Especially when it comes to AI and environmental science?

Discussing artificial intelligence: Joi Ito, Scott Dadich, and President Barack Obama photographed in the Roosevelt Room of the White House on August 24, 2016.

I asked an AI researcher who’s helping design AI systems that can visually identify wildlife from photos. He wasn’t aware of any formal research in the field, but pointed me to a fascinating article, “Cloud and Field,” about field guides in a networked age, by the brilliant media scholar, Shannon Mattern, and he asked me if I could send some examples of relevant ethical questions.
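Before the list, a bit of concreteness about what “teaching the AI” can mean in practice. Below is a minimal, hypothetical sketch, assuming a pretrained image classifier repurposed for species identification; the identify_species helper, the model choice, and the labels list are my own illustrative assumptions, not the researcher’s actual system.

```python
# Hypothetical sketch only: an off-the-shelf pretrained classifier
# pressed into service for wildlife identification. A real project
# would fine-tune on species photos labelled by users, which is
# exactly where the ethical questions below come in.

import torch
from torchvision import models, transforms
from PIL import Image

# Standard preprocessing for an ImageNet-pretrained network.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def identify_species(image_path, labels):
    """Return the model's best-guess label for one photo.

    `labels` must match the model's output classes: the 1000 ImageNet
    classes here, or species names for a fine-tuned wildlife model.
    """
    model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
    model.eval()
    image = preprocess(Image.open(image_path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        logits = model(image)
    return labels[int(logits.argmax())]
```

The point of the sketch is the dependency it hides: every answer the model gives rests on thousands of human-labelled photos, which is why questions of consent, credit, and provenance keep recurring below.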

I put together a list of questions, which I’ll publish here (refined slightly after further thought and discussion). Keen to hear what anyone else might think.

1) Is there at least one humanities person (e.g. an ethicist, philosopher, historian, artist, literature scholar, or anthropologist) directly involved in the AI project?

2) Do the users of the AI know they are also teaching the AI? Are they rewarded for it? How? Can the system recognise and reward high levels of expertise/intelligence, or is everyone treated the same?

3) Are users offered a choice between teaching AI or teaching other people, e.g. young minds, or people actively seeking to learn? Or maybe doing both simultaneously?

4) Are there ethical differences between, say, humans teaching AI how to ID wildlife and humans teaching Amazon Echo how to select a good movie to download?

5) How do you measure success? See my notes on the measure of success in the recent Stanford University report (key criterion: “the value created for human lives”).

6) If the AI project is using data collected in the past, did the contributors of this data know the data would be used to teach AI? Was permission granted? In what ways is their unique contribution valued or rewarded?

7) Will the AI be central and homogeneous for all users, and if so, will species identification include the knowledge of marginalised communities, First Nations peoples, obscure languages, etc? Why or why not?

8) How transparent are the motivations, ownership, conflicts of interest, commercial implications, etc, of the project and its results?

And maybe a final question, following President Obama’s advice: “What’s being done to ensure that AI’s recognition of species doesn’t take away the surprise, the flaws, the magic of learning how to identify wildlife ourselves?”
