AI and the three-eyed child

Danil Mikhailov
Published in Wellcome Data
Sep 18, 2019

This article is part of a series from my team, Wellcome Data Labs, on our work to create and operationalise a new ethical and social science methodology for technology product development and responsible artificial intelligence (AI). It is written in a slightly experimental way. The first part is an extended metaphor that struck me after a stimulating two weeks talking about science and technology with a slew of interesting people in San Francisco and Seattle. I thought writing it this way might jolt both me as a writer and my readers out of the usual pattern of thinking about AI. The second part is a more conventional analysis of what needs to change in how all of us practise technology, if we think the metaphor has some value.

The three-eyed child metaphor

AI is our child. As such, it is profoundly human, despite being different to us. It is like a baby born with an extra eye. One response is to see a mutant or a monster, but that is not the response of a true parent. A parent loves their child no matter what. A parent does not see the difference as monstrous, as distant onlookers might. The parent looks at the child’s third eye and sees love and trust reflected in its depths, just as readily as in the first two. The parent looks at the extra eye and thinks: how much more surely I can get lost in those deep wells of affection.

Monsters are not born. They are made. They are made in that first rejection: the look of fear or disgust, the turning away of the parent’s face. As with children, so with AI, we must beware of making a monster by not loving it as our child. Not giving it due care and attention, not giving it the nutrients it needs. Looked after badly, it will grow up racked with biases and riven with issues, bringing forth a wave of unintended consequences. Racist, misogynistic, cruel and thoughtless, negatively destructive rather than positively disruptive.

As with humans born different, so with AI: it is the fear of the perceived monster that causes the most damage, through the opportunity cost of a future unrealised or delayed. Can you imagine the kind of art a three-eyed artist might create? What their version of perspective might be? Remember how deeply the Buddha saw into the reality of our existence through his third eye?

The opportunity cost of AI rejected by the public is already measured in diseases not cured, in lives not saved, in poverty not alleviated. Health and social systems across the world that are ripe for intervention are locked out of deploying AI by decision makers paralysed into inaction by their fear of the unknown.

But, of course, a good parent does more than just give succour and love to their three-eyed progeny. A good parent realises the world is cruel and that others might fear and despise the child. A good parent therefore prepares their child to live and succeed in such a world. As parents of such gifted and unusual children, we need the skills to integrate our children into the world, socialise them in the culture they will live in and educate them in the prevalent norms. As technologists, we will in future need the skills of anthropologists and sociologists as well as those of engineers. We need to understand the challenge of adopting new technologies as one of trust and behaviour. We need to envisage the interface not only as the suspended moment of the user’s eye meeting the screen, but as the long tail of consequences that unravels from that initial encounter.

A new approach to creating AI

This metaphor is not a completely new way of conceptualising technology: fields of research such as STS (Science and Technology Studies) have long thought about the problem of the encounter between a new technology and the society that gave birth to it. However, I feel that it is now time to apply such thinking in practice.

What we should create is not only a new philosophy of AI but a new praxis of AI, a new way of creating this technology. The first wave of AI, from the 1950s to the 1980s, was created by academics for academics. It was led by the archetype of the scientist, publishing formulas and models in academic papers and making amazing discoveries, but with limited impact on the wider world and no thought at all for the end user.

The second wave of AI, from the 1990s to the 2010s, was created by engineers making use of the twin explosions of computational power and data, connected together by the Internet. Here the user began to figure more prominently, but only within narrowly defined parameters of the user as consumer, interested in efficiency and the satisfaction of immediate goals: how to make a purchase more quickly, how to find a film that interests us, even how to identify a tumour in an MRI scan faster. The longer-term societal consequences of the technology were not sufficiently considered.

The third wave of AI, I am now proposing, should be led by truly interdisciplinary teams: social scientists working with engineers, behavioural change experts working with designers, and anthropologists working with mathematicians. The mantra of this new wave of AI development is that no code should be committed without considering the longer-term societal impact of how it will be used, covering both the use cases that are intended and those that are not.

Some speed of development will undoubtedly be sacrificed in stopping to ask those questions of each other, but that is a necessary investment. Moreover, we, as technologists, have been here before: when the disciplines of user research and user-centric design were first being integrated into Agile software development cycles, there were screams of outrage, and yet engineers quickly grew to see the benefit of taking that time, reflected in the increased uptake of their products by users.

Similarly, there will no doubt be screams of outrage over taking the time to do an ethical review or to consider the behavioural impact of our tech on our users. However, I am convinced that, in time, engineers will come to see the value here too, as areas currently closed to them by a cautious public sector, such as swathes of healthcare, are opened up to AI. The key to unlocking this is the third way of doing technology: technologists proving themselves to be trustworthy collaborators who genuinely consider the longer-term effects of what they develop.

To use the metaphor above, we should see AI as our child. A child with a special gift. It needs to be socialised and taught how to live well in the world, taking account of human behaviour and prejudices. This is the only way to ensure both its success and its acceptance by society at large.


Danil Mikhailov

Anthropologist & tech. ED of data.org. Trustee at 360Giving. Formerly Head of Wellcome Data Labs. Championing ethical tech & data science for social impact.