How to Avoid the Extremes of Anthropocentrism and AI-centrism?

AnandSRao
Published in The Startup
4 min read · Feb 14, 2021
Source: Photos by Erik Mclean and Markus Spiske on Unsplash. Morphing created using 3Dthis

‘It will change everything’: DeepMind’s AI makes gigantic leap in solving protein structures, declared the well-known journal Nature on 30 November 2020. Shortly after, Business Insider reported that “DeepMind faces criticism from scientists skeptical of ‘breakthrough’” and that one professor at the University of California branded DeepMind’s announcement ‘laughable’. Unfortunately, this is not the first time that proponents and opponents of AI have debated the latest achievement by an AI algorithm.

Let’s first consider the viewpoint of the AI proponents. At the core of all these AI achievements is a narrow, well-specified problem, with the criteria for success laid out by a group of experts. Typically, a number of groups, often interdisciplinary, work on these problems to develop better solutions, with competing approaches and techniques vying for the top spot. The so-called ‘protein folding’ problem and DeepMind’s solution, AlphaFold, fall squarely within this archetype. The Critical Assessment of protein Structure Prediction (CASP) is a community that allows research groups to objectively test their structure prediction methods and performs an independent assessment of the state of the art in the field. These assessments have been held every two years since 1994. In November 2020, a team from Google’s DeepMind used an algorithm called AlphaFold 2 to achieve a level of accuracy much higher than any other group’s: it scored more than 90 (where 100 is a perfect match) for around two-thirds of the proteins in CASP’s dataset. It is this result that has once again stirred controversy.
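For readers curious what a score of “more than 90” actually measures: CASP’s headline metric, GDT_TS, averages the fraction of a protein’s residues that land within 1, 2, 4, and 8 Å of their positions in the experimentally determined structure. Here is a minimal sketch of that idea, assuming per-residue distances have already been computed from a single superposition of the predicted and experimental structures (the real CASP metric searches over many superpositions to find the most favorable one):

```python
def gdt_ts(distances):
    """Simplified GDT_TS-style score.

    `distances` is a list of per-residue distances (in angstroms) between
    a predicted structure and the experimental structure after superposition.
    The score averages, over four cutoffs, the fraction of residues placed
    within that cutoff, and scales the result to 0-100.
    """
    thresholds = (1.0, 2.0, 4.0, 8.0)
    n = len(distances)
    # Fraction of residues within each distance cutoff.
    fractions = [sum(d <= t for d in distances) / n for t in thresholds]
    # Average the four fractions and express as a percentage.
    return 100 * sum(fractions) / len(thresholds)


# Example: one residue within 1 A, two within 2 A, three within 4 A and 8 A.
print(gdt_ts([0.5, 1.5, 3.0, 9.0]))  # → 56.25
```

A perfect prediction (every residue within 1 Å) scores 100; AlphaFold 2 exceeded 90 for roughly two-thirds of the targets, which is why the result was described as comparable to experimental accuracy.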

The word technocentrism denotes the fallacy of using technology to answer all questions, and a value system centered on technology and its ability to control and solve all known problems. Applying technocentrism to AI, we get AI-centrism: the absolute and blind faith that AI will solve all problems. While the achievement of the AI might be specific and often narrow (compared to the breadth and depth of human activities), the marketing machinery takes over and declares that AI is making a “gigantic leap” and solving “biology’s greatest challenge”. The hype overshadows the real achievement, and a blind faith ensues that AI is solving all our problems and, in some cases, ‘taking over the world’. So we move to a world of ‘robo deus’ (robots or AI as god). Ironically, none of the great achievements of AI are achievements of the algorithms alone; they are achievements of humans and machines together over earlier methods and solutions that were largely manual and less complex. The complexity of the problem domain (protein folding) and of the techniques used (deep learning) is out of reach of most people, resulting in ‘techno worship’: the black box is too complex and sophisticated to be understood by ordinary mortals.

The other end of the spectrum is anthropocentrism: the value system centered on humans and the belief that we are the most important entity in the universe. The anthropocentrists often find it challenging to contest the core achievement of the AI, so instead they respond to the hype propagated by the AI-centrists. They demand higher performance standards than were ever required of humans (i.e., constantly moving the goalposts), or they expand the scope of the problem (e.g., AlphaFold says nothing about the mechanism of folding, only the correlation between sequence and structure). Interestingly, their attack is generally aimed at the ‘hype’ around the AI technology rather than at the technology itself. When they do attack the core technology, the criticism usually centers on the inability of the algorithm to explain itself, or on its being a ‘black box’.

For us to move forward as a society, we need to take three key steps. First, one of our fundamental ethical principles must be to eschew both of these extremes, AI-centrism and anthropocentrism. The human-centered AI principle may be closer to anthropocentrism and needs to be clarified with respect to the issues discussed above. Second, we should highlight human-machine collaboration, as opposed to human-machine competition (which often plays to the fears of mankind), in our principles, thought leadership, and press releases. Third, we should create multi-disciplinary teams within organizations and professional bodies by bringing together domain experts, AI scientists, and a broader group of sociologists, ethicists, and philosophers.

There is a well-known quote (unfortunately, its author is unknown) that states:

“if you want to be incrementally better, be competitive; if you want to be exponentially better, be collaborative”.

Let’s all strive to be exponentially better by fostering collaboration between humans and AI; let’s avoid being either AI-centric or anthropocentric in our ethical values.


Global AI lead for PwC; researching, building, and advising clients on AI. Focused on the intersection of AI innovation, policy, economics, and application.