Artificial intelligence needs human ingenuity and morality

Making sense of buzzwords from basics to ethics

Fiamma Panerai
8 min read · Jun 5, 2018

Because AI is hard to define unless it's applied to a real-world problem, much of the internet hype consists of articles that simply share views on the matter, from utopian visions to grim predictions.

But why is the public perception of AI so nebulous? Firstly, there's no officially agreed definition. Secondly, there's the legacy of science fiction. Thirdly, AI is really hard to understand if you're not a data scientist.

Artificial intelligence is a collection of advanced technologies that requires autonomy (the ability to perform tasks in complex environments without constant guidance by a user) and adaptivity (the ability to improve performance by learning from experience).

It allows machines to sense (perceive and process information), comprehend (understand by recognizing patterns), act (take actions based on that understanding) and learn (optimize their own performance based on the success or failure of those actions).

Therefore, we could say it's about autonomous and adaptive systems that learn behavioural patterns from past experience and act accordingly, without a human having to account for every change in external circumstances.
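To make the sense-comprehend-act-learn cycle concrete, here is a deliberately toy sketch in Python: a thermostat-like agent (entirely hypothetical, not tied to any product mentioned in this article) that perceives a temperature, decides whether it is "too cold" according to its internal model, acts, and then nudges that model based on feedback.

```python
# Toy illustration of the sense / comprehend / act / learn cycle.
# All names and numbers here are hypothetical.
import random

class AdaptiveAgent:
    def __init__(self):
        self.threshold = 20.0   # internal model the agent will adapt

    def sense(self, environment):
        # perceive and process a raw signal from the environment
        return environment["temperature"]

    def comprehend(self, reading):
        # recognise a pattern: is it "too cold" under the current model?
        return reading < self.threshold

    def act(self, too_cold):
        # take an action based on that understanding
        return "heat_on" if too_cold else "heat_off"

    def learn(self, feedback):
        # adapt the internal model from success/failure feedback (+1 / -1)
        self.threshold += 0.1 * feedback

agent = AdaptiveAgent()
for _ in range(5):
    env = {"temperature": random.uniform(15, 25)}
    action = agent.act(agent.comprehend(agent.sense(env)))
    feedback = random.choice([1, -1])   # stand-in for user satisfaction
    agent.learn(feedback)
    print(round(env["temperature"], 1), action, round(agent.threshold, 2))
```

The point of the sketch is only the loop itself: perception feeds understanding, understanding drives action, and feedback adjusts the model over time.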

But how does it work? Computer science loosely replicates the cognitive structure of the human brain.

A neural network, whether biological or artificial, consists of a large number of simple units (neurons) that receive and transmit signals to each other. The neurons are very simple processors of information, consisting of a cell body and wires that connect the neurons to each other via connectors (synapses), through inputs (dendrites) and outputs (axons).

Real, biological neurons communicate by sending out sharp electrical pulses (spikes), so that at any given time their outgoing signal is either on or off (1 or 0). Artificial units (nodes) mimic this network of neurons in a biological brain: each node receives an input, changes its internal state, and produces an output accordingly. That output then forms the input for other nodes, and so on. This complex arrangement enables a very powerful form of computing called deep learning, which uses multiple layers of such units as filters to learn the significant features of a much larger data set.
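To ground this, the snippet below is a minimal NumPy sketch (my own illustration, not code from any of the sources cited here) of how artificial nodes are typically arranged into layers: each node computes a weighted sum of its inputs, like synapse strengths, applies a simple activation, and its output becomes the input of the next layer.

```python
# A tiny two-layer neural network forward pass in NumPy.
import numpy as np

def layer(x, weights, bias):
    # one layer of nodes: weighted sum of inputs + non-linear activation
    return np.maximum(0.0, x @ weights + bias)   # ReLU activation

rng = np.random.default_rng(0)
x = rng.normal(size=(1, 4))                      # input signals (4 features)

w1, b1 = rng.normal(size=(4, 8)), np.zeros(8)    # hidden layer: 8 nodes
w2, b2 = rng.normal(size=(8, 1)), np.zeros(1)    # output layer: 1 node

hidden = layer(x, w1, b1)                        # outputs of the first layer...
output = layer(hidden, w2, b2)                   # ...become inputs to the next
print(output)
```

Deep learning stacks many more such layers and learns the weights from data rather than drawing them at random, but the basic flow of signals is the same.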

The implications of the exponential acceleration of technology

Modern technologies are bypassing grand questions about the meaning of intelligence, the mind, and consciousness, and focusing on building practically useful solutions to real-world problems that involve handling uncertainty.

Take Alibaba as an example. The giant is already using combined technologies to power the future of business through IoT, robotics, 3D printing, nanotechnologies and artificial intelligence, enabling services such as Ant Financial (mobile online payment platform); Alibaba Cloud (public cloud service); Cainiao (logistics branch); alimama.com (an online marketing and trading platform); image and voice search; and AliMe (customer service chatbot).

It's only a matter of time before the combination of modern technologies both creates and destroys business models and workforce capabilities.

Transforming businesses

Digital transformation, as a matter of fact, is not only about technological capabilities but also about business strategy, culture, resources and capabilities. It should be tackled from different angles: digitalising the organisation, its processes and systems, product development and the customer experience.

There are many business strategists and consulting firms (McKinsey, BCG, Accenture, IBM, Cognizant…) that recommend different digital transformation frameworks to business leaders for implementing new technologies.

Regarding AI specifically, Accenture recommends implementing Responsible AI to mitigate the risks through four imperatives: creating the right governance framework; creating trust from the outset by accounting for privacy, transparency, and security; auditing performance against a set of key metrics, including algorithmic accountability, bias, and security metrics; and finally, democratizing the understanding of the new technologies involved in order to break down barriers.
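As a concrete, if simplified, illustration of the auditing imperative, the sketch below (hypothetical data, my own example rather than Accenture's) compares a model's accuracy across two demographic groups, one of the most basic bias checks such an audit might include.

```python
# Comparing a model's accuracy per demographic group.
# Predictions, actual outcomes and group labels are made up.
from collections import defaultdict

predictions = [1, 0, 1, 1, 0, 1, 0, 0]
actuals     = [1, 0, 0, 1, 0, 1, 1, 0]
groups      = ["A", "A", "A", "A", "B", "B", "B", "B"]

correct = defaultdict(int)
total = defaultdict(int)
for pred, actual, group in zip(predictions, actuals, groups):
    total[group] += 1
    correct[group] += int(pred == actual)

for group in sorted(total):
    accuracy = correct[group] / total[group]
    print(f"group {group}: accuracy {accuracy:.2f}")

# A large gap between groups is a red flag for bias that a governance
# framework would require investigating before deployment.
```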

AI is already everywhere. It's in banks, in cameras on the streets and even in social media. It can make a medical diagnosis, compose music or play chess. We use AI-based applications every day, in many areas of our lives, from healthcare and security to customer service and shopping. AI applications are, however, a rich and diverse field. The greater value will come from understanding the multitude of related technologies, and then integrating those technologies into full solutions that can cope with both work and data complexity.

Transforming workforce

In Human + Machine, Accenture leaders Paul Daugherty and Jim Wilson show that the essence of the AI paradigm shift is the transformation not only of all business processes within an organization but also of the collaboration between humans and machines.

As already shared in my previous article, the authors claim there’s a Missing Middle in the workforce that needs to be developed to unleash business value.

Developing the workforce of the future requires reimagining the work (the tasks and skills needed), pivoting the workforce to more strategic value-added activities, and rescaling the workforce to change behaviours.

In fact, a new report from the McKinsey Global Institute has highlighted that we'll all need to develop higher cognitive, emotional intelligence and technological skills if we don't want to be left behind by AI.

In 2020 AI will create 2.3 million jobs, whilst eliminating 1.8 million, making 2020 a pivotal year in AI-related employment dynamics — Gartner

The present and the future lie in human-machine collaboration, overcoming the fear of machines replacing humans. The idea that a superintelligent, conscious (general or "strong") AI will surpass human intelligence is not impossible, but it is quite far from being a reality at the moment.

Firstly, AI needs humans. AI methods are automated reasoning based on the combination of perfectly understandable principles and plenty of input data, both of which are provided by humans or by systems deployed by humans.

Secondly, AI has its own limitations. The idea of an exponential increase in super-intelligence is both feared (Elon Musk, …) and claimed to be unrealistic by many. Even if a singularity existed, a system that optimises and rewires itself so that it can improve its own intelligence at an ever-accelerating, exponential rate without needing supervised, unsupervised or reinforcement learning, it would keep facing harder and harder problems that would slow down its progress.
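A toy calculation can make this argument tangible. The numbers below are arbitrary assumptions of mine, not a prediction: even if a system improves itself by 50% per generation, its relative progress shrinks whenever the difficulty of the problems it still faces grows faster.

```python
# Toy model: self-improvement vs. growing problem difficulty.
# Growth rates are arbitrary and chosen purely for illustration.
capability = 1.0
difficulty = 1.0
for generation in range(1, 11):
    capability *= 1.5          # assume 50% self-improvement per generation
    difficulty *= 2.0          # assume each new problem is twice as hard
    progress = capability / difficulty
    print(f"gen {generation}: relative progress {progress:.3f}")

# Under these assumptions the system keeps improving in absolute terms,
# yet falls further behind the problems it still needs to solve.
```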

Thirdly, AI fails to crack the creative nut. While computers can recombine things in novel ways and use known patterns to construct well-formed cultural outputs, they lack the human intuition or judgement about what feels right or best or interesting, fatally hobbling any efforts to produce disruptive or creative outcomes.

Neuroscientist Karl Pfenninger theorises that there is a hierarchy of nervous system functions that all humans possess, which runs, in ascending order of evolved complexity and sophistication: autonomous control (control of vegetative functions), instinct (inherited behaviour, information storage in genome only), memory (learned behaviour, information storage outside genome), language (information exchange within species), intelligence (learned adaptation, understanding of contexts) and creativity (vision of novel contexts). For Pfenninger and many others, creativity sits on top of the pile because it requires an extra leap beyond observable or available facts or knowledge and the reasoned ability to process them. By this logic, even highly intelligent and learned minds may not be creative at all — and one look at the real world provides ample evidence of this.

The Future of AI within a responsible framework

If we don't yet agree on a single definition of AI, we'll definitely find very different views on the future of AI. Predicting the future is hard, but at least we can consider AI's past and present, and by understanding them, hopefully we'll be better prepared for the future, whatever we make it turn out to be.

For this, it's urgent that leaders, governance bodies, and companies work towards evaluating risks with regard to trust, liability, security and control, and towards building a framework, based on expert input, for the thorny ethical and legal issues surrounding new technologies.

Following early-stage initiatives such as OpenAI, the movement to maximize AI's benefits for humanity and limit its risks has already begun: The World Economic Forum's Center for the Fourth Industrial Revolution, the IEEE, AI Now, The Partnership on AI, AI for Good, and DeepMind, among other groups, have all released sets of principles that align on: designing AI with an eye to societal impact; testing AI extensively before release; using AI transparently; monitoring AI rigorously after release; fostering workforce training and retraining; protecting data privacy; defining standards for the provenance, use, and securing of data sets; and finally, establishing tools and standards for auditing algorithms.

The time is now to embrace the business courage to understand and explain new technological capabilities and, most importantly, to design, build, and deploy them with control, accountability, transparency and integrity.

“Nothing in life is to be feared, it is only to be understood. Now is the time to understand more, so that we may fear less”. — Marie Curie

Other sources:

VivaTech 2018. Panel discussion on the future of AI.

Google reportedly won’t build AI weapons after Project Maven controversy.

Simon Andersson (GoodAI). Unsolved Problems in AI.

DARPA Perspective on AI.

Elements of AI. University of Helsinki.

Jack Gallant. Human brain mapping and brain decoding.

McKinsey Global Institute. Skill Shift: Automation and the Future of the Workforce.

Cognizant. Get Ready for the Next 40 Months of Hyper-Digital Transformation.

World Economic Forum. Digital Transformation Initiative in collaboration with Accenture.

World Economic Forum. 3 key skill sets workers will need to learn by 2030.

Dr. Michael Bloomfield. Why Creativity Is Now More Important Than Intelligence.

Prof Erik Brynjolfsson. The second wave of the second machine age.

Max Tegmark. Life 3.0.

Paul Daugherty & James Wilson. Human + Machine: Reimagining Work in the Age of AI.

Jason Silva. 3 Exponential Techs to Watch.

Nick Bostrom. What happens when our computers get smarter than we are?

Michio Kaku. The Future of the Mind.

European Commission statement on Artificial Intelligence, robotics and Autonomous systems.

AI for Good Summit.

Accenture Tech Vision 2018.

IBM Watson. The new AI innovation equation.

Accenture. Responsible AI and robotics, an ethical framework.

PwC. Responsible AI; PwC 2018 AI Predictions.

World Economic Forum. Centre for the Fourth Industrial Revolution.

IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems.

AI Now at New York University.

PwC. Fourth Industrial Revolution for the Earth.

Rumman Chowdhury. Is Explainability Enough? Why We Need Understandable AI.
