How could Artificial Intelligence be a threat to humanity?
A.I. brings impressive applications, with remarkable benefits for all of us; but it also raises notable unanswered questions with social, political and ethical dimensions.
The main concerns focus on technological unemployment, while equally important issues arise around the potential applications of AI and around access to data and to the outputs of AI models. Most of the ‘dystopia scenarios’ are inspired by the following:
Lethal Autonomous Weapons
The concept of an autonomous, smart machine is impressive. Think for a moment of an autonomous car, which can capture its environment and its dynamics and make real-time decisions in order to serve a predefined objective in the best possible way: to move from point A to point B.
In a military context this autonomy in decision-making becomes scary: the so-called Lethal Autonomous Weapons are advanced robotic systems of the future that will be capable of hitting targets without human intervention or approval.
But who will control the design, operation and target assignment of these killer robots? How will such a robot be able to understand the nuances of a complex situation and make life-threatening decisions?
Integrity and unbiased systems
AI systems learn by analyzing huge volumes of data and they keep adapting through continuous modelling of interaction data and user-feedback. How can we ensure that the initial training of the AI algorithms is unbiased? What if a company introduces bias via the training data set (intentionally or not) in favor of particular classes of customers or users?
For instance, what if the algorithm responsible for identifying talented candidates in the market has inherited known or unknown biases, leading to diversity and equal-opportunity issues?
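The talent-screening scenario above can be illustrated with a deliberately simplistic sketch. The model, data and group labels here are hypothetical: a naive screening rule trained on biased historical hiring decisions simply reproduces that bias.

```python
# Toy illustration (hypothetical data): a naive 'talent screening' rule
# trained on biased historical decisions reproduces the bias it inherited.
from collections import defaultdict

def train(history):
    """Learn the historical hire rate per candidate group from past decisions."""
    hired, seen = defaultdict(int), defaultdict(int)
    for group, was_hired in history:
        seen[group] += 1
        hired[group] += was_hired
    return {g: hired[g] / seen[g] for g in seen}

def screen(model, group, threshold=0.5):
    """Recommend a candidate only if their group's historical hire rate clears the bar."""
    return model.get(group, 0.0) >= threshold

# Hypothetical history where group B was rarely hired -- intentionally or not.
history = [("A", 1)] * 8 + [("A", 0)] * 2 + [("B", 1)] * 2 + [("B", 0)] * 8
model = train(history)

print(screen(model, "A"))  # True  -- group A candidates pass the screen
print(screen(model, "B"))  # False -- group B candidates are filtered out
```

Nothing in the code mentions a protected attribute explicitly; the bias arrives silently through the training data, which is exactly why transparency about the data and the decision process matters.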
We must ensure that these systems are transparent about their decision-making processes. This will allow troubleshooting of particular cases while supporting general understanding and acceptance by the wider public.
Access to data, knowledge, technology
In our interconnected world, a small number of companies are collecting vast amounts of data about each one of us: access to this consolidated data would allow an accurate replay of our day-to-day life in terms of activities, interactions and explicitly stated or implicitly identified interests; somebody (or something) could know our mobility history and patterns, our online search and social media activity, our chats, emails and other online micro-behaviors and interactions.
An AI system consuming this data could accurately understand any online user in terms of interests, daily habits and future needs; it could derive impressive estimations and predictions, ranging from purchasing interests to the user’s emotional state.
If you think of this AI output at scale — analyzing data at the population level — these predictions and insights could describe the synthesis, state and dynamics of an entire population. This would obviously provide extreme power to those controlling such systems over this wealth of accumulated data.
The right to privacy is under threat, obviously when you consider the possibility of unauthorized access to one’s online activity data. But even for an offline user, somebody who has deliberately decided to stay ‘disconnected’, the right to privacy is still under threat. Imagine this disconnected user (with no smartphone or other device aware of their location) moving through a ‘smart city’.
A ‘random walk’ through a couple of major streets of a futuristic smart city would be enough for the ‘network of cameras’ to capture their trail and possibly perform identification via reliable facial recognition against a centralized data store.
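The matching step in this scenario can be sketched in a few lines. The identity store, embedding vectors and threshold below are hypothetical stand-ins for what a real face-recognition pipeline would produce: a captured face is reduced to a feature vector and compared, by similarity, against a centralized store of known identities.

```python
# Minimal sketch (hypothetical data): matching a face embedding captured by a
# street camera against a centralized store of known identities.
import math

def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Hypothetical store: identity -> embedding produced by some face model.
store = {
    "person_001": [0.9, 0.1, 0.3],
    "person_002": [0.2, 0.8, 0.5],
}

def identify(captured, store, threshold=0.95):
    """Return the closest stored identity if its similarity clears the threshold."""
    best_id, best_sim = None, -1.0
    for identity, vec in store.items():
        sim = cosine(captured, vec)
        if sim > best_sim:
            best_id, best_sim = identity, sim
    return best_id if best_sim >= threshold else None

print(identify([0.88, 0.12, 0.31], store))  # person_001
```

The point of the sketch is how little is needed once the centralized store exists: one camera frame plus one similarity query turns an anonymous passer-by into a named record.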
There are obvious big questions on who has access to this information and under what conditions.
Security against unauthorized access
Security is a critical aspect: if somebody compromises a smart system, for instance an autonomous car, the consequences can be disastrous. Protecting intelligent, connected systems against unauthorized access is a major priority.
Technological unemployment
This is the unemployment ‘explained’ by the introduction of new technologies: the jobs replaced by intelligent machines or systems. In the years to come, we will witness significant changes in the workforce and the markets: roles and jobs will become obsolete, industries will be radically transformed, and employment models and relationships will be redefined.
At the same time, technology will drive the formation of new roles, positions and even scientific specializations, while allowing people to free up time from monotonous, low-value work, hopefully in favor of more creative activities.
Ethics, Responsibility and difficult decisions
AI automates processes and can make critical decisions in real time. Although in most cases the right decision is objectively determined and generally accepted, there are several examples that raise ethical and moral issues.
For instance, an autonomous car which knows that it is about to hit a pedestrian must decide whether it will try to avoid the pedestrian via a maneuver that is risky to its own passengers. And this needs to be decided in milliseconds.
The logic behind these edge-case decisions must be predefined, well understood and accepted; at the same time, the detailed history of activity and decision-making of the autonomous car must be accessible and available for analysis, under certain data protection rules.
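The two requirements above, a predefined rule and an auditable history, can be sketched together. The rule, risk scores and log format here are hypothetical, not a real driving policy; the sketch only shows what "predefined and logged" means in code.

```python
# Sketch (hypothetical rule and sensor values): an edge-case decision that is
# both predefined (a fixed, reviewable rule) and auditable (every call logged).
import time

DECISION_LOG = []  # the reviewable history of decisions

def decide(pedestrian_risk, passenger_risk_of_maneuver):
    """Apply a predefined rule and record the inputs and outcome."""
    # Hypothetical predefined rule: maneuver only if it lowers overall risk.
    if passenger_risk_of_maneuver < pedestrian_risk:
        action = "maneuver"
    else:
        action = "brake_straight"
    DECISION_LOG.append({
        "timestamp": time.time(),
        "inputs": {"pedestrian_risk": pedestrian_risk,
                   "passenger_risk_of_maneuver": passenger_risk_of_maneuver},
        "action": action,
    })
    return action

print(decide(0.9, 0.2))  # maneuver      -- avoiding the pedestrian is safer overall
print(decide(0.4, 0.7))  # brake_straight -- the maneuver would be riskier
```

Because the rule is fixed code and every decision lands in the log with its inputs, the behavior can be debated and approved before deployment and reconstructed after an incident, which is precisely what the text asks for.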
Disproportional power and control over data
Technology giants are investing heavily in artificial intelligence, both at the scientific/engineering level and at the commercial and product-development level.
These big players have an unmatched advantage when compared to any ambitious competitor out there: the massive data sets describing a wide range of human activity (searches, communication, content creation, social interaction and more), in many different formats (text, images, audio, video).
As these companies try to establish a leading position in this new, still-forming, AI-driven market, they acquire any tech/AI startup that manages to present promising technological innovation.
This way, tech giants not only create, but also acquire innovation from the market, which could lead to monolithic super-powers, with a unique setup of AI technologies over massive amounts of user and machine-generated data.
Avoiding the ‘Artificial Intelligence dystopia’
This technological revolution brings great opportunities for prosperity and growth; we just need to ensure that the technology is applied and used in the right direction.
We need a framework to guide the development of AI-powered applications: basic rules and specifications which will guarantee reliability, transparency and ethical alignment.
Key steps in the right direction are already happening, including the discussion on banning Lethal Autonomous Weapons, as well as explainable AI (XAI) and the ‘right to explanation’, which allow us to understand AI models and how they make particular decisions; the latter is also required by the European Union’s General Data Protection Regulation (GDPR).
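For simple model families, a ‘right to explanation’ can be served quite directly. The feature names and weights below are hypothetical: for a linear scoring model, each feature’s contribution to a decision is just its weight times its value, which can be reported back to the affected person.

```python
# Minimal sketch of explainability (hypothetical features and weights):
# for a linear scoring model, break a decision down per feature.

WEIGHTS = {"income": 0.5, "years_employed": 0.3, "existing_debt": -0.6}

def score(applicant):
    """Total score: the weighted sum of the applicant's features."""
    return sum(WEIGHTS[f] * applicant[f] for f in WEIGHTS)

def explain(applicant):
    """Per-feature contributions, largest in magnitude first."""
    contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    return dict(sorted(contributions.items(), key=lambda kv: -abs(kv[1])))

applicant = {"income": 1.0, "years_employed": 2.0, "existing_debt": 1.5}
print(round(score(applicant), 2))  # 0.2
print(explain(applicant))          # existing_debt is the dominant (negative) factor
```

Deep models need heavier machinery to produce comparable explanations, but the goal XAI pursues is the same as in this sketch: a decision accompanied by the factors that drove it.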
Societies need to understand technology — and in particular Artificial Intelligence — and how it works. People need to see the opportunities — how AI is improving our lives — and also the risks from bad use of AI.
At the state level, we need a new strategy focused on education, the markets and social systems. We also need the right rules and policies to avoid a disproportional accumulation of power and control over data and technology.