Social Concerns About Artificial Intelligence

Sharad Gandhi
36 min read · May 1, 2018


Preface: We published a new article in February 2024:

Fears & Concerns — Generative AI

This complements the article we wrote in 2018. It adds a list of other concerns and fears regarding AI.

======

Artificial Intelligence generates fascination and fear. Many, like us, are amazed at the benefits AI can deliver for businesses and consumers. However, AI will also significantly transform our society. Radical changes are scary and generate social concerns about the future of jobs and work, ethical issues of right and wrong, and fears of new dangers that AI brings. We believe that in spite of our enthusiasm for AI, we must address these concerns and facilitate an open discussion that allows an exchange of opinions.

Is my job likely to be taken over by AI? …

What will happen if masses of people are out of jobs? …

Can I trust AI to make ethically correct decisions? …

What if AI gets out of control, posing a serious danger to humanity? …

How can we trust AI if we cannot understand how AI arrives at a decision? …

Ever since we published our book “AI&U” (https://www.amazon.com/AI-Translating-Artificial-Intelligence-Business/dp/1521717206), questions like these have often come up during our presentations and discussions on the business benefits of AI and on how to conceive and start AI projects. We (the authors) believe that these are serious issues in the minds of quite a few decision makers and employees in the process of committing to a project. Moreover, due to frequent references to these concerns (“dangers”) by popular media, the general population has become quite nervous and worried about the impact of AI and automation on the workforce, on lifestyles, and on the very fabric of our society. These concerns are real, and since they exist, they need to be discussed in order to minimize the underlying fear in the minds of people. Hence, we want to express our views on them.

We can organize these concerns about AI and automation in three categories:

1. Jobs — what will happen to my job?

2. Ethics — can I trust AI to decide ethically?

3. Dangers — will AI become dangerous to humanity and me?

Before we address these areas, let us look at the fundamental role of technology in human society. Any new technology is an outcome of human innovativeness. Throughout history, humans (Homo sapiens) have been driven by a need to innovate solutions to problems. These innovations result (often by accident) in a technology that changes the way in which something is done, such as air conditioning to keep a room cool, or enables us to do something that was never before possible, or even imaginable, such as lighting up a room with the flip of a switch. Technology is by its nature disruptive of existing ways of doing things. Those who possess and master a technology gain the advantage of doing something better, faster, cheaper, easier, or more effectively than the have-nots. This is true for all technologies: making tools from bones or stones, making fire, growing crops, producing gunpowder, forging metals, navigating the seas, flying, driving automobiles, producing nuclear power, using computers, launching satellites, using the Internet, launching guided missiles and, yes, leveraging AI and automation.

Being disruptive is the basic nature of technology. Like a two-edged sword, it creates new ways and opportunities, but it also obsoletes existing methods and processes. Technology, disruption, and change go hand in hand. Any technology by itself is neutral: all technologies have both good and bad sides. They solve problems, but also create new ones. Nuclear energy has provided electricity for hundreds of millions of people around the world, but has also created major catastrophes, as the events in Chernobyl in Ukraine and Fukushima in Japan have shown. Antibiotics prevent millions from dying every year, but have also led to multi-resistant bacteria as an epidemic problem in hospitals.

Moreover, technology cannot be held back. It can be regulated and controlled to a limited extent based on ethical principles, but it can never be excised from the world. If one country or society were to ban a technology, its global nature means that some other country, society, or person can still pursue it. This can be seen today with chemical weapons, stem cells, and cloning technologies. Since new technologies often result in significant commercial benefits, it is almost inevitable that some company in the world will exploit them for business advantage, even if many consider them harmful.

1. Jobs

The number one AI-related concern of most people is the impact of automation (AI and robotics) on jobs, their own and those of others around them. This is a valid concern, since for most people a job means much more than work and a source of income. In our society, it can add up to much more: reputation, sense of worth, a source of meaning, and social status. Losing one’s job hits both emotionally and materially, and can result in a downward spiral into poverty and depression.

Every new technology in history has had an impact on certain types of jobs, both positive and negative. Some jobs have been severely affected, some partially, and some not at all. However, far more new jobs have always appeared due to new technologies. More people have jobs today (ca. 5 billion) than ever before in history, even as the world population has increased eight-fold, from less than 1 billion to 7.6 billion, over the past 250 years, and even as so many new technologies have emerged since the industrial revolution. In recent times (the past 10–40 years), progress has been exponential or super-exponential across a large range of industries. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3585312/

For a human mind, it is not easy to imagine how, where, what type, and how many new jobs can emerge due to a new technology. It is much easier to imagine which jobs can disappear. The last 40 years have seen a rapid growth of several groundbreaking information technologies — the personal computer, software, servers, the Internet, social media, data centers, cloud computing, apps, to name just a few — with a major impact on all industries, our society, and people’s lives. These have created major shifts in manufacturing, transportation, and commerce via globalization.

https://www.theatlantic.com/business/archive/2012/01/where-did-all-the-workers-go-60-years-of-economic-change-in-1-graph/252018/

https://www.theguardian.com/business/2015/aug/17/technology-created-more-jobs-than-destroyed-140-years-data-census

A vast number of jobs have moved across the world and have been eliminated in the process: office secretaries, accountants, administrative staff, bank cashiers, helpers, etc. However, an even greater number of new jobs have emerged: programmers, hardware designers, consultants, web designers, shipment managers, etc. And since the economy has grown, the overall volume of jobs has grown as well.

So why is it so different with AI? We believe there are several reasons. Earlier technologies were positioned and seen as helpers, tools to support workers. They were not perceived as a risk to jobs or as something taking jobs away, even when they did for many. Media and science fiction have positioned AI as something (or someone) that can substitute for and replace your entire job, not just some of your tasks. It is almost as if a new super species that is far smarter, more efficient, cheaper, and more accurate has been created to replace you in your job.

1.1. Jobs are Built Around Tasks

Every job consists of multiple activities or tasks. A new technology generally automates or simplifies one or more tasks. This transforms the job role, which then includes new tasks, technology-modified existing tasks, or a broader responsibility. This transformation of jobs is an ongoing process.

In the 1980s a typical secretary’s job had multiple tasks: correspondence (checking the letters received, typing letters and documents), scheduling and arranging meetings, organizing trips (booking tickets, hotels, taxis, etc.), organizing events, procurement, filing documents, arranging visitors’ trips, and so on. With computers, the Internet, and mobile computing becoming mainstream over the past 30 years, most of these tasks are done routinely by managers and employees using email, meeting schedulers, online booking, cloud services, and mobile devices. The role of secretaries has transformed into that of administrators, who are essentially organizers and coordinators of events and administrative functions for larger groups. Some of the original tasks, such as organizing events, procurement, and taking care of visitors, continue, and new ones develop over time. Most of the tasks leverage new digital technologies, making the administrators more productive and efficient, and hence able to take on more and broader responsibilities. This is true for many job functions: some tasks remain, new ones develop over time, all supported by new technologies, transforming the job role.

Let us look at the transformation of another job role, that of a travel agent. In the 1980s, booking flights, trains, and hotels formed the bulk of most travel agents’ tasks and revenues. They also did some trip and group travel arrangements. With the advent of online search, comparison, reservation and booking, online payment, and messaging apps, most people prefer to book their flights, trains, hotels, etc. for business and private travel themselves. The Internet has given travelers far more choice, control, and flexibility compared to booking via travel agents, who have essentially lost those tasks from their portfolio. However, many travel agents have upgraded their offerings to package tours, guided tours, and specialized offerings for activities such as skiing, biking, sailing, safaris, and culture tours. Some also offer event management services, medical holidays, and travel for the elderly. Essentially, they have moved from generic bookings to specialized added-value services. Jobs become redefined around new or technology-enhanced tasks.

We can build similar cases for many traditional jobs, for example tax consultants, banking professionals, insurance agents, business consultants, and sales and marketing professionals. All these roles have transformed as a result of Internet and computing technologies. The traditional scope of these job functions has been replaced by a new one that leverages new technologies and offers newly needed services. This does not mean people have not lost jobs. Many have, like the travel agents who did not adapt and learn new tasks when their customers started booking their flights online.

1.2. AI Will Transform Job Roles

Automation and AI will drive this transformation harder and faster. Many tasks that currently require external human services will be automated. Eventually, all the mechanical, mechanizable, repeatable things are going to be done far better by machines (AI and robots). This includes repetitive decision-making tasks, e.g. granting loans, assessing insurance claims, and screening job applicants, which will be automated via AI. AI lets us automate non-repetitive, non-mechanical tasks as well, such as natural language queries. All this will clearly have a significant impact on many so-called “white-collar” jobs, including senior positions. Many of these jobs will be transformed, with new tasks enhanced by AI features. In reality, this transformation is not abrupt but a gradual process, and not everyone will be impacted at the same time. Countries, industries, and businesses do not all adopt new technologies at the same pace, as we have witnessed with computing and the Internet. The finance and retail industries have led, while others, like healthcare and social organizations, have lagged.
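To make the idea concrete, here is a minimal sketch of how such a repetitive decision could be automated: a model is trained on past human decisions and then applied to new cases. Synthetic data stands in for a real loan-decision history; nothing here is a production system.

```python
# A minimal sketch of automating a repetitive decision such as loan
# approval: train a model on past human decisions, then apply it to
# new applications. Synthetic data stands in for a real history.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Stand-in for features like income, debt, and credit score, plus the
# past human decision (1 = approved, 0 = rejected).
X, y = make_classification(n_samples=2000, n_features=4, random_state=0)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)
model = RandomForestClassifier(n_estimators=100).fit(X_train, y_train)

# The model now reproduces the historical decision pattern for unseen
# applications, including whatever biases that pattern contains.
print("Agreement with past human decisions:", model.score(X_test, y_test))
```

Note that such a model only imitates the recorded decisions; automating the task means delegating exactly that recorded judgment, for better or worse.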

However, many jobs will continue. Wherever human touch, human judgment, human understanding, and empathy are needed, humans will remain irreplaceable, even as they leverage automation in their jobs. Doctors will use the best AI tools for diagnostics (already happening for X-ray and computed tomography analysis) and monitoring, but the human doctor’s role will remain crucial and unique for talking to patients, explaining the diagnosis and the treatment, and comforting them. Similarly, human roles in nursing, child care, elderly care, and education will continue. (However, Japan is already experimenting with robotic care for the elderly, because it will not have enough young people to fill the need.) We cannot imagine humans not playing a primary role in areas such as entertainment, sports, vacations, hotels, and restaurants, where humans prefer humans for interaction, humor, and relationships. In spite of major transformations, we will have salespeople, marketing experts, and customer interaction professionals. We will have people writing software, designers of devices and products, human lawyers, and tax consultants. We can imagine many of today’s jobs existing in the future, with changes in the associated tasks. In the medium term, we can envisage software writing software from specifications, and some lawyers being replaced for due diligence tasks. We see that tax consultancy may be vulnerable to AI, as it is based on a codified set of rules.

1.3. New Jobs — a Natural Evolution

Many new jobs will emerge. It is always extremely difficult to imagine the new jobs and tasks that emerge as a result of new technologies. Who could have imagined in the 1980s the new jobs and roles that emerged due to computers and the Internet? Website developers, app programmers, chip designers, PC and mobile device builders and distributors, peripherals makers, IT managers, social media monitors, data center managers, cyber-security experts, call center agents, blog writers, Uber drivers, Airbnb hosts, drone pilots, online ad agents, Netflix programmers (… a never-ending list). However, all these emerged, even though they were unimaginable, as a part of technology evolution. Now all this has become the new normal. Youngsters of today can hardly conceive of a life without all this. We believe that the same process of evolution will create new jobs in the age of automation, even if they are unimaginable at the moment.

That said, some new jobs needed for building the AI infrastructure are easy to imagine even today:

· AI system designers

· AI machine educators (subject matter experts)

· AI system manufacturing

· AI maintenance

· AI to human interfacing experts

· AI security and privacy management

· Data management for AI

· Data collection, provisioning, and refining

· Sensors and interfacing

The popular phrases “Data is the new oil” and “Data is the fuel of AI” provide us with an excellent analogy from the oil industry. Oil used to be a useless, sticky liquid in the ground. With the invention of the internal combustion engine and oil heaters (using fuel in the form of petrol, diesel, or kerosene), demand for oil started growing. Suddenly the value of a useless thing started rising, giving birth to the massive oil industry we know today. Many interconnected specialties developed, each on a gigantic scale with huge employment: oil exploration, onshore and offshore oil-well drilling, oil well operations and maintenance, oil shipping and storage, oil refining, petrochemical transport and storage, and lastly, gas stations to provide energy to cars, SUVs, and trucks. It also created multiple secondary industries for plastics, heating, and aviation fuel. Similarly, like oil in the ground, vast amounts of data are being generated by people, machines, processes, and businesses that are unused today, because we do not know what to do with them or how to generate value out of them. That is changing with the birth of Big Data, analytics, robotics, and AI. These technologies need a vast amount of clean data to work and deliver value for a business. As with oil, we foresee a huge data industry developing with the expertise to extract, manage, and deliver data for AI. It will create new jobs involving the following (a small code sketch of the refining step follows the list):

· Identifying data needs for AI functions

· Configuration of needed data for AI

· Sensors for data

· Collection of data

· Refining and cleaning data — synchronization, deduplication, and standardization

· Generating validation datasets for AI training

· Sourcing of data to AI networks

· Managing security and integrity of data

· Training neural networks

· Calibration and certification of AI accuracy
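As one small illustration of the refining and cleaning work listed above, here is a minimal sketch using pandas. The records and column names are hypothetical:

```python
# A minimal sketch of refining raw records into AI-ready data;
# the records and column names are hypothetical.
import pandas as pd

raw = pd.DataFrame({
    "name": [" anna schmidt", "Anna Schmidt", "BEN KUMAR"],
    "country": ["U.S.", "US", "germany"],
    "signup_date": ["2018-01-05", "2018-01-05", "not known"],
})

# Standardization: harmonize formats so records become comparable.
raw["name"] = raw["name"].str.strip().str.title()
raw["country"] = raw["country"].str.upper().replace({"U.S.": "USA", "US": "USA"})
raw["signup_date"] = pd.to_datetime(raw["signup_date"], errors="coerce")

# Deduplication: rows that now describe the same entity collapse to one.
clean = raw.drop_duplicates(subset=["name", "signup_date"])
print(clean)
```

Multiply this tiny example by millions of records from sensors, logs, and business systems, and the scale of the "data refinery" employment described above becomes plausible.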

1.4. Future of Work

Many reports have been published on the future of work and employment. Some are very pessimistic and paint a doom-and-gloom picture of masses of people without jobs in the future, while others are very rosy. The McKinsey Global Institute published a report, “What the future of work will mean for jobs, skills, and wages,” in November 2017. We find this report well balanced because it considers multiple parameters affecting future employment: demography, population, educational systems, growth, wages, and social systems:

https://www.mckinsey.com/global-themes/future-of-organizations-and-work/what-the-future-of-work-will-mean-for-jobs-skills-and-wages

As the report says: “in about 60 percent of occupations, at least one-third of the constituent activities could be automated, implying substantial workplace transformations and changes for all workers.” However, “Very few occupations — less than 5 percent — consist of activities that can be fully automated.” In other words, hardly any occupations can be automated away entirely; in most, a substantial share of the tasks, but not the whole job, can be automated.

The McKinsey report makes some very insightful points:

1. While technical feasibility of automation is important, just because a task can be automated does not always mean it will be. Technical feasibility is not the only factor that will influence the pace and extent of automation adoption. Other factors include the cost of developing and deploying automation solutions for specific uses in the workplace, the labor market dynamics (including quality and quantity of labor and associated wages), the benefits of automation beyond labor substitution, and regulatory and social acceptance. Automation of a task in a company will be implemented only if it results in a significant business advantage. These factors will limit the overall pace of adoption of automation across countries and industries — as we experienced with the adoption of computing and the Internet.

2. Taking these factors into account, up to 30 percent of the hours worked globally could be automated by 2030, depending on the speed of adoption. Results differ significantly by country, reflecting the mix of activities currently performed by workers and prevailing wage rates.

3. It is important to note that even when some tasks are automated, employment in those occupations may not decline; rather, workers may perform new tasks.

4. Automation will have a lesser effect on jobs that involve managing people, applying expertise, and social interactions where machines are unable to match human performance for now.

5. Jobs in unpredictable environments — occupations such as gardeners, plumbers, or providers of childcare and care for the elderly — will also generally see less automation by 2030. They are technically difficult to automate and often command relatively low wages, which makes automation a less attractive business proposition.

6. Upcoming workforce transitions can be large. By 2030, between 400 million and 800 million individuals globally could be displaced by automation and need to find new jobs. 75 million to 375 million may need to switch occupational categories and learn new skills. However, this will depend on the type of jobs, industries, and country of employment.

7. Other factors that determine the impact of automation in a given country are demography, economic growth, workforce growth, the quality of the education system, the wage structure, and the economic structure. These will make a big difference.

8. The report predicts that in Germany about 3 million people will have to change occupations by 2030. But rather than rising unemployment, Germany will actually face a shortage of workers by 2030, mostly due to a population decline of 3 million and a workforce reduction caused by an aging demography.

9. In contrast, a country like India is a fast-growing, populous developing country with relatively modest potential for automation over the next 15 years, reflecting low wage rates and high manual-labor and agricultural employment. Most occupational categories are projected to grow in India, reflecting its potential for strong economic expansion. However, India’s labor force is expected to grow by 138 million people by 2030, or about 30 percent. India could create enough new jobs to offset automation and employ these new entrants. China has higher wages than India and so is likely to see more automation. China is projected to have robust economic growth but a shrinking workforce; like Germany, China’s problem could be a shortage of workers.

10. The United States could also face significant workforce displacement from automation by 2030, but its projected future growth — and hence new job creation — is higher. It has a growing workforce, and innovations leading to new types of occupations and work should roughly balance the displacement.

Labor displacement is a situation that can be forecast and planned for. However, multiple organizations and institutions in a country need to work collaboratively to meet the requirements of a planned future employment scenario. Their combined responsibility is twofold: reduce automation’s negative impact on the workforce and society, and simultaneously ensure that society and the country benefit economically and socially from automation. This requires productive collaboration by the country’s politicians, to make meaningful laws that encourage innovation but prevent abuse; by the education system, to prepare students for future-oriented jobs; and by industry, to provide the right training to develop new skills in workers. For society to benefit from automation, it is essential that workers whose roles are made redundant get timely access to training that develops new skills fast enough for future occupations. All this is, no doubt, a complex challenge for any country.

New technologies have historically increased productivity in the long run, and AI and robotics are bound to deliver similar gains. Who benefits from these gains? Will it be a few enterprising companies and individual entrepreneurs? Can society and the country as a whole benefit? What are the mechanisms for distributing the resulting wealth? What concepts are viable — like universal basic income — to ensure a fundamental livelihood for those who do not benefit? These are challenges that will have to be addressed soon to ensure a smoother transition. Many governments and corporations have woken up to these challenges and started the process of finding appropriate solutions.

In developing countries like Bangladesh, where many earn their living through the export of garments and other goods to Europe and the USA, workers may suffer in the future if automation allows industrialized countries to manufacture these goods themselves at low cost. Services that can be automated using AI, such as compliance checks, speech-to-text conversion, service calls, and insurance claims, may no longer be outsourced to lower-wage countries. Automation within these countries is not likely to develop soon, due to very low wages. As a result, new employment opportunities from automation may not appear soon enough in such countries to replace lost jobs. Will developed countries be in a position to help out? What other options are open for developing countries to benefit from globalization? These are some hard challenges.

2. Ethics

Can AI distinguish between right and wrong?

Can AI be taught ethics?

Can the ethical standards of AI be compatible with those of our society?

Can AI be made legally liable for faulty decisions?

Before we get into these questions let us look at what ethics is and how we humans deal with ethical issues in the process of making decisions.

Ethics is about values: what is right and what is wrong. There are never-ending philosophical and religious debates about what is right and wrong. However, in our view, ethical values are subjective. Each society and culture has its own set of ethical values, which become strongly embedded in its belief system. Anyone who has lived in multiple cultures — e.g. in India, Germany, Japan, Egypt, Nigeria, Brazil, or China — knows how different some of the ethical standards and value systems can be. Most ethical standards in a society evolve over centuries, mostly independently, with some inter-cultural crossover, and become ingrained in the natural fabric of the society. We learn ethics in the process of growing up, as part of the value system acquired from family, friends, schools, and society. Later, as adults, we can further develop and refine our personal ethical sense from the situations we experience. These acquired ethical values also color our opinions and biases about race, religion, gender, practices, countries, and so on. They play a key role in guiding the judgment calls in our decisions.

2.1. Decision Bias

Deep learning involves training AI systems using examples of how humans have made decisions in similar situations. These examples implicitly contain the biases and ethical values of the humans involved. Hence, an AI system’s decisions reflect human bias. However, humans are not all the same. If the examples used for training an AI were all decisions by Italian men on hiring people for a certain job, the AI’s decisions would reflect the bias, attitudes, and opinions of Italian men toward other nationalities, women, races, etc. for that job. The same holds for other decisions, such as granting loans, assessing insurance claims, judging criminal offenses, or even making medical diagnoses. Interestingly, this is no different from training humans: if those decisions of Italian men were used to train a human being, the biases in those decisions would also be transferred to the trained human. If the same training were to use the decision biases, attitudes, and opinions of Italian women, or Iranian men, or Chinese women, the decisions would be different. It is extremely difficult to create deep learning training that is neutral and independent of all human biases. One approach toward bias neutrality is to use decision examples from humans with a wide spread of cultural and social backgrounds.
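To see the mechanism at work, here is a minimal, self-contained sketch with synthetic data: a model trained on biased historical hiring decisions reproduces the bias through a proxy feature, even when the protected attribute itself is withheld. All variable names and numbers are hypothetical:

```python
# A minimal sketch of how bias in training examples carries into a model.
# Synthetic data stands in for decisions made by one homogeneous group
# of decision makers; all names and coefficients are hypothetical.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
skill = rng.normal(0, 1, n)            # true qualification, same for both groups
group = rng.integers(0, 2, n)          # protected attribute, e.g. 0/1 for gender
proxy = group + rng.normal(0, 0.3, n)  # innocent-looking feature correlated with group

# Historical decisions penalized group 1 regardless of skill.
hired = (skill - 0.8 * group + rng.normal(0, 0.5, n) > 0).astype(int)

# Train WITHOUT the group column; the bias still leaks in via the proxy.
X = pd.DataFrame({"skill": skill, "proxy": proxy})
model = LogisticRegression().fit(X, hired)

rates = pd.Series(model.predict(X)).groupby(pd.Series(group)).mean()
print(rates)  # predicted hiring rate per group: group 1 comes out much lower
```

Comparing predicted rates across groups like this is one simple check; removing the protected column alone, as the sketch shows, is not enough.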

Even when there are strict, well-defined procedures for making decisions using all necessary data inputs, as required by many businesses for compliance, it is often a human tendency to apply additional subjective judgments to individual decisions. This is generally true of experts, who tune their decision-making process based on years of experience and their familiarity with what they have been good at. It is commonly observed in medical practice: https://www.ncbi.nlm.nih.gov/books/NBK63655/. No wonder that different medical specialists make differing diagnoses and recommend different treatment plans for the same situation. This is the wisdom behind obtaining second opinions before undergoing an operation or an expensive treatment. Individual expert opinions also vary based on earlier decisions made, level of fatigue, distraction, mood, up-to-date knowledge, and other factors. In contrast, a well-trained AI system makes decisions that are mostly more accurate and consistent with its training. It always takes all data inputs into consideration, does not get tired, is not influenced by earlier decisions (unless it is part of a continuous learning scheme or is a recurrent neural network), has no moods, and does not get distracted by other events or thoughts.

Today most AI systems are trained for a very specific task, such as diagnosing cancer, assessing an insurance claim, or recognizing faces or speech. The focus on the specific task has placed the ethical elements of decisions at a lower priority, leaving this area of research with insufficient time and resources. However, there is no reason why this cannot change, if it is desired and made a priority. Obviously, a deep learning AI system can only acquire and express ethical values that are implicit in the training examples used. Ethical training becomes the job of the trainer and depends on the selection of examples used for training. This is quite similar to the human case, where parents, teachers, peers, and society are collectively responsible for a person’s ethical values. Fortunately, humans are capable of independent thought; they may be influenced by their upbringing and surroundings, but they are not slaves to them. However, it is much more difficult to predict the ethical output of a human decision maker than of an AI system. The vast number of scandals every year involving violations of ethical standards by well-positioned people in almost all professions — political, medical, financial, industrial, etc. — is evidence of the unpredictability of human ethical standards.

2.2. Understanding AI Decisions

Accepting a decision is much easier if we understand the rationale behind it. A doctor normally explains why she made a diagnosis — what the key symptoms, indicators, and medical background are — and why the recommended treatment makes the most sense. This makes us feel comfortable with the decision. The same holds for expert decisions on investments, purchasing real estate, legal advice by lawyers, accounting support by tax advisors, and hiring decisions by hiring managers. The rationale behind a decision gives us a sense of understanding and a feeling of commitment to that decision. However, just because an expert is able to give a rationale does not mean the rationale is correct or comprehensive. In reality, many more factors go into a decision and only a handful are explained; many, such as intuitive feel or deeper technical issues, cannot even be articulated. Often a feeling, past experience, or simple trust in the expert is sufficient to make us feel comfortable with the decision.

We feel uncomfortable, unsatisfied, and even question the decision when the rationale is missing or unconvincing. This happens with AI decisions. In general, a well-trained AI makes useful and consistent predictions and decisions based on the given situation and the data inputs provided. Unlike expert and rule-based systems, which are able to explain why they recommend a certain course of action, a deep learning AI system cannot (yet) provide a rationale or justification for a decision. It can only tell you what the data inputs mean, not why: this face belongs to George Clooney, but not why; this image represents cancerous tissue, but not why; this data pattern means the person is a great hiring match, but not why; it can tell an autonomous vehicle when it is safe to pass the car in front, but not why it is safe; the lip movements and gestures mean the person is saying “we have a problem,” but not why they are saying that. Even without a rationale, we often develop trust without questioning if we experience, for long enough, that the decisions made by AI are consistently accurate and useful; then we let go of our need for a rationale. If Google Maps consistently predicts the traffic situation, route, and arrival time accurately, we start trusting it without wanting to know why it chose that route. (Google Maps does try to explain some route choices: “You are on the fastest route despite usual traffic.”)
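That said, researchers are developing techniques that approximate a rationale for black-box models. One simple, model-agnostic example is permutation importance: scramble one input at a time and measure how much the model's accuracy drops. The sketch below uses scikit-learn on a public dataset; it reveals which inputs mattered most, though still not why:

```python
# A minimal sketch of permutation importance: an approximate answer to
# "which inputs drove the prediction?", not a true rationale.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier().fit(X_train, y_train)

# Shuffle each feature 10 times and record the average accuracy drop.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for i in result.importances_mean.argsort()[::-1][:5]:
    print(f"{X.columns[i]}: {result.importances_mean[i]:.3f}")
```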

For cases such as recognizing faces, speech, and objects, where there are no significant consequences of a wrong decision, it is acceptable not to have a rationale, as long as the AI predictions are consistently accurate enough (and better than most humans’). However, for mission-critical cases, where the consequences of a wrong decision are severe, we need much more caution and additional verification. For example, in the case of a medical diagnosis, if you believe that the AI made a wrong decision, it is impossible to know why — even if legally a doctor is required to explain the rationale. In such cases, it is good to have a human doctor working with the AI to include a human verification element in the diagnosis. The doctor, besides being responsible for the diagnosis, also plays a significant role in explaining it to the patient with appropriate empathy and in responding to the patient’s reaction to the news. We believe the teamwork of human plus AI is a good model for such situations: AI plays the role of the expert and the human that of verifier and interpreter, similar to the roles of the robots R2-D2 (the expert) and C-3PO (the human interface) in the classic Star Wars movies. This teamwork is obviously not possible for AI deployed in real-time systems, such as autonomous (driverless) vehicles, where there is hardly any time for someone to double-check and verify each AI decision. In many cases, a human may not even be capable of coping with the volume of data used in AI decisions.

2.3. 100% Ethics Compliance for AI

Is it possible to design AI to be always ethical? Ask anyone a similar question — “can you guarantee to be 100% ethical in all situations?” We all know that 100% compliance in all situations is an illusion. Our experience shows that everyone deviates from their own ethical standards in certain situations or moods, even if only in minor areas like speeding or a white lie. However, a well-trained AI is consistent in its decisions; it delivers good value, even if it is not always correct. Today ethics is not a major priority for AI developers, but it is likely to become a big differentiator. We believe that over time, AI systems designed to be ethical will become more valued and desired. This is just like the computers of the past: we all lived with Windows PCs for many years, although they were not always reliable and often crashed, because they delivered sufficient value. Eventually, they became reliable. We believe that new methods will be developed to make AI ethical as per human requirements. Maybe someday ethics will become an integral part of a foundational education for all AI algorithms, just as it is for children.

2.4. AI Liability and Legal Responsibility

Who is to blame if AI makes a faulty decision? Who is legally liable if an AI diagnosis turns out to be wrong and the patient dies? The doctor, the hospital, or the AI developer? Who is liable if an autonomous vehicle hits and kills a pedestrian? The owner, the vehicle manufacturer, or the AI developer? Who is liable if an AI financial advisor causes a major loss for the client? All this is at the heart of the legal debate, with lawmakers, professionals, and lawyers all trying to develop a common legal position on AI decisions.

In our view, AI is an artifact of humans. Humans create it as an expert helper for decision-making. The responsibility for its decisions — right or wrong — has to lie with a human being, either its user or its manufacturer. The user of AI must be aware of the risks of using it, just as with any other tool or gadget: a drill, a car, a computer, or a gun. If the user uses it inappropriately, without proper training, or in the wrong context, then the user carries the liability. If it can be proven that the AI did not work as specified and caused damage, the manufacturer (the vendor or the developer) is liable. One can assume that cleverly worded fine print in user agreements will significantly reduce the manufacturer’s liability and make the user liable in most cases.

Many misconceptions about liability come from the personification of AI as a responsible person who can be held accountable for its errors. AI is just a machine, a tool. It can be viewed as an expert working for us, helping us make good decisions. It is not infallible. It should always be assumed that AI can and will make faulty decisions at some time or in some situations (like any human expert), because its decision-making is predictive and probabilistic. The quality and accuracy of AI depend on the quality of its training and the hardware used. AI cannot be made legally responsible for its decisions. Final responsibility and liability must rest with a human being.

3. Dangers

Will AI become dangerous to humanity?

Will AI be used with evil intentions?

Will AI degenerate our decision-making ability?

Will AI become a major risk to privacy?

Will AI make us lonelier as humans?

3.1. AI Going Rogue

Prominent scientists and technologists such as Elon Musk, Bill Gates, and Stephen Hawking — along with a number of others — have been openly talking about the risk of AI becoming unmanageable, rogue, and eventually a danger to humanity. Elon Musk’s frequent warnings about the dangers of artificial intelligence have touched on its possible use to control weapons or manipulate information to fan global conflict. This stage of AI is also referred to as the “Singularity”: the point when machine intelligence becomes equal to human intelligence and is on a track to “super-intelligence,” AI machines significantly more intelligent than human beings. Elon Musk has clarified that the danger to humanity is not from the Narrow AI of today, which he himself leverages in his Tesla cars. He believes that General AI and super-intelligence can get out of control and become harmful to humans. How would a super-intelligent species of AI systems treat humans? Just as we treat animals, exploiting them as resources? Would they be manageable by humans, or would they completely control human beings? These are indeed frightening scenarios, often illustrated in science fiction movies like The Terminator and The Matrix. The physicist Stephen Hawking warned that AI could “supersede” humans when it exceeds our biological intelligence. It is indeed difficult to find satisfactory answers to these questions in order to calm everyone’s fears.

3.2. AI in Warfare

Many countries increasingly use AI for military applications, and most experts expect AI to be the key “weapon” in coming wars. It is easy to imagine a few applications of AI in warfare: scanning thousands of satellite, lidar, and other images for patterns to predict and detect suspicious enemy movements and activities, or tracking a missile and finding the best target. In fact, any predictive decision-making by humans can be delegated to AI if the necessary data is available. The US already uses AI in its planning system, DART: https://www.quora.com/What-are-military-applications-of-AI.

In military terms, the stages of engaging an enemy target with weapons are: Find, Fix, Track, Select, Engage. Finding the target, fixing on it, and tracking it can be significantly aided by AI. It is when it comes to selecting the exact target and engaging (firing) that most ethical issues flare up. The authors are firmly of the opinion that autonomous weapons (those that decide what to fire at and when, without human intervention) must be banned. A human must be in charge of making the final call and taking responsibility for all the consequences of using such a weapon. In March 2018, Wired magazine asked Emmanuel Macron, President of France: Do you think machines — artificial intelligence machines — can ever be trusted to make decisions to kill without human intervention? He responded very clearly: “I’m dead against that. Because I think you always need responsibility and assertion of responsibility. And technically speaking, you can have in some situations, some automation which will be possible. But automation or machines put in a situation precisely to do that would create an absence of responsibility. Which, for me, is a critical issue. So that’s absolutely impossible. That’s why you always need a human check — and in certain ways, a human gateway. At a point of time, the machine can prepare everything, can reduce uncertainties and that’s an improvement which is impossible without it, but at a point of time, the go or no-go decision should be a human decision because you need somebody to be responsible for it.” https://www.wired.com/story/emmanuel-macron-talks-to-wired-about-frances-ai-strategy/

3.3. Humans Against Humans — the Biggest Danger

History shows that we humans, using technology, have committed the worst acts of violence, crime, brutality, and injustice against other humans. Evil intentions and egoistic goals of humans can unleash the power of AI against other humans, posing the biggest danger of all. Humans have used chemical and nuclear weapons with full awareness of their destructive power. Humans use the power of the Internet to radicalize innocents into terrorists and to manipulate elections. Humans use technology for determining the sex of an unborn child in order to abort female embryos. Humans in high positions at German car manufacturers manipulated diesel exhaust data, despite knowing the harmful respiratory effects on hundreds of millions of people around the world. Humans manipulated research data on sugar and cholesterol out of greed for higher profits, ignoring the health consequences for millions. In fact, humans have always found ways of using technologies to harm other humans. We prefer to blame something neutral, like technology, to downplay our role in pulling the trigger, putting the blame on the gun. Hence, should we be surprised if, at some stage, some humans deploy AI to harm other humans for their own gains, ambitions, and power? This is already happening, particularly in the form of privacy breaches, but also in cybersecurity: https://hbr.org/2017/05/ai-is-the-future-of-cybersecurity-for-better-and-for-worse. What can be done about this? Technology itself cannot be stopped. At best we can try to regulate its use, which several organizations are in the process of doing. However, that still cannot guarantee 100% compliance for a global technology like AI. Efforts to regulate nuclear technology and biotech have also resulted in only partial success.

Humans have an amazing, but immensely underestimated, creative potential that enables us to discover innovative ways of preventing such situations, even if we do not yet know exactly how. Driven by our fear, we tend to overrate the real power of AI, which actually addresses just a subset of human capabilities. Our emotional, rational, collaborative, and creative capacity, coupled with our imagination of what can be done, adds up to far more than what can be achieved through machine intelligence. Elon Musk and company have come up with ways to address this threat by opening up AI research and development to everyone, in order to enable all minds to work collaboratively and prevent anyone from having an exclusive advantage that could lead to harm. He is also working on a scheme to directly link the brain with AI, in order to tap the powers of AI transparently and seamlessly from our brain and uplift our mental capacity. According to him, this is a way to keep our mental capability always ahead of AI. We believe that in the end, humans will prevail and leverage the power of AI for our good. We definitely need to be aware of the threats, but we should not accept defeat before the event. Attempts to hold back AI technology can only result in someone in the world, who cannot be stopped, developing it exclusively and using it against humans.

3.4. Losing Our Ability to Decide?

Every technology forces us to learn new skills and renders certain skills irrelevant or unimportant. We no longer need to know how to hunt for a living or which berries are edible and which poisonous. Thanks to automobiles, we have lost our horse-riding skills. Calculators and computers have diminished our arithmetic skills; mobile phones have reduced our ability to memorize (phone) numbers and addresses; many of us already have problems writing by hand, because we mostly type or touch-type — and even that may atrophy in the near future, as we will mostly talk to computers. So we have been losing skills all along, because technologies have incrementally taken over, or altered the need for, many basic abilities. Decision-making leverages our cognitive skills — judgmental, intuitive, and rational thinking abilities — which have proved to be our most critical survival skill. We are in the process of delegating decision-making to AI for many activities. AI will recognize people for us, automatically assess new situations, make financial and commerce decisions, automatically translate languages, drive us from point A to point B, hire the best people, figure out the best way to file taxes, and so on. Are we approaching a point where all decisions are made by AI, rendering our own decision-making a non-critical luxury skill?

Carlos E. Perez, author of The Deep Learning AI Playbook writes about what human addiction to convenience means: “We need to recognize that humans and their tools are inextricable. The vast majority of the activities that we love doing involve tools. We are distinct in our ability to invent and make use of tools. Cognitive tools however perform the thinking for us. Whether its GPS, Google, Autosuggest or spell checking, each time we use these tools is a time we deprive our minds from exercising the replaced skill (i.e., navigation, memory, writing, spelling). We can extrapolate this loss as more advanced AI come on-line. The convenience of AI is a double-edged sword, on one hand it allows us to do more and on the other hand it allows us to use less of our mind. This is the curse of convenience that slowly erodes our cognitive capabilities.”

We believe that no matter how many of our activities get automated using AI, there will always be higher-level activities that challenge us and demand our unique decision-making and planning skills — a complex hybrid of intuitive, rational, emotional, and imaginative abilities. We will push our decision-making to a higher level in the hierarchy of activities, even if we do not know what those levels are at the moment. It is a common human tendency to see the road ending at the next landmark, until you arrive there and see the road continue. The end is an illusion.

3.5. End of Privacy?

Data is the fuel of AI. To make AI work effectively, accurately, and in real time, an enormous amount of data will be needed — about everything and everyone. Absolute privacy requires that data about my things, my activities, and myself is never accessible to anyone without my permission and knowledge. It is a big challenge today to be sure of this, and we know it is no longer possible in a society where we all use services that require sharing some data. Data has become a new currency. Companies like Google and Facebook offer many free services in exchange for your data. Data theft, illegal access, and misuse of data have become a lucrative business precisely because data has risen in value. This makes protecting data — when it is stored, transmitted, or at the point of use — a complex challenge. We are often willing to give away some of our data for the convenience of using Internet and AI services. Many useful and interesting services, such as email, chat, social media, news, and games, are only available if we let the providers access our data. Since data is so valuable, providers are constantly discovering clever ways of luring consumers into their services in exchange for their data. AI will give most services an additional boost, making them easier to use and more valuable, and encouraging users to share even more data. We believe that effective regulation can eventually be put in place to protect privacy better than today. However, this will remain a challenge.
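As one small example of what protecting data at the point of use can look like, here is a minimal sketch of pseudonymization: direct identifiers are replaced with salted one-way hashes before a dataset is shared for AI training. The column names and salt are hypothetical:

```python
# A minimal sketch of pseudonymizing personal data before sharing it.
# The salt and column names are hypothetical.
import hashlib
import pandas as pd

SALT = "a-secret-kept-by-the-data-owner"  # hypothetical secret value

def pseudonymize(value: str) -> str:
    """One-way hash: records stay linkable but are no longer identifiable."""
    return hashlib.sha256((SALT + value).encode()).hexdigest()[:16]

users = pd.DataFrame({
    "email": ["anna@example.com", "ben@example.com"],
    "purchases": [12, 3],
})
users["user_id"] = users["email"].map(pseudonymize)
shared = users.drop(columns=["email"])  # only the pseudonymized view is shared
print(shared)
```

Techniques like this reduce, but do not eliminate, privacy risk; re-identification through combinations of remaining fields is still possible, which is why regulation matters alongside such safeguards.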

3.6. Feeling Lonely

Computers, smartphones, and social media have already started a trend of people (especially the young) interacting more with their devices and services than with the other people in the room. Computers do not talk back, are not mean, and can be switched off. That makes it easier for some people to deal with computers than with real people. As a result, some people fail to develop, or lose, the skill of starting, developing, and keeping up a conversation that is interesting, fun, and rewarding. They feel safer with their devices. A lack of person-to-person interaction and dialog increases the feeling of loneliness in our society. It then takes much greater effort to make connections — often only with the help of services like dating sites. AI can amplify this effect, because it can create much more interesting and immersive experiences with devices and services. Will that draw us further away from human contact, into the arms of AI and robots? We know that a healthy balance between human contact and device interaction is possible, but it takes effort and discipline. It is not easy to predict how any given individual will strike that balance. We believe that most people will manage to develop a healthy balance between warm, emotional human contact and the fun and adventure that AI devices and services offer.

What Can We Do to Address These Concerns?

First, we must acknowledge the concerns people express about the emerging world of AI and continue a healthy, open conversation on the topic, so that we remain aware of how our society is managing the transition to automation and stay ready to take corrective action. Decisions about the future should not be dominated by fear, but guided by an open awareness of the risks and dangers, and by regulation of the evolution of AI use cases that ensures a focus on “doing good” with AI while minimizing abuse and negative effects on human society. That is our challenge as humans.

For the many of us who believe that technologies like AI will benefit us in so many areas of our lives, the question that comes up again and again is: what can we really do to effectively address all these concerns? We would like to ensure that AI is used only for the good of society and humanity.

Technology by itself is neutral and is a capability available to everyone, with good and bad outcomes. Technologies are like weapons that can be used by tyrants and by the oppressed. Technology is used or abused by humans, and hence the human abusers of technology, not the technology, should bear responsibility for the abuse. Unfortunately, a few people (some politicians, businesses, dictators, espionage organizations, etc.) with evil objectives have amazing resources to leverage modern technologies for their selfish and greedy interests — e.g. power, dominance, and profits. The challenge is how to win against those who have better resources to leverage technology for bad. For example, Facebook is a very useful social media application that has connected billions of people, allowing them to share their views and life events and drawing them closer to each other. However, the same technology was leveraged by Cambridge Analytica to divide and target people with tailored propaganda, in order to ensure victory for paying clients by manipulating democratic elections and political referendums.

The challenge for those of us who want “technology for good” is how to win against the few, but powerful, “technology for bad” folks. Leveraging data for manipulation, as in the Facebook case, is just the tip of the iceberg, since the economy of the future is based on using data. AI and the Internet of Things (IoT) run on data — enormous amounts of data. Hence, the scope for abusers to manipulate data will rise significantly. This makes the challenge for those who want to eliminate, or at least minimize, the abuse of technology even more daunting and critical.

The responsibility for ensuring checks and balances to manage the scope of AI belongs to our entire society, including all involved players: businesses, governments, politicians, NGOs, educational institutions, and consumers. Today a handful of businesses, like Google, Facebook, and Amazon, have amassed most of the personal data of billions of individuals, including most of us. Can they be trusted with our data? Will their business interests lead them to compromise our data, even inadvertently? With the progress of AI, many more companies will be holding our data on their platforms. Businesses will have to adapt to their new responsibility to society. As very well articulated by Dov Seidman, CEO of LRN: “the business of business is no longer just business. The business of business is now society.” Therefore, businesses must be made responsible for what technology enables on their platforms. Ethical standards for managing public data will have to be regulated, requiring strict compliance and auditing, just as in the medical and finance industries. Cyber-security has already become a major requirement for all data operations.

What AI and machines should do, rather than what they can do, must become the new focus. We need to lay down a framework for how companies deal with data. Microsoft has articulated a set of principles for AI that cover:

· Privacy and Security

· Safety and Reliability

· Fairness (preventing bias)

· Inclusion

· Transparency

· Accountability

Many countries have woken up to the challenge of digitalization and its impact on society. Politicians have started to recognize the opportunities and risks. There is a strong need to fund systems like education and training that will enable countries to make the best of the new opportunities with AI, and to pass legislation that mitigates the risks. This will require the governmental apparatus to truly understand the potential of AI — which is a significant challenge. The 28-year-old Canadian whistle-blower Christopher Wylie said, in the Cambridge Analytica case, that explaining to British government officials the way data works was very challenging due to their lack of background: “I have had to explain and re-explain and re-explain and re-explain…”

Unfortunately, politicians and governments can themselves easily abuse the power of AI to control and monitor their citizens, as is already happening in some centrally managed countries. China is apparently using face-recognition AI in public places to catch and fine jaywalkers. It also uses AI to monitor citizens and create a social ranking for rewards or punishment (https://www.wired.com/story/age-of-social-credit). We, the citizens, must bear responsibility for keeping businesses and politicians honest and aligned with norms of using AI for the good of mankind.

Ensuring that AI is used mostly for the good of humanity is a global challenge and needs the teamwork of many players: businesses, governments, politicians, NGOs, educators, and consumers. It is a challenge that we must take on.

It is only appropriate to end with a statement in March 2018 by French President Macron: “If we want to defend our way to deal with privacy, our collective preference for individual freedom versus technological progress, integrity of human beings and human DNA, if you want to manage your own choice of society, your choice of civilization, you have to be able to be an acting part of this AI revolution. That’s the condition of having a say in designing and defining the rules of AI. That is one of the main reasons why I want to be part of this revolution and even to be one of its leaders. I want to frame the discussion at a global scale. The key driver should not only be technological progress, but human progress. This is a huge issue. I do believe that Europe is a place where we are able to assert collective preferences and articulate them with universal values. I mean, Europe is the place where the DNA of democracy was shaped, and therefore I think Europe has to get to grips with what could become a big challenge for democracies.” https://www.wired.com/story/emmanuel-macron-talks-to-wired-about-frances-ai-strategy/

*****

For more on AI: AI&U — Translating Artificial Intelligence into Business

https://www.amazon.com/AI-Translating-Artificial-Intelligence-Business/dp/1521717206/

For even more on AI: Generative AI — The Future of Everything

https://www.amazon.com/dp/B0CKW8L7LH

Contact: Sharad Gandhi, Christian Ehl

Read our other articles:

· Artificial Intelligence — Demystified

· AI is Essentially “Artificial Perception”

· Data — The Fuel for Artificial Intelligence

· How AI machines learn — just like humans

· Start with Artificial Intelligence right now!

· Artificial Intelligence Boosts Better Decision-Making

Sharad Gandhi

Technology philosopher and strategist for creating business value with digital technologies