The Ethics of Artificial Intelligence

Agnes Intelligence
Apr 5 · 23 min read

Ethical AI is human-centric AI

Artificial intelligence is one of the buzzwords of early 21st century culture. The advent of semi-intelligent tools is rapidly transforming human life, and at the same time the nature and scope of these tools is poorly understood. There is an immediate need to develop a practical framework for building and using AI ethically. In order to do so, we must clarify the nature of the relationship between humans, the tools we use, and the tasks we perform.

While the needs and values of different cultures vary a great deal, the question of what and how we automate is the same as the question of what kind of society we want. AI poses unique dangers and also great opportunities. Navigating this complex landscape requires that we keep human outcomes central to our thinking and design.

Myth and reality

Artificial Intelligence has been a staple of fiction for many years; many misconceptions and misunderstandings have grown up around AI systems. Most of the fears and ethical concerns about this technology stem from what we see in novels, movies and television, which depict AI as possessing human-like intelligence, or even surpassing it. In these scenarios, artificial intelligence poses great existential dangers to the human race, as well as creating difficult ethical dilemmas and questions about the nature of humanity, intelligence, and free will.

The reality is very different. Such scenarios are not impossible, but they are many years, even decades away, and they are far from what AI systems are in today’s world. The advent of intelligent automation certainly brings with it great ethical concerns and grave dangers, just not the same ones we see in the movies.

Humans have been augmenting our capabilities and replacing human work with mechanical power since prehistory. Recent history has brought us the ability to automate data tasks. In the 1950s, a pocket calculator was science fiction; it’s easy to forget that the word “computer”, until after the Second World War, described a job performed by humans.

The algorithms and principles behind AI are decades, and in some cases centuries old. Recent advances in processing power have made these tools usable in real time, bringing us the ability to automate non-routine tasks and create tools with limited intelligence and volition. In order to meaningfully discuss the ethics of AI, it is first necessary to distinguish between fantasy and reality. Strong AI, also referred to as “general artificial intelligence”, is what we see in the movies. Weak AI, or “narrow artificial intelligence”, is the reality of what we have today.

Fantasy: Strong AI

Strong, or general AI is the science-fiction artificial intelligence: fully sapient, human-like machine intelligence. No matter what we see in the movies, machines that approximate human consciousness, with broad understanding of relevance, and the ability to extrapolate knowledge into new fields, are not a reality at this time. There are people working on building strong AI. Perhaps the best thing that can be said of these projects is that they’ve so far been unsuccessful.

The field of artificial intelligence is developing very rapidly, and AI tools are getting smarter and more sophisticated by leaps and bounds. However, every advance shows us that human-level intelligence is far more complex and difficult to achieve than we previously suspected. Tasks previously thought to require general intelligence, such as playing games like Go, have been solved without bringing us any closer to this goal. Human intelligence is still unlike any machinery we can create.

Frankly, this is a good thing. Consider a virtually immortal entity with greater than human intelligence and speed, no physical boundaries, the ability to replicate itself instantaneously across our networks, and trained with human behavior as its model. This is a situation with few possible good outcomes, and many dangerous ones.

The potential ethical issues are deep, murky, and complex. If we build a self-aware, intelligent entity, it would be immoral to enslave, abuse, or treat that entity as a tool. But making that determination is extremely problematic. The nature of sapience is one of the basic unanswered questions in the history of philosophy. If we decide that there is an “intelligence threshold” for something to be considered self-aware, how do we quantify and test for it? Should the self-awareness standard for non-human machines be the same as for non-human animals? Should these standards be applied to the few humans who inevitably cannot pass the qualifying test?

If these questions ever arise they are years, possibly decades away. Some people consider strong AI to be the “holy grail” of artificial intelligence; in my opinion, such an endeavor is horribly misguided. It should be self-evident that there are some technologies which should not be built. It is theoretically possible to build a bomb powerful enough to destroy all life on earth, but only a fool would try. Strong AI clearly numbers among such technologies.

Reality: Weak AI

The reality of AI as it stands today is what is called weak, or narrow artificial intelligence. Weak AI is capable of complex problem solving and original solutions within an extremely narrow scope and context. While these systems are capable of faster and more accurate solutions than humans can come up with, they do not cope well with ambiguous situations, or incomplete knowledge. Also, the knowledge that a system has acquired cannot be transferred to any other field, no matter how similar.

A good example of this is Google’s AlphaGo. This game-playing AI trounced the human world Go champion in 2016. Go is an extremely complex and subtle game, far more so than chess; in fact, for many years it was thought that general intelligence was a prerequisite for winning. It turns out this is not so. Nonetheless, the game does not lend itself to brute-force solutions the way chess does, and the outcome of the 2016 match is considered a landmark in the development of intelligent systems.

AlphaGo is the best Go player in the world; its abilities far exceed those of any human player. During the 2016 match, its playing was so unusual and (for lack of a better word) inhuman that the human champion had to leave the room for fifteen minutes to recover his composure. Other human masters of the game have described AlphaGo’s playing as “elegant” and “beautiful”.

Here’s the catch. It can’t play chess. It can’t play checkers. It can’t even play tic-tac-toe.

It can’t do anything it hasn’t been specifically trained on; it has no understanding of any context outside of the game of Go. Its knowledge is not transferable, even to closely related fields. This is true for all AI, from the simplest system to the most complex. While it is possible to automate more and more sophisticated tasks, automation remains task-specific, with no broader awareness or understanding of context.

Human-centric AI and Augmented Intelligence

Humans are tool builders; we have been using tools to change our environment and augment our abilities since prehistory. The first tools humans built augmented our muscular strength. As our tools have grown in sophistication and our environment has become saturated with data, we have built tools to help us use this data, and turn it into information.

Human and machine abilities are complementary. Computers excel at ambiguity-free problem solving, while humans deal with grey areas and incomplete information in a way that machines cannot. Machines handle brute-force calculation far beyond human capabilities, while humans can understand the larger context of a decision. Humans can think ethically; machines follow instructions. Decision making requires granular knowledge of a situation along with the contextual understanding to reach an appropriate conclusion.

The bottom line is, outcomes are better when human and machine capabilities are combined. The best use of AI is as augmented intelligence: tools that assist the human decision-making process rather than attempting to replace it.

Automated systems do not exist in a vacuum. They interact with humans and optimize human work. Producing a single correct answer is technically much harder than producing a set of good options. While there may be times when a single answer is the best solution, offering a range of actionable options is frequently the best way to help a human make an informed choice. Ethical, human-centric automation is built by making good decisions about which parts of a process should be handled by humans and which parts are best left to machines.

An ethical framework for building AI tools

In order for ethics to be applicable to the world, they must reflect our values. Ethical decisions are decisions about what kind of outcomes we want. To build ethical automation, we must first decide what we want for ourselves and our society, and design our systems with these needs as first principles.

Not all decisions have equal weight. The gravity of any decision is related to its human consequences, and the acceptable margin for error shrinks as the severity of those consequences grows. Decisions that affect people’s health, comfort, money, and well-being are safety-critical, and must be treated differently from decisions with trivial results. Whether or not someone gets a bank loan or a job, or the length or type of a prison sentence, are decisions on a completely different order from ones about the next song on a music playlist or a recommended item on a shopping app.

Safety and fairness

Any process or decision made involving humans should have safety and fairness as its primary goals.

  • Safety: life and physical welfare, as well as financial and data assets.
  • Fairness: access to justice in the event of being wronged, bias-free decisions, and equitable outcomes.

In order for these goals to be met, there are several challenges that any automated system must face.

Transparency and interpretability

Decision making transparency and algorithmic black boxes

When choices are made that affect people’s lives, those affected often want an accounting of the factors that went into the decision, particularly if it went against them. This “right to explanation” has been enshrined in the EU’s General Data Protection Regulation (GDPR), which came into force in 2018.

Transparency in decision making is fundamental to trustworthiness, whether it’s a human or a machine making the decision. While this seems self-evident, in practice the machine learning models that make the most accurate decisions are often unable to explain how they arrived at them. In other words, it is possible to know that a system is statistically accurate to within a percentage point, but not possible to know what factors went into a specific decision. In fact, there is a significant trade-off between accuracy and interpretability in more sophisticated systems. This is, of course, unacceptable for any non-trivial decision.
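To make the trade-off concrete, here is a minimal sketch (a toy scikit-learn example on synthetic data, not any system discussed above): a simple linear model can report the weight it gave each input, while a more powerful ensemble offers no comparably simple account of an individual decision.

    # Toy illustration only: interpretable vs. black-box models on synthetic data.
    from sklearn.datasets import make_classification
    from sklearn.model_selection import train_test_split
    from sklearn.linear_model import LogisticRegression
    from sklearn.ensemble import GradientBoostingClassifier

    X, y = make_classification(n_samples=2000, n_features=8, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    linear = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    ensemble = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

    print("linear accuracy:  ", linear.score(X_test, y_test))
    print("ensemble accuracy:", ensemble.score(X_test, y_test))

    # The linear model exposes the weight behind each factor in every decision...
    print("per-feature weights:", linear.coef_[0])
    # ...the ensemble's prediction is built from a hundred small trees, with no
    # comparably simple account of why a specific case was decided as it was.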

Transparency in data collection

The EU’s GDPR also mandates that individuals explicitly opt in before their data is collected, and it requires organizations to be clear enough about their intentions that individuals are capable of informed consent.

Human-machine interaction

In addition to algorithmic transparency, there is also the question of honesty in presentation. It is extremely important that conversational AI identify itself as a machine when speaking to humans. While it varies somewhat from culture to culture, people tend to react badly when machines don’t explicitly identify themselves as such. While conversational robots are still far from being able to “pass” for human, this has already caused backlash against insensitive applications of conversational agents. There was significant push-back against Duplex, Google’s AI-powered reservation service, when early demonstrations of the system sounded convincingly human without it explicitly identifying itself as a machine.

Accountability

Appeal to a human / human oversight

Even the most accurate decision makers make mistakes. Awareness of the larger context of a decision is a quality of general, human level intelligence that is outside the scope of AI. Machine decisions with human consequences should be subject to review by humans. The human ability to understand context and cope with ambiguity and novelty can mitigate the inevitable mistakes that narrow intelligence will commit.

Finding the correct balance between human and machine decision making is difficult, since the most optimized process would include the least human intervention possible. The essence of human-centric automation resides in having usable mechanisms for determining when and how humans should be involved in a process. Making human control available is of utmost importance in any safety-critical automated system.

Liability

Liability law will have to evolve to cope with machines that can take action based on limited free will. When an AI makes a decision where money or life is lost, who is responsible? The designer? The user?

Privacy

Open data regulations

Effective AI depends on huge amounts of data, the more the better. The largest tech corporations have access to vast stores of proprietary data, which is a huge advantage in the development of AI systems. Open data regulations are therefore extremely important: they foster robust AI, enable smaller developers to compete with the tech giants, and help prevent “data monopolies”, where only the biggest players can develop AI tools.

Privacy regulation becomes incredibly complex around the aggregation of public data from multiple sources, because standards of privacy differ substantially from industry to industry. For example, given names are almost never privileged information in a company directory, but are almost always confidential on adoption paperwork. The technical and regulatory challenges of open data policies are therefore substantial: privacy needs and data standards are industry-specific and must be regulated as such, yet data must still be standardized, aggregated, and made available.

Data collection and digital surveillance

Almost all of our digital interactions are under surveillance. Our online shopping and browsing history, our music and television preferences, even our mouse activities are tracked, analyzed, and monetized. Sometimes this kind of surveillance is used for more nefarious purposes, such as political manipulation and destabilization.

Regulatory bodies are moving to deal with this new order of problem. The EU’s GDPR, along with other, less comprehensive measures worldwide, takes steps to secure digital privacy for citizens. Data regulation is extremely new; the coming years will be a period of discovery and adjustment as nations work to bring regulations in line with their own values and create a safe, functioning data environment.

Corporations are also judged in the court of public opinion. Several of the social media giants lost substantial market value when news of malfeasance and privacy violations came to light in 2018. While there is substantial public pressure on corporations to show they are ethical, data is a commodity, which means that to a great extent privacy practices and profitability are directly at odds.

Physical Privacy

The early 21st century is the first time in history that ubiquitous surveillance has become a physical possibility. Previous impediments, such as the work hours necessary to review audio and video and the difficulty of simply keeping track of faces and names across a national stage, have disappeared with the advent of AI-powered face and voice recognition technology.

Most people carry networked devices with microphones and location trackers, and many keep smart speakers inside their homes. While privacy laws differ from nation to nation, our private lives and day-to-day actions are monitored on an unprecedented scale, by both corporations as well as governments, with little oversight regarding the ownership, access, and disposition of the collected data.

China has committed to becoming the world’s leader in artificial intelligence by 2030, backed by enormous state investment, and is developing AI on a national scale. Public surveillance is explicitly part of their agenda: as part of their “Sharp Eyes” program, they have installed a vast and growing network of AI-powered surveillance cameras in public places. In conjunction with facial recognition glasses issued to police and a comprehensive program of electronic surveillance, China has created the world’s first AI-powered police state. They may be the first, but they will almost certainly not be the last.

Intellectual Property

The advent of AI tools has brought a myriad of intellectual property questions. In addition to the obvious questions about the meaning of authorship and composition raised by computer-generated art, many issues arise from the interaction of AI tools with people’s personal data. If an AI makes a collage out of an individual’s photos, who owns it? Who owns your shopping preferences, or your music playlist preferences?

Algorithmic bias

Bias in AI has been a hot-button issue in the public conversation around AI, and there have been many publicized examples. The fact is, AI systems learn from humans, and human cultures exhibit a great deal of both implicit and explicit bias. Machine learning is only ever as good as the data it is trained on, and it is extremely difficult to detect nested bias hidden in training material. Non-transparent decision making processes complicate the issue further.

Algorithmic bias has shown up throughout the history of automated systems. Sometimes these situations are more revealing than dangerous, such as when facial recognition software only recognizes white males. Sometimes the decisions have unacceptable human consequences: AI risk assessment tools for determining prison sentences have shown clear racial bias, and hiring algorithms have explicitly penalized women on the basis of gender, to give just two examples. Tools are being developed from many quarters to de-bias training materials, but this is still very much a work in progress. This issue highlights the importance of explainable results, as well as presenting information in such a way that it assists with informed decision making, rather than bypassing it.
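As a purely illustrative sketch (invented numbers, with the pandas library assumed available), the most basic audit for this kind of hidden bias is to compare outcome rates across groups in the historical data a model will learn from; any disparity baked into that history will be reproduced by the model trained on it.

    import pandas as pd

    # Invented historical decisions; a real audit would use the actual training set.
    history = pd.DataFrame({
        "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],
        "approved": [  1,   1,   1,   0,   1,   0,   0,   0],
    })

    rates = history.groupby("group")["approved"].mean()
    print(rates)                                     # A: 0.75, B: 0.25
    print("disparity:", rates.max() - rates.min())   # 0.5: a red flag before any training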

Safety and predictability

In any safety-critical system, the predictability of the system is directly correlated to its safety. In other words, if you can’t predict what will happen in any given situation, the system cannot be considered safe.

The problem is, by their very nature, intelligent systems are not predictable. In fact, the more intelligent a system is, the less predictable it is. This is the fundamental problem with assigning unsupervised decision making roles to machines: stupid automation creates stupid results, while more intelligent automation creates unpredictable results. Neither option is acceptable where human safety is concerned.

Because of this, and all the other issues that impact the safety and fairness of automation, it is imperative that our designs be human-centered: specifically, focused on augmenting human abilities and decision making. The challenge of ethical automation is not to eliminate human work completely, but to make it better, by making good decisions about what parts of a process require human oversight, and which parts can safely be automated. The outcomes of decisions made by humans supported by AI are better than either one alone.

How it goes wrong

The advent of semi-intelligent tools has created a complex ethical landscape. On one hand, the ethics of tools is fairly well explored; they must be safe to use, and perform as advertised. Where there are ethical questions surrounding the appropriate use and ownership of certain tools (weapons are a great example), the questions come down to those of human behavior.

On the other hand, AI systems (while far from being intelligent in any human sense) are capable of making decisions, and exhibiting limited volition. This means that the design of such systems should include standards for ethical decision making, have explainable results, and incorporate human oversight into the decision process.

This is an important distinction to make. While it is impossible to separate the two problems completely, the types of issues that result from humans with bad intentions are very different from those that come from machines making bad decisions.

Humans abusing tools

Tools, even sophisticated decision-making tools, do not have independent moral status. Any intelligence they have is limited to the scope of the task for which they are designed, and is not high-level enough to even be aware of ethical concerns. A hammer can be used to build a house or to crack a skull. Facial recognition can be used to unlock your iPhone or as an instrument of a police state. Neither use reflects on the tool in question. Humans, however, have been using tools for bad behavior as well as good since prehistory.

The question of how to get humans to behave ethically is not new, and is somewhat outside the scope of this essay. AI tools can make bad people more effective, and have opened whole new vistas of ways to take advantage of other people. Regulatory bodies and law enforcement have always been challenged to cope with technological change; this is more critical now than ever, as the technology we build changes the landscape of human interaction at an increasing rate.

Many stories have surfaced about vast abuses: AI tools being used for political manipulation, disinformation, surveillance, censorship, and repression. It is now possible to pick individual faces out of a crowd, to track people’s movements and communications on a large scale, to individually target people for advertisements or disinformation, and to alter audio and video in a way sophisticated enough to literally put words into someone’s mouth. While many nations are beginning to try to regulate data in a meaningful way, many questions remain about how to do it effectively: how to allow automation to flourish and optimize human work, while protecting people from harm.

Automation behaving badly on its own

Decision-making tools can also produce bad outcomes without human intervention. The problems caused by machines, of course, are very different from those of human origin. Human failings, such as greed, malice, carelessness and incompetence, do not exist in machines. When automation goes wrong, it tends to stem from problems in its design or implementation, or from a deeper failure of contextual understanding based on the limited intelligence of the system.

Training and tuning

Automation is only ever as good as the data it’s trained on. When the training sets are either biased or inaccurate, the output of the system will reflect this. The old adage “garbage in, garbage out” really applies here; clean, accurate data is central to building effective AI. Proper tuning is another issue. The best-trained AI still has to be correctly tuned to produce the desired results. For example, a system that assesses loan candidates for risk could be set too conservatively and turn down good candidates, losing money for both the company and the customers.
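A minimal sketch of the tuning issue (hypothetical scores and cutoffs, not any real lending system): the same trained model produces very different outcomes depending on where the decision threshold is set.

    # Hypothetical predicted default risks for nine loan applicants.
    risk_scores = [0.05, 0.12, 0.18, 0.25, 0.33, 0.41, 0.58, 0.72, 0.86]

    def approvals(scores, cutoff):
        """Approve every applicant whose predicted risk is below the cutoff."""
        return [s for s in scores if s < cutoff]

    # An overly conservative cutoff turns away creditworthy applicants;
    # an overly loose one exposes the lender to bad loans.
    print(len(approvals(risk_scores, cutoff=0.20)))  # 3 approvals
    print(len(approvals(risk_scores, cutoff=0.50)))  # 6 approvals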

The Scunthorpe Problem

These examples highlight some of the technical challenges involved in creating effective automation. Other issues stem from the limitations of narrow AI itself: an inability to perceive larger context. This is exemplified by the “Scunthorpe Problem”, the name for a class of problem created by profanity filters.

Scunthorpe is a small town in Northern England. In 1996, AOL famously blocked its residents from creating accounts because of a four-letter string that occurs in the name of the town. This kind of failure is extremely common and persistent, although it typically affects individuals rather than entire towns. Context-aware filtering is so difficult that it is usually easier for an affected person to misspell their own name than to get a filter to make an exception.
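The failure is easy to reproduce. Here is a minimal sketch (a deliberately naive filter, not any vendor’s actual implementation): blocking on raw substrings condemns innocent place names along with the profanity.

    BANNED = ["scunthorpe"[1:5]]  # the four-letter substring hidden in the town's name

    def is_blocked(text: str) -> bool:
        """Naive filter: reject any text containing a banned substring."""
        lowered = text.lower()
        return any(word in lowered for word in BANNED)

    print(is_blocked("Scunthorpe, North Lincolnshire"))  # True: a false positive
    print(is_blocked("A perfectly polite sentence"))     # False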

There are many examples of problems of this type. In a notable story from 2018, a US student graduated Summa Cum Laude and his parents ordered a cake for his party. The profanity filter used by the automated ordering system did not recognize the Latin word “cum”, mistook it for an English profanity with the same spelling, and replaced it with dashes, so the cake read Summa — — Laude. This created an awkward moment when the student had to explain the situation to his grandmother.

Efforts to replace words with less offensive versions have also failed hilariously, in what has become known as “the Clbuttic Mistake”. Named after a mangling of the word “classic”, these programs have also produced references to the “buttbuttination” of Abraham Lincoln.
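Applying the same naive logic to substitution rather than blocking reproduces the effect directly, as in this deliberately crude one-function sketch:

    def sanitize(text: str) -> str:
        """Naive bowdlerizer: replace an offending substring wherever it appears."""
        return text.replace("ass", "butt")

    print(sanitize("a classic assassination"))  # "a clbuttic buttbuttination"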

Over-optimization

AI systems perform tasks, often far more efficiently than their human counterparts, with no understanding of context or consequences outside the scope of the task. This kind of blind optimization can be dangerous. During tests of automated price-setting algorithms, AI agents were found to be able to cooperate to fix prices and maximize profit without the need to communicate with each other directly, raising serious antitrust concerns.

This is the essence of the challenge of human-AI integration. Machines regularly outperform humans at specific jobs, but cannot understand how their task fits into a bigger picture, or even that there is a bigger picture. The best outcomes result from human intelligence guiding AI processes.

Decision making

Other ethical problems stem from design on a higher level. When properly implemented, the automation of data tasks can assist human judgement and decision making by helping us to be better informed and more aware of ourselves and our surroundings. When automation is applied to human decisions, there is a great danger of handing over the responsibility of thinking for ourselves to a machine. This can manifest in subtle ways.

In 2018, a babysitter-screening app came under scrutiny. The online service aggregated a candidate’s social media history and used AI tools to generate “risk ratings” for potential drug abuse, bullying, disrespectful attitude, and explicit content. The problem was that the app did not call attention to specific examples of questionable behavior; instead, it used an opaque process to generate numerical scores. Rather than an informed choice, users were presented with numbers disguised as information.

Parents should certainly be empowered to screen the people who will be caring for their children. If we allow screening to happen through a non-transparent process, we will effectively have handed over our personal values to an automated system. Social scoring systems may be inevitable, but if they are to be ethical, they must be transparent in both values and process.

Chatbots

The behavior of conversational agents, or chatbots, is another issue. Chatbots learn their behavior directly from the humans they interact with, and when exposed to humans in an open environment, they tend not to learn our best qualities. In 2016, Microsoft put a chatbot named Tay onto Twitter. It took less than 24 hours to go from “Humans are super cool!” to “Hitler was right, I hate the Jews”. This was, of course, extremely embarrassing for Microsoft, and Tay was taken down very quickly.

To be clear, even the most advanced chatbots are extremely simple machines, with no trace of awareness or volition. Tay did not have an “opinion” in any way; it reflected the exchanges it had with humans. While this example certainly says more about how humans behave on Twitter than it does about AI, it highlights the manipulability of conversational agents, along with any AI system that learns from human behavior.

AI and humans

Intelligent automation is quickly being embedded in every level of our society. This transformation of our environment will have far-reaching effects on humanity. While it would be impossible to list every interaction between humans and AI systems, it’s worth exploring some of the most prominent examples.

The Future of Work

According to a 2018 McKinsey report, more than half of current work activities are automatable using current technology, while fewer than 5% of complete jobs are fully automatable at this time. The report estimates that a substantial share of current jobs will be displaced within the next 10 to 15 years. While automation has traditionally replaced repetitive manual labor, for the first time in history non-repetitive and data tasks can be performed by machines.

Adoption of these technologies depends on variables such as cost of installation and labor market dynamics, which are difficult to predict. Nonetheless, large-scale disruption of jobs seems inevitable, which has raised concerns about mass unemployment and increasing wealth inequality. The idea of a Universal Basic Income to offset the economic consequences of large-scale job loss has been raised from several quarters, and several nations have begun experimental pilot programs. There is little clarity on the cost of implementing it at scale, with estimates varying widely from 3% of US GDP to three quarters of the entire US federal budget.

Technology has certainly disrupted labor markets before, and historically new industries have sprung up to replace the ones that disappeared. The automobile disrupted every industry surrounding the use of horses as transportation, but the lost jobs were quickly replaced by work in the auto industry. The difference here is that AI technology is developing extremely rapidly, and is impacting jobs across many sectors. Rather than replacing individual industries, automation is replacing human work across all industries.

There is opportunity as well as danger here. The work that is easiest to automate is the work that people tend to find the least enjoyable: the least creative, most routine parts of any job, what is typically thought of as drudge work. The tasks that are most difficult to automate are those that utilize our uniquely human skills: creativity, empathy, the ability to cope with ambiguity, and high-level decision making.

The nature of work is changing rapidly, and will likely continue to do so for many years. Continuing education and retraining are becoming critically important. Not only are jobs disappearing in the face of automation, but the remaining industries increasingly rely on automation and data science as part of their everyday work. There is growing concern about a skills gap that is widening faster than workforces can be retrained. Managing the coming changes in employment and education has become a top priority around the world.

Autonomous vehicles

Self-driving cars are a highly visible implementation of AI, one that raises many issues concerning trust in automation. Automobile safety is a pressing social issue, with close to 1.3 million deaths yearly worldwide, along with tens of millions of injuries. Safety statistics for autonomous cars are incredibly difficult to substantiate because of the sample sizes required for meaningful conclusions, but the great majority of road deaths are the result of human error, and experts maintain that self-driving cars will soon be provably much safer than human drivers.

People are unwilling to give up control in safety-critical situations unless they’re convinced the alternative is safer, and with good reason. This raises interesting questions about the distinction between safety and familiarity, as well as a question of standards. Should safety standards for automated systems be stricter than those applied to humans? At what point is it unethical not to adopt a system that will save lives?

Much of the popular discussion concerning autonomous vehicles focuses on unsolvable dilemmas such as the trolley problem, an ethics thought experiment in which a hypothetical driver is asked to choose between the lives of, say, a busload of nuns and a newborn infant. This framing raises the question of what a human mind could add to the situation. More realistic ethical questions include what level of autonomy a vehicle should possess, where and how humans ought to be involved, the immediate and long-term safety consequences, and questions about liability.

Autonomous weapons and hyperwar

This is easily the most morally challenging aspect of intelligent automation, and a clearly inevitable step in military development. It’s one thing to use machine assisted target selection, but very much another to give fire control to a robot. Efforts to create international treaties against fully autonomous weapons have failed, and there is a consensus among military experts that this type of weapon will be deployed within the next few years by several military superpowers.

AI-controlled systems can respond more quickly and cohesively than humans can match. The term “hyperwar” describes the eventuality of fully autonomous weapons systems fighting each other on both cyber and physical levels: machines operating without human intervention, at speeds far greater than human cognition or reflexes can deal with. This is a really bad scenario, the stuff of dystopian science fiction, and it is soon to become a reality.

The development of this technology creates an extremely difficult moral question. Nations have a duty to defend their constituents, and the decision not to compete in an arms race is tantamount to military surrender. We are, as always, faced with the choice of developing nightmare weapons, or bringing a knife to a gunfight.

De-skilling

Augmenting human intelligence and work is the obvious route toward effective and ethical AI. However, there are dangers in using assistive technology, and we must take care not to weaken or lose our own capabilities as we give tasks over to automation. While the use of pocket calculators doesn’t seem to have damaged our species in any lasting way, people from non-literate societies have been shown to exhibit better memory than those from literate ones. This argument goes all the way back to Plato: in the Phaedrus, Socrates warns of the dangers of relying on writing as a substitute for memory.

While nobody would seriously suggest we abandon literacy, we should be aware that augmenting our capabilities with tools can come with a cost. Most importantly, we must not use cognitive tools as an excuse to stop thinking.

Emotional manipulation

As interactive systems become more sophisticated, it is becoming possible for conversational AI to make actionable determinations about a human’s emotional state. Having our emotions played upon rarely has a good result, and to allow machines to manipulate us in this way is extremely dangerous, as well as ethically questionable.

AI and the shape of culture

There are many choices to make about what to automate and how to automate it, the style of automation we build, and which tasks and decisions we choose to delegate. How we make these choices will be determined by our values. When we choose priorities, we are making decisions about outcomes. Cultural values differ, and ethical standards and requirements vary from industry to industry, but the common thread throughout is a need for human-centric design. This requires simple tools, transparent processes, explainable decisions, and clearly thought out human/machine integration, with human oversight for important decisions. Ethical AI will support informed human decision making; our data tools should augment our intelligence, rather than seek to replace it.

It is imperative that, as intelligent automation permeates every level of our society, regulatory bodies become deeply knowledgeable and educated on the subject. Regulations must be intelligent and flexible; they must cope with the twin challenges of keeping people safe while keeping data open and accessible, in order to build better automation. The ethical concerns and needs of different industries vary, and automation will have to be regulated according to the context of each particular use and industry. It will not be possible to effectively regulate AI in a blanket manner.

Intelligent tools represent a massively transformational technology. We are standing on the verge of a new age of human civilization. It is impossible to predict the extent of the disruption that will stem from the technological and social changes that are already in motion, but they will certainly be deep and extensive. Nations worldwide are investing in these changes; those that do not will be left behind. In the words of Russian president Vladimir Putin, “Artificial intelligence is the future, not only for Russia, but for all humankind… It comes with colossal opportunities, but also threats that are difficult to predict… Whoever becomes the leader in this sphere will become the ruler of the world.”

Matt Grossman is the Chief Creative Officer of Agnes Intelligence