Drawing the Ethical Line on Weaponized Deep Learning Research


Good AI and deep learning researchers bring a great deal of passion to their work. Although they may rarely reflect on the ethical consequences of their creations, there is a belief, even an expectation, that whatever they create will lead to the greater good, even if that ‘greater good’ is a nebulous notion that perhaps means “leave the world in a better place”.

For those who have spent considerable time contemplating the consequences of AI (or Artificial General Intelligence), the endgame scenario that leads to a ‘better place’ (as opposed to the alternative, apocalyptic scenario) reveals many divergent futures. One would think humans would have a consensus on the “better place” future, but they do not. Jurgen Schmidhuber’s “better place” is one inhabited by an advanced intelligent species very different from humans. Elon Musk’s “better place” is populated by human cyborgs in a world of AI. Ray Kurzweil’s “better place” is populated by immortal humans. Gene Roddenberry’s “better place” is a world without money.

A majority of humans are genetically predisposed to strive for the common good. This is likely a product of natural selection as a consequence of the existence of civilization. Civilization has the curious side effect of ensuring that bad actors are exterminated or relegated to the ‘dust bin’ of history. Society remains relatively free from chaos despite the reality that the act of destruction is far easier to perform than the act of creation. Our ambition to make a mark on history is subservient to the need to do good. We don’t need to delve into the ethics of this, since it is rooted in our biology. George Lakoff (co-author of Philosophy in the Flesh) remarked that philosophy is meaningless without taking our own biology into consideration:

In short, philosophical theories are largely the product of the hidden hand of the cognitive unconscious.

We collectively have an “intuitive” understanding of the greater good despite holding divergent views of what it actually means. For the purpose of this discussion, let’s treat this as an axiom that we don’t need to explore further.

Alfred Nobel invented dynamite, but he is best known for the Nobel Prize. A premature obituary, published while he was still alive, carried the headline “The merchant of death is dead” and declared that “Dr. Alfred Nobel, who became rich by finding ways to kill more people faster than ever before, died yesterday.” Before his actual death, Nobel wrote a will directing the majority of his wealth into a trust that funded the Nobel Prize. (It appears that reading his own obituary had an effect on him.) No AI researcher would relish an obituary like Nobel’s.

Almost every technology can be a double-edged sword (or have a “dual use”). A recently released 100-page report on the malicious use of AI states:

Researchers and engineers in artificial intelligence should take the dual-use nature of their work seriously, allowing misuse-related considerations to influence research priorities and norms, and proactively reaching out to relevant actors when harmful applications are foreseeable.

The report explicitly recommends that AI researchers, as part of their due diligence, explore the potential misuse of their creations and proactively communicate (to those who need to know, not to everyone) the foreseeable harmful effects of their work. It demands that researchers be fully aware of the immediate consequences of their work. This is a good responsibility to bake into any Hippocratic oath for AI research (i.e. “Do no harm”). Oren Etzioni proposed the following version of the Hippocratic oath for AI researchers:

I swear to fulfill, to the best of my ability and judgment, this covenant:
I will respect the hard-won scientific gains of those scientists and engineers in whose steps I walk, and gladly share such knowledge as is mine with those who are to follow.
I will apply, for the benefit of humanity, all measures required, avoiding those twin traps of over-optimism and uninformed pessimism.
I will remember that there is an art to AI as well as science, and that human concerns outweigh technological ones.
Most especially must I tread with care in matters of life and death. If it is given me to save a life using AI, all thanks. But it may also be within AI’s power to take a life; this awesome responsibility must be faced with great humbleness and awareness of my own frailty and the limitations of AI. Above all, I must not play at God nor let my technology do so.
I will respect the privacy of humans, for their personal data are not disclosed to AI systems so that the world may know.
I will consider the impact of my work on fairness both in perpetuating historical biases, which is caused by the blind extrapolation from past data to future predictions, and in creating new conditions that increase economic or other inequality.
My AI will prevent harm whenever it can, for prevention is preferable to cure.
My AI will seek to collaborate with people for the greater good, rather than usurp the human role and supplant them.

One key part of this oath is that researchers must be aware of whether their research can lead to AI that can ‘take a life’ (should this be confined to human life?). One can contribute to military technology, but only to technology that saves human lives. Should that, therefore, be the line drawn in military research: that military research should strive towards defensive AI and never offensive AI?

There is now growing activism in the AI community against any research related to weaponized AI. Researchers have threatened to boycott KAIST (a South Korean university) over its AI work with a military contractor. KAIST, however, quickly responded to the threat with the following argument:

[the research lab had] no intention to engage in development of lethal autonomous weapons systems and killer robots.
the research lab would instead be using artificial intelligence for navigating large unmanned undersea vehicles, training ‘smart’ aircraft, recognising and tracking objects and “AI-based command and decision systems”.

The problem with the KAIST argument is that it does not explicitly rule out offensive weaponry. Although the boundary between a defensive weapon and an offensive weapon can be blurry, it should be the responsibility of every AI researcher to explain the offensive capabilities of their research. It should not be left to outsiders to figure this out.

As an example, defensive weaponry such as the Patriot missile system, which can recognize and track incoming ballistic missiles, appears to be ethical within the scope of the above Hippocratic oath, despite a Patriot missile carrying explosives and thus being weaponized. However, AI-driven navigation of an undersea vehicle is problematic if that vehicle has offensive capabilities. Working in a military establishment should not, by itself, be regarded as an unethical endeavor from the perspective of AI research. However, knowing that your AI research leads directly to an offensive capability perhaps crosses a clear ethical line. It is similar to the ethical question of whether a doctor should be directly responsible for euthanasia or capital punishment.

This brings up the question of Google’s participation in the US military’s Project Maven. Many at Google are upset about this and have petitioned the company to withdraw from the project (see the comments for the many sides of this debate). The petition demands that:

Google draft, publicize and enforce a clear policy stating that neither Google nor its contractors will ever build warfare technology.

Is all warfare technology off limits, or only technology that enables lethal automation? Is the analysis of imagery for potential UAV targeting within the bounds of the above Hippocratic oath?

Researchers must be able to explore warfare technology of the defensive kind. I would rather see defensive AI technology hold superiority over offensive AI technology. We cannot prevent other nation states from developing their own offensive AI technology, so we should not discourage AI researchers from exploring defensive technologies. There are many medical researchers working within the military. Their work does not violate the Hippocratic oath they have sworn to uphold; their research is focused on saving lives. In the same vein, AI researchers employed by the military should not be ‘persona non grata’ simply because of their association with ‘warfare technology’.

An AI boycott of any “warfare technology” is likely to be extremely effective in dissuading top research institutions and researchers from participating. Unfortunately, the broadness of the term “warfare technology” is going to have a detrimental effect on our own security. No AI researcher would ever want the title of “The merchant of death” in their obituary. Our society needs further debate on this subject before it degenerates into an all-consuming lynch mob.

Further Reading

Explore Deep Learning: Artificial Intuition: The Improbable Deep Learning Revolution