Graphic by Mike Petriano

The Ethics of Defense Technology Development: An Investor’s Perspective

Trae Stephens
Dec 4, 2019

A great deal of ink has been spilled on activism inside tech companies regarding the ethics of doing business with the defense community. Much of the current debate within the U.S. technology community tends to frame the issues in binary terms: technologists have a choice — to participate in the “business of war” (as a recent letter by Google employees put it) and become complicit in war’s consequences, or to withdraw from work with the defense community completely. The tech community’s ambivalence regarding the defense sector is understandable. In the post-Cold War geopolitical environment, our collective memory of conflict (to the extent there is one) has too often been characterized by recent experiences of failed nation-building, unmet aspirations for globalization, and the rise of populism.

The current debate frames a false choice, however. It is clear that emerging technologies will force a shift in the way wars are initiated, waged, resolved, and most importantly, deterred. The technology industry cannot divest from the defense sector without ceding ground to our adversaries, who are accelerating their investment in their own defense technologies. American technologists should not view these adversaries as our moral equivalents: Governments such as the Chinese Communist Party or Vladimir Putin’s Russia operate illiberal and closed societies, and in the past decade, they have leveraged these same technologies to annex territories, interfere in global conflicts (and domestic elections), and systematically subjugate dissidents and minorities. These governments also require their technology industries to support their illiberal ambitions. This is what makes America different — and better. American technologists are not required to work on behalf of their nation’s defense, but in choosing not to do so, they must recognize that they are ceding an advantage to illiberal rivals and putting the very freedom and openness that they cherish at risk.

The critical question is not whether the technology industry should participate in the defense sector, but how we do so ethically. This question has consumed me for more than a decade. I have used and built defense technologies, and in recent years, I have directed national security technology investments at a leading venture capital firm, Founders Fund, where the ethical standards of the products, companies, and founders that we invest in are critical considerations.

The overriding lesson that I have drawn from these experiences is that responsible technological investment in the defense sector is critical to our ability to deter unnecessary wars, and make the conduct of wars that we do wage more ethical and consistent with our values as a free and open society.

The technology of warfare, like any other technology, can be misused and abused. But it can also be designed and used to deter violence, to reduce casualties and the loss of life, to mitigate bias and misuse, to enforce accountability in the use of force, and, when violence is necessary, to make it quick and precise, leading to a rapid victory rather than an extended conflict. The technology industry can and should play a critical role in achieving these outcomes.

The role of technology in making warfare more ethical is an important topic that has not received the nuanced and substantive coverage it deserves outside of the defense sector. It is also a topic to which humans have given considerable thought through the ages. The ethics of war and some of the key frameworks from that literature can provide critical guidance for how we should think about new challenges posed by emerging technologies and how we should ethically develop, invest in, and utilize these capabilities for defense and national security purposes. Indeed, the principled framework of Just War Theory provides a structured roadmap for thinking about the ethical issues raised by the intersection of emerging technology and national defense.

Frameworks established by Just War Theory first emerged thousands of years ago and have been discussed and debated at great length by philosophers, historians, and theologians of nearly every faith tradition in the years since. Concepts emerging from this dialogue have been adopted and codified in national laws around rules of engagement, such as the US Law of War, in international treaties, such as the Geneva Conventions, and in the founding documents of international organizations, such as the UN Charter.

While a variety of concepts and naming conventions exist inside the Just War tradition, I will focus here on four organizing principles: The Principle of Last Resort, The Principles of Discrimination and Proportionality (grouped together), and The Principles of Just Authority & Right Intent. This overview is intended as a catalyst for further, and much needed, discussion on this issue within the technology and defense communities, rather than a definitive treatment of the questions.

The Principle of Last Resort

“What is the evil in War? Is it the death of some who will soon die in any case, that others may live in peaceful subjection? This is mere cowardly dislike, not any religious feeling. The real evils in war are love of violence, revengeful cruelty, fierce and implacable enmity, wild resistance, and the lust of power and such like.” (Against Faustus, St. Augustine of Hippo)

“Ohhhh. Great Warrior. Wars not make one great.” (Yoda, The Empire Strikes Back)

War should never be desired, and the loss of any single human life is regrettable and should be avoided if at all possible. There are various mechanisms available to exercise this preference: diplomatic intervention, punitive economic policies (embargoes, tariffs), pressure from the international community, and deterrence. Advanced technology can be a tremendous asset in avoiding and deterring unnecessary war.

For most of modern history, the threat of punishment imposed by one or more sovereigns for violations of that sovereignty, or grievous offense to moral or international law and norms, has served as a deterrent against bad action. This system worked reasonably well, but only in the case where a massive power imbalance existed.

The advent of nuclear weapons affected this historical calculus more than any other development to date. During the Cold War, punitive nuclear deterrence broadly assured that the world would not slip into global-scale conflict akin to the world wars. The costs were too high and the threat was believable. Proxy wars were waged, but the US and Soviet Union were generally mindful of — and avoided — any conflict that might incite nuclear war.

But deterrence provided by nuclear weapons has eroded greatly since the end of the Cold War. The threat of nuclear retaliation is not especially believable, particularly in a world in which numerous smaller powers have acquired their own nuclear capabilities and mutually assured destruction seems to be the most likely outcome of any significant engagement.

As part of this transition out of Cold War engagements, the United States began investing heavily in the development of conventional capabilities that could play a role in continued punitive deterrence, including precision-strike weapons, stealth technologies, and unmanned aircraft. This worked for a time, as can be seen by conflicts with the Taliban, Hussein, and Milosevic. But, over the past decade or so, the deterrent impact of America’s lead in conventional precision-strike warfare has similarly eroded as China, Russia, and increasingly Iran and North Korea, have invested enormous time and money in developing their own conventional capabilities to match ours. Innovative autocratic regimes have simultaneously adopted strategies of moving quickly and deniably to achieve their objectives before the US and our allies can respond militarily. This tactic was clearly on display during Russia’s invasion and occupation of Georgia and Ukraine, which hardly evoked a response from the international community.

Technology development will play a critical deterrent role in a post-nuclear society, both as a neutralizer of nuclear and conventional threats as well as in shifting the cost of conflict to the aggressor.

There is a long history of investment in anti-ballistic missile programs by the US and the Soviet Union, most of which have had modest results. These and related technologies are a critical area of continued investment, if only to ensure that we maintain the peace-preserving balance between nuclear powers that currently exists. Similarly, we need to be thinking critically about how to defend our military and our allies against large salvos of conventional munitions, unmanned aircraft, hypersonics, and the like. In the coming years, it will be incredibly important to develop improved early-warning systems, directed energy missile defense, hypervelocity defensive systems, sensor fusion and situational awareness, sense-making artificial intelligence, and automated command & control. For instance, the recent destruction of Saudi oil facilities (reportedly by the Iranian regime) could have been deterred or avoided if US allies had effective counter-cruise missile, counter-drone, and related technology to stop the attack. These are technologies worth investing in.

These defensive technologies also increase the cost of conflict by making it significantly harder to inflict violence on the US or its allies without being caught or suffering significant losses in the process. One can imagine that the deployment of Russian troops to Ukraine, Crimea, Abkhazia, and South Ossetia, or the attack on Aramco, would have been more difficult if our allies in the region had been armed with more defensive capabilities — such as those mentioned above — which shift a greater share of the cost of the conflict onto the invading force. This movement towards deterrence by denial rather than punishment is drawn out in great detail by Congressman Mike Gallagher in a recent piece in Washington Quarterly.

In similar fashion, numerous non-lethal and non-kinetic technologies continue to be developed that allow for improved conflict de-escalation, thereby making physical violence even more of a last resort. These capabilities (such as electronic warfare and cyber) tend to be more covert and deniable as well, which can make it easier for states to respond to aggression or deter it in advance through means that clearly convey intent to the aggressor but still allow them to off-ramp and save face. As militaries become more defined by information technologies and AI, these kinds of capabilities will become all the more important in future war.

The Principles of Discrimination and Proportionality

Where the use of force is necessary, Just War requires that it be deployed ethically and consistently with the principles of discrimination and proportionality. Discrimination concerns who is considered a legitimate target in war; proportionality concerns how much force is morally appropriate. These two principles are often discussed separately, but in the context of technology development it makes sense to discuss them in concert with one another. Defense technology can play an important role in enforcing these principles.

Historians of war note that weapons development has been focused on increasing the size and scope of destruction:

One-to-one: swords, knives, arrows

One-to-some: cannons, trebuchets, catapults

One-to-many: explosive ordnance, chemical and biological weapons, nuclear weapons

These early developments (i.e., transitioning from fists and knives to nuclear weapons) came largely at the expense of both discrimination and proportionality.

More recently, however, military technology development has focused significantly on a return to more precise and discriminate targeting:

One-to-some / some-to-some: precision-fire weapons, long-range IR target-finding gunships, small diameter bombs, battlefield awareness software, high-resolution imaging, autonomous unmanned aerial systems, computer-guided counter-missile/rocket platforms, AI-powered warning systems

This represents a huge shift in warfare and defense. Technology has the potential to lead to a significant reduction in the loss of innocent life through highly precise (discriminating) and targeted attacks and for these surgical operations to reduce the need for massive indiscriminate (disproportionate) strikes.

The ethical use of AI in defense has been a source of ongoing debate in the national security community. This debate is critical and should be continued (my preliminary views on how the debate has been framed are expressed here, and the Defense Innovation Board released a brilliant report diving into the specifics, which can be accessed here). While we must continue to apply close scrutiny to questions such as accountability and transparency in any military use of AI (and related questions regarding whether/when a human should be “in the loop” in such systems), we should also not lose sight of the ways in which AI can be deployed to make warfare more ethical. This shift is not wholly new. Continued improvement of AI for national security extends the decades-long trend toward precision weapons and will contribute to moving us away from the indiscriminate bombing campaigns that were the hallmark of warfare for the entirety of the 20th century.

Similarly, the use of autonomous robots or drones to investigate operational targets prior to a breaching event reduces the possibility for collateral damage to innocent civilians, while simultaneously protecting our men and women in the armed forces from unknown threats. And the development of next-generation Battlefield Management and Command & Control capabilities can allow for increased situational awareness for commanders in the field, augment human senses to provide better warnings and targeting for our soldiers, and help in preventing tragic friendly-fire accidents.

We need more investment in technology that confines warfare to combatants and reduces harm to civilian populations and property, not less. At the same time, it is also important to recognize that reduced cost (measured by human lives, natural resources, and/or financial resources) should not be used as an excuse or rationalization for increased frequency of action.

The Principles of Just Authority and Right Intent

“And thus the commonwealth comes by a power to set down what punishment shall belong to the several transgressions which they think worthy of it, committed amongst the members of that society, (which is the power of making laws) as well as it has the power to punish any injury done unto any of its members, by any one that is not of it, (which is the power of war and peace;) and all this for the preservation of the property of all the members of that society, as far as it is possible.” (Of Civil Government, John Locke)

Once it has been determined that the use of force is merited against an unjust adversary, Just War Theory dictates that a just authority must be invoked prior to engaging in combat. Technology can play an important role in facilitating Just Authority.

For instance, in recent years, autocratic regimes have increasingly used advanced information technology to achieve their ends. They have done so by using these technologies as an offensive tool — through cyber attacks, leaks, and misinformation campaigns — to harm our interests and those of our allies. They have also used technology defensively — through censorship, surveillance, and repression — to reinforce their own power, which can often only thrive inside of information vacuums they have intentionally created. Examples of the use of misinformation and censorship by these regimes include the Russian denials of incursions into both Ukraine and Georgia, the Great Firewall in China, El Paquete Semanal in Cuba, truly remarkable anti-US propaganda in North Korea, and the long-standing nuclear shell game in Iran, among others.

While technology has been used as a powerful tool of repression, it can also be used by the US and its allies to counter information vacuums and establish ground truths in the face of misinformation. Better information makes it more difficult for repressive regimes to operate with impunity, and puts the US and its allies in a better position to pursue multilateral, diplomatic, and non-military options and avoid unnecessary conflict. Critically, improved information also makes it more likely that US policymakers will make better and more ethical decisions regarding the use of force in any given circumstance.

For example, improved sensor and surveillance technology can make it less likely that we will enter a global conflict based on flawed intelligence. The same goes for technologies permitting effective forensic analysis of captured materials (including documents, photos, videos and hard drives). Similarly, the use of autonomous surveillance platforms, stealth aircraft, and advanced space-based surveillance allows the US and its allies to understand the actual threats posed by autocratic regimes, to minimize false positives/negatives, and to respond proportionally and effectively when real threats are detected.

Further, this emerging “information advantage” is forfeited if we don’t simultaneously develop capabilities to make sense of it all. Too often we fail to make use of this data because our means of analyzing the deluge of new information are old, slow, and ineffective, leaving our soldiers and leaders without the right information at the right time when making life-and-death decisions. This is a critical area of technology development that is deeply shared with the private sector.

Surveillance and forensic technologies, like all technologies discussed here, also carry a risk of misuse, particularly when deployed domestically, and their use must be monitored and regulated appropriately. But when used responsibly, they can and will prevent unnecessary conflict and, when a military course of action is required, will assist in developing broader support from the international community (a key criterion for Just Authority in the modern era).

While there are significant policy questions relating to the use of AI in battle, Artificial Intelligence can also be deployed in ways that address concerns around Just Authority and Right Intent. A computer’s actions can be monitored and analyzed, its programmed biases corrected, and constraints and limits can be placed on its actions. In the best case, the role of AI in the battlefield will be to implement the terms of engagement set by humans — who are ultimately accountable for its actions — while allowing computers to make sure those terms are carried out systematically and precisely and in ways that limit and reduce unnecessary harm.

With that said, the principle of Just Authority depends critically on thoughtful policy and open communication around the development and use of defense technology. When democratically-governed people take action against our adversaries, we trust our elected representatives to make truly representative and just decisions. But state sovereignty is not “morality-free”. It is possible that some legal actions are immoral and that some moral actions are illegal.

Stated less generally, the United States in particular is not beyond reproach, and we should be constantly engaging in a dialogue around how to better ourselves and our institutions. The government we have in place allows for that bettering on a regular basis, locally as well as federally, via election cycles and baseline protections for all citizens. We also have an all-volunteer military that acts according to the guidance of our civilian elected government. These benefits of democratic governance should not be forgotten or taken for granted.

Notably, the very same technologies that could be used justly toward a better and more humane application of Just War theory could just as easily be used with bad intent and in bad faith by actors operating from an ethical framework different from the one we are accustomed to. It is therefore critical, particularly in Western democratic societies, to put in place clear policies and practices for the ways in which these technologies can be morally and justly used.

Conclusion: If you want peace, you must prepare for war

Many in Silicon Valley hold the mistaken belief that if they and their counterparts withdraw from defense or weapons work, they can force a stoppage and bring about a peaceful equilibrium. There is a fundamental consideration that has been too little covered in this debate, however: What are the moral consequences of societies rooted in a Just War tradition refusing to invest in sophisticated defense technologies while authoritarian regimes invest aggressively in their development?

It is common in the modern era to see new military technologies emerge prior to the advent of standards and norms to govern them. This has been the case with nuclear weapons, chemical and biological weapons, satellites, and certain types of ballistic missiles. Instead, these standards and norms are reached after the fact, via agreement among the greatest powers who wield the technologies.

History has shown that those who lead in new technologies of defense will set the ethical norms and standards that govern those technologies. Those who lead from behind will be subject to those norms.

In debating the value of our investment in defense technologies, we cannot ignore the fact that if we allow others to build these technologies while we stand idle, we will lose the power to regulate their use, we will allow aggressive autocratic regimes to take the lead, we will voluntarily limit our power to deter harmful conduct (including genocide, repression, and interference with international norms), and we will cede to the most belligerent and authoritarian states the power to impose insidious legal and moral standards on the US and its allies without consequence.

This is happening presently inside of the United Nations’ International Telecommunication Union (ITU), where China is hard at work leading the development of regulatory standards for the use of facial recognition technologies, which it has already exported to dozens of countries around the world. This is not a hypothetical future threat — it is a real and present one.

It is critical that democratically-elected governments with a firm respect for Just War frameworks be the leaders in the development of these technologies and that we trust in the strength of our institutions to act justly. A just society must follow the ancient Roman adage: if you want peace, you must prepare for war. It must prepare its capabilities to deter the aggressor and maintain the balance that will keep the peace. And it must use its technological and military advantage to protect the values and freedoms we all hold dear, and to which so many nations aspire.

Trae Stephens

Trae Stephens is a Partner at Founders Fund, where he focuses on startups operating in the government space.