A New Politics of Innovation
We’ve seen tremendous advances in science and technology. Now we need to debate their ethics.
Our dynamic modern world constantly presents new challenges. I’ve been exploring some of the issues that will shape the future of our politics. Now I turn to upcoming debates about the ethics of science and technology.
The past century has been one of the most productive periods in medicine. Our ability to prevent and treat disease has drastically improved, and life expectancy has significantly risen. We are learning more powerful ways of manipulating biology to learn about the body and to develop new treatments. But with power comes responsibility.
Medicine has always faced difficult ethical dilemmas. In most cases, they have to do with trade-offs between the benefits and the risks of a certain treatment. For example, doctors have been able to keep patients alive on ventilators and bypass machines, sometimes for long periods of time. But at what cost? When does the suffering outweigh any benefits? This kind of dilemma is difficult, but is normally handled by doctors and their patients on a case-by-case basis. It won’t necessarily invite any big public debates or policy changes.
But now there is a new kind of dilemma that should get public attention: the ethics of gene editing. Due to developments like CRISPR, lab researchers can accurately edit the genomes of many organisms, including humans, and they can do so at very low cost. Not too long ago, the technique was used for the first time in the US on a human embryo. Many researchers dream of curing genetic diseases simply by editing them out. This is extremely exciting for science, but it’s uncomfortable for the public. Is gene editing ethical? Are we allowed to modify our children’s genes in order to give them a better life? Are we allowed to edit them at will, or only for certain reasons? If there is some threshold, where should it be?
The answers we choose could have enormous implications for our future. Gene editing could become a life-saving treatment. It could be a new way to control our children’s traits. The most extreme case might resemble the test-tube babies of Brave New World.
Given these stakes, the issue deserves public debate. It could be liberals who advocate for greater use of gene editing, and conservatives who argue against it on ethical grounds. Perhaps, before entering mainstream politics, the issue will spark large movements (remember GMOs?). Right now, there isn’t much discussion and there certainly isn’t an answer. But we should start the conversation as soon as possible.
Perhaps the biggest invention in recent times has been the internet. We can now communicate nearly instantly, covering enormous distances and reaching great numbers of people at once. Huge amounts of information circulate. But a lot of the information is private, posing a big privacy and security challenge. The public is starting to worry that their data can be shared too easily, whether by tech companies, the government, or malicious attackers.
The problem is worsening, but we have yet to make real progress. Consumers of internet products often ignore the fact that their behavior will be monitored. Perhaps this is because we don’t know when we’re being monitored, or who or what is watching. But we have seen multiple prominent examples of sensitive information being taken — Snowden’s NSA documents and the activities they detailed, Russia’s cyber-attacks during the election, and most recently, the Equifax breach — and yet there isn’t much being done to respond.
Internet privacy is hard because the internet is not like real life. In real life, we rarely worry about companies keeping track of their clients and recognizing us when we come in the door. The few companies that do have no need to share that information with anyone else. When we walk around in public, strangers normally have no idea who we are. If anyone learns substantial information about us without asking, it’s because they’re stalking us, which is illegal. In contrast, nearly everyone on the internet can be tracked through an IP address (unless they use Tor or a similar anonymizing network). We are walking ID numbers; all anyone has to do is keep track. Since being observant was never a crime before, it’s hard to call it one now.
This is only the tip of the iceberg. In a lawless place like the internet, how do we enforce the rules? Should the government be trusted with wide powers of surveillance, hopefully to protect our national security, or should it be subject to the same laws as the rest of us? Is surveillance a constitutional or legal issue, or just an ethical one?
We need to start talking about this. One way to bring it up is to focus on the national security issue, especially given recent relations with Russia. It’s one thing for a company to compile data, and quite another for foreign officials or criminals to steal it. Do we really want all our internet services collecting huge volumes of user data, ready to be stolen by anyone with the right set of skills? Maybe this threat will get discussion going. Either way, the issue can’t stay in the dark for long.
If the internet has been our biggest invention so far, it may soon be eclipsed by AI. And once again, technological progress will lead to unprecedented ethical issues. In this case, they will be about power: as AI learns increasingly impressive skills, we will need to decide who should wield its powers — and how much control to cede to the technology itself.
Like the others, this issue is tougher because of its uncertainty. Most people agree that AI could become vastly more powerful than previous technologies, perhaps even outdoing the human brain. But that’s all we can agree on. Researchers argue about when AI might surpass human intelligence, how it will do so, and whether such a change will be beneficial, devastating, or somewhere in between. We seem to be a long way from superhuman AI, but plenty of people are already panicking at the thought of it.
It isn’t surprising that so many companies are working towards superhuman AI. It’s in line with our normal approach to technology, which is to design it to help as much as possible. If there is a faster way, we make it. If there is an easier way, we make it. Naturally, once there is a way that’s even better than a human, our instinct is to make it. And unless we reach a consensus that it’s a bad idea, which we haven’t, it will probably be allowed to happen.
But there are real reasons to regulate or even avoid superhuman AI. If someone successfully created one, they would likely apply it to complex decision making, maybe in a field like medicine. (This appears to be the goal of some of IBM’s Watson technologies.) But medical decisions carry moral weight. If the AI were superhuman, would it become so advanced that it couldn’t explain its decisions to a human? Would it be compatible with human ethics at all? It isn’t responsible to create such a technology unless it can justify its decisions in human terms — and at that point, the technology may not perform much better than a human.
Real discussions about this are a long way off, but we can begin by clarifying where we’re going. It’s useful to think through the hypotheticals and consider possible consequences, as many people are doing. But what’s more urgently important is to settle on a common goal. What is it that we want from AI — and what kind of AI can really offer it?
This is part of a series on upcoming issues in politics. Read my comments on climate change and the national debt here.