Artificial Intelligence and Biotechnology: Risks and Opportunities
Policymakers may not be prepared for the impact artificial intelligence and biotechnology could have together.
Tens of thousands of Americans live with the debilitating pain of sickle cell disease. Sufferers say it can feel like being stabbed repeatedly, like having broken glass flowing through their veins. An announcement this past December finally gave them a reason for hope.
For the first time ever, federal regulators approved a procedure to edit human genes for the treatment of disease — theirs. The procedure makes one small change to relieve a genetic glitch that warps their blood cells into destructive sickles. Experts hailed it as an example of what’s possible as scientists learn to manipulate the very building blocks of life. “This is the first mile of a marathon,” one doctor told Scientific American.
Another fast-rising technology could turn it into a sprint. Machine learning is already helping scientists make sense of the genetic keys that could unlock new crops, new drugs and vaccines — or new viruses. But a recent RAND study warned that policymakers may not be fully prepared for the impact those two fields could have together. To make the most of the opportunities to come, and to avoid the dangers, that has to change.
“These fields are both accelerating and transforming the way we do things,” said Sana Zakaria, a research leader at RAND Europe. “What happens when they combine? Do they grow into something bigger and better? Or something worse?”
Genetic engineering has given us corn that repels caterpillars, rice that resists blight, and chickens that can fend off disease. Recent advances in gene editing — especially the tool known as CRISPR, used in the sickle cell treatment — now allow scientists to operate directly on strands of DNA. Machine learning, meanwhile, can devour huge amounts of genetic data and point the way to new medical breakthroughs and better understanding of human health. The two technologies together hold the promise of a future in which our own bodies attack cancer cells and food crops thrive in the heat of a changing climate.
That’s the glass-half-full view. The careful-what’s-in-that-glass view might point to what happened a few years ago at a small U.S. drug company.
The company was using a machine learning model to search for molecules that could be used to treat rare diseases. It had trained a computer to screen out molecules that might be toxic. As an experiment for an international security conference, and with safeguards in place, it flipped the rules. It asked the computer to identify harmful molecules, not screen them out. Within six hours, the computer had generated 40,000 candidates. Some were known chemical warfare agents. Some appeared to be even more lethal.
For policymakers, that’s the challenge: How do you push open the door for new crops and cancer treatments, without leaving it open for a computer-generated catastrophe?
“There’s this fear with machine learning and artificial intelligence that it could become this monster that takes over,” said Timothy Marler, a senior research engineer at RAND. “The same happens with gene editing. There’s a lot of discussion about the risks these technologies could pose, really existential risks. That’s necessary — but it can also overshadow the opportunities. There has to be a balance here.”
RAND’s team looked at how China, the United States, the United Kingdom, and the European Union are approaching the confluence of gene editing and machine learning. They found strict regulations and outright bans in some countries on genetic engineering and genetically modified organisms. They found a growing push to establish some guardrails around machine learning. But where the two fields meet, they found, policymakers are just beginning to take action.
Late last year, for example, the White House issued an executive order calling for much stronger safeguards on artificial intelligence and synthetic biology. The World Health Organization has also stepped in, calling for greater efforts to prevent unsafe or unethical experiments in gene editing.
That’s a start. But what’s really needed is an international effort to establish some common rules and norms. “If the U.S. and the UK and China are all coming up with their own policies, they’re only going to be as good as the weakest link,” Zakaria said.
“Let’s say, hypothetically, that the UK says, ‘We’re not going to do any gene editing.’ People will just say, ‘Great, I’ll go to Singapore, then, or I’ll go to China.’ If you don’t have a sense of international collaboration, you’re just opening the door for people to go elsewhere and do what they want to do.”
Developments in these fields are moving much faster than policy. In 2018, for example, a Chinese scientist announced that he had changed the DNA of twin baby girls in an effort to make them more resistant to HIV. His announcement was met with global condemnation. The official Xinhua news agency in China described it as “extremely nasty.” But without adequate policies in place, the Chinese government had to fall back on a temporary ban on all human gene-editing work. That halted good research as well as bad.
To be more effective, policymakers need to familiarize themselves with two forbiddingly complex technologies. They need to better understand what’s possible — and what will be possible — with both machine learning and gene editing. And then they need to develop policies that are anticipatory, participatory, and nimble, the researchers wrote.
The participatory part of that is especially important. Public perceptions can make or break a new technology — just ask anyone trying to sell genetically modified food in Europe, where many countries ban it. The convergence of machine learning and gene editing could represent a “seismic shift” in fields ranging from agriculture to medicine to national security, researchers wrote. Successfully navigating that shift is going to require the participation of an informed and engaged public.
Machine learning has the potential to truly unlock gene-editing capabilities and uses in everyday life, RAND’s study concluded. But that is going to require years of hard work to minimize the risks and maximize the opportunities, balancing restriction against innovation. With world-changing technologies, that’s always the case. “Cars kill tens of thousands of people every year,” Marler said. “That doesn’t mean they don’t also provide significant benefits, and even help save lives.”
This originally appeared on The RAND Blog on March 21, 2024.