In Estonia, planning for life alongside robots

The Baltic nation is a leader in electronic voting and cybersecurity. Now, it’s poised to become the go-to source for the law of artificial intelligence.

Eric Niiler
CXO Magazine
Nov 22, 2017


The Estonian robotics startup Starship Technologies has tested its delivery bots in 100 cities, including in Estonia’s capital city, Tallinn. Image courtesy of Starship Technologies

Estonia has plenty of bona fides that put it at the vanguard of tech. In 2005, it became the first country to recognize internet voting. In 2011, it established a high-water mark for digital defense by raising an all-volunteer army of cybersecurity watchdogs. Now, as the Baltic nation begins to explore legal rights for robots, the world will take note — and probably follow.

Officials there are starting to develop regulations that someday might give robots controlled by artificial intelligence legal status somewhere between a piece of property and a human. The rules could establish liability when things go wrong with bots. For instance, what happens if a driverless car goes awry? Who’s to blame? And what if an intelligent computer gives the wrong medical diagnosis? Is the hospital at fault?

These are the kinds of questions that’ll come up as businesses and governments rely more on robotics and AI to perform everyday tasks and make decisions that can have life-altering consequences. Already, robots exist in most facets of life. The International Federation of Robotics predicts the number of robots in factories worldwide will grow from about 1.8 million today to 3 million in 2020. The organization also expects sales of so-called “service robots,” bots that perform tasks in restaurants or hotels, to increase 12 percent this year, to $5.2 billion.

Earlier this year, Estonia marked another technological first by granting the robotics startup Starship Technologies the right to let its cute, boxy delivery bots roam the streets of Tallinn, the country’s capital, delivering takeout and groceries. While Virginia passed legislation to allow robots such as Starship’s droids to operate across the state, Estonia is the first European Union country to let the company’s robots navigate its sidewalks alongside pedestrians. Ahti Heinla, one of Skype’s original developers, co-founded Starship in 2014. Since then, the company has tested its bots in about 100 cities worldwide.

Image courtesy of Starship Technologies

In an April 2017 Fortune magazine story about Estonia’s commitment to pushing the envelope when it comes to innovation, Heinla said, “If you look at sci-fi movies set 20 years from now, you don’t see people carrying their groceries. Robots just arrive at their homes. … About two years ago, we realized it was possible to create this part of the future right now.”

The vast majority of robots in operation today aren’t delivering food, nor are they intelligent enough to make decisions on their own. But the growing investment in smarter robotic technology (about $1.5 billion in 2016) points to a not-too-distant future in which much of the planet is living among a lot more bots. So, soon after Estonian officials legalized self-driving cars this past spring, they began thinking about what that could mean for society at large.

“We decided not to work on traffic laws only, but also the legalization of artificial intelligence in general,” says Marten Kaevats, digital advisor to the Estonian prime minister. “We want to have a discussion within the society so that we agree on the rules of engagement of AI liability. We want to have everyone who is a nonspecialist understand the issue better.”

That’s the kind of conversation that’s long overdue in the U.S., especially when it comes to business, transportation, and healthcare, says David Danks, professor of philosophy and psychology at Carnegie Mellon University in Pittsburgh. “If anything, we are five years behind. We need to be putting things into place now, so we can properly regulate the technologies as they are emerging.”

Danks says the federal government should regulate each type of device that uses AI, rather than developing one set of laws to govern the use of the technology. “It has to be specific,” he says. “We know enough now as a community to design some intelligent AI frameworks that won’t stifle innovation, but that support the social good.”

Here’s where it may matter most to have rules governing AI and robotics:

Health care

Consider radiology. Many hospitals currently use AI algorithms to screen X-rays and pick out medical conditions such as bone fractures or pulmonary embolisms, according to Saurabh Jha, associate professor of radiology at the Hospital of the University of Pennsylvania.

At first, he was skeptical about the use of the technology. Then he met some computer scientists working on the software program that analyzed X-rays. “By feeding the computer loads and loads of X-rays, the computer could flag the difference between fracture and not a fracture despite not being taught where it was,” says Jha. But even though he’s impressed by the machine’s smarts, he still believes that doctors need to have questions about liability answered before the techniques become more widespread.

He also wants to be sure he can trust the computer without fear of malpractice litigation. “It’s only going to work if the AI is going to be given some autonomy,” says Jha. “If the AI is like a medical student whose work I have to oversee, it’s not going to make my life easier.”
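The pattern Jha describes, learning to tell “fracture” from “no fracture” purely from labeled examples, is ordinary supervised classification. The snippet below is a minimal sketch of that idea, assuming scikit-learn and a synthetic numeric dataset standing in for real X-ray images; the model choice, feature counts, and accuracy check are illustrative and are not the hospital software Jha encountered.

```python
# Toy supervised classifier: learns "fracture" vs. "no fracture" from labeled
# examples alone, without being told where in the image the fracture is.
# Synthetic numeric features stand in for real X-ray data (an assumption).
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Each row is a stand-in "X-ray"; each label is 1 (fracture) or 0 (no fracture).
X, y = make_classification(n_samples=2000, n_features=50, n_informative=10,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25,
                                                    random_state=0)

# The model only ever sees (example, label) pairs -- no rules, no localization.
model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)

print("held-out accuracy:", accuracy_score(y_test, model.predict(X_test)))
```

Real radiology systems swap the toy features for image pixels and the linear model for a deep neural network, but the liability question Jha raises is the same: the software’s judgment comes from training data, not from rules a physician or regulator wrote down.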

At an American Medical Association meeting in May on AI and healthcare, Christopher Khoury, vice president of the organization’s environmental intelligence and strategic analytics unit, said smart machines also need to undergo more clinical trials before widespread adoption takes place. “Physicians want to know that there’s clinical and analytical validity to any kind of new tool or platform they might be adopting.”

Autonomous vehicles

The most public debate on AI is happening on the road. Ride-hailing giant Uber is testing driverless taxis in Pittsburgh, Google spinoff Waymo is testing vehicles on the streets of Silicon Valley, and GM and Tesla are working on autonomous freight trucks. All these vehicles use AI to make driving decisions without a human operator.

With no federal rules on autonomous vehicles, states and cities are left to fill the gap, according to Matt Scherer, a Seattle-based attorney and expert in AI law and policy. Both houses of Congress are considering legislation called the Self-Drive Act that would exempt driverless cars from some existing regulations, such as the requirement to have a steering wheel.

Tesla recently released images of its prototype electric semi-truck. Image courtesy of Tesla Motors

Given Congress’s lack of action on any of this legislation, Scherer expects that individual state and federal judges will be forced to set standards for liability and other issues as they come up through the legal system.

Jay Hietpas, director of the Minnesota Department of Transportation’s autonomous bus pilot project, recently told MinnPost that lawmakers will have to reconsider everything from car insurance to driving age limits in the era of self-driving vehicles. “Who can get a driver’s license — can a 10-year-old operate an automated vehicle many years down the road?”

Hiring

Many U.S. companies are already using AI algorithms to recruit new workers, sort through resumés, and even conduct performance reviews. “There is a great temptation to rely on AI to take a first pass at resumés,” says Scherer, the Seattle attorney. “It’s such a time-consuming process for recruiters.”

In fact, Scherer represents several companies that are deploying AI algorithms to speed up recruiting. Still, he points out, they need to be wary not to overlook qualified candidates who don’t have the right test scores or didn’t attend the right college. “The problem is that humans have blind faith in machines’ ability to do things correctly to a worrisome degree,” he says. “What I have seen is using these analytical HR programs that leverage AI or use Big Data to assess applications, they tend to score resumés at face value.”

That becomes especially troubling if scores are based on metrics, such as previous salaries, that can inadvertently create a pattern of discrimination.
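To make that concern concrete, here is a small, purely hypothetical sketch of a resumé-scoring formula that takes previous salary at face value; the field names and weights are invented for illustration and do not come from any real HR product.

```python
# Hypothetical resume score that treats "previous salary" as a merit signal.
# Two candidates with identical qualifications differ only in prior pay,
# which often reflects historical pay gaps rather than ability.

def score_resume(years_experience: float, skills_matched: int,
                 previous_salary: float) -> float:
    # Invented weights, for illustration only.
    return 2.0 * years_experience + 3.0 * skills_matched + 0.0001 * previous_salary

candidate_a = score_resume(years_experience=6, skills_matched=8, previous_salary=95_000)
candidate_b = score_resume(years_experience=6, skills_matched=8, previous_salary=70_000)

print(candidate_a, candidate_b)  # 45.5 vs. 43.0: the gap comes entirely from prior pay
```

The formula never sees a protected attribute, yet the salary term quietly ranks one candidate below an otherwise identical one, which is exactly the kind of proxy effect Scherer warns about.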

“When AI and recruiting come together thoughtfully and ethically, they can encourage better candidate fits, promote fairer interview screening, and increase overall efficiency,” Dipayan Ghosh, a fellow at New America and a research affiliate at Harvard University, recently wrote in Quartz. “But we must also be mindful of the specter of harms like algorithmic discrimination and implicit harmful bias in AI-enabled recruiting, and do our best to counter them. Anything less is unfair to the people whose livelihoods are at stake.”

‘Law of kratt’

Back in Estonia, lawmakers are borrowing a term from the past to help inform citizens about the future. To get more people interested in the proposed rules, regulators are calling the AI proposal the “law of kratt,” named for a mythological figure of Estonian folklore. The kratt is a scarecrowlike figure created by country folk, who bring it to life with three drops of blood during a midnight pact with the devil. The kratt then becomes its owner’s slave.

“You always have to give it a job,” says Kaevats, the Estonian digital advisor. “Whenever the kratt became smarter, it came and killed you.” When it comes to artificial intelligence and robotics, he says, it’s better to have laws in place before AI becomes fully “alive.” After a series of public hearings on the rules over the next year and a half, he expects the country to release its new law of kratt in March 2019, when a new parliament is elected.

CXO Magazine is a new publication founded at Northeastern University to chart the ideas, events, and people shaping the future of work.
