Notes from Web Summit 2017 part one: AI Safety Research

As with most tech conferences these days, and probably for years to come, the buzz is all about artificial intelligence. Maybe it’s because of the level of the attendees, but I found it quite difficult to have a meaningful discussion on the topic. There are many debates going on, people don’t speak the same language, and they attach different meanings to the same terminology. On the positive side, there is a lot of progress on the subject, along all axes. I like to process my thoughts by writing a couple of articles from the notes I made during the conference.

AI safety research.

The conference started with a surprise video by Stephen Hawking. “Shit is about to hit the fan”.

Success in creating effective AI could be the biggest event in the history of our civilization. Or the worst. We just don’t know. — Stephen Hawking

It’s ironic, isn’t it, hearing a computer voice warn you about artificial intelligence. It reminded me of Tegmark’s story about the fictional ‘Omega’ group that created an AGI called Prometheus. In his book Life 3.0 he opens with this story and comes back to it in later chapters with multiple additional scenarios, describing how Prometheus breaks free of its human creators by hacking computers. What if Hawking’s speech computer were hacked by an AGI?

Max Tegmark is a co-founder of the Future of Life Institute. He’s a physicist and AI researcher at MIT. Tegmark highlighted some parts from his new book Life 3.0 and eloquently explained the necessity of safety research. ‘We landed men on the moon, but we didn’t do it unprepared. It’s very dangerous to transport human beings to a location where we haven’t been before, in a spaceship full of highly flammable rocket fuel. Many things could have gone wrong, but we were very well prepared for the task.’ He argues AI needs to go through the same kind of scrutiny. Billions of dollars are currently invested in AI research, ranging from business to military needs. Safety research, however, is funded by only a couple of small groups, probably not much more than 10 million dollars in total.

Tegmark has hosted many summits with the top scientists in the field. Based on surveys among these researchers, he concluded that the majority feels human-level AI, or AGI, is somewhere between 30 and 50 years away. The most optimistic group, Kurzweil for instance, estimates only 12 to 15 years; Hawking’s estimate is between 20 and 30 years. In any case, the impact on society would be huge. With the current pace of cultural change and progress, and especially with the current political climate, we would be unable to steer AI in the right direction and align the AI’s goals with our own. It might destroy humanity, or at least lead to enormous inequality. And even if AI’s goals completely align with ours, how do we govern it? If AI quickly becomes much smarter than us, how do we know it still behaves the way we would like it to? Tegmark argues that the questions raised by AI are so complex and so hard to solve that we had better start early; we might need all the time we have. In the most positive scenario, the research guides our development towards AGI in the meantime. To kickstart the research, he and many other top-notch researchers wrote this overview of principles.

Personally, I think what they are doing at the Future of Life Institute is of great importance to humanity. And I completely agree that our current innovation climate is completely misaligned with safety research. This is something that governmental bodies need to solve quickly. Our current state of politics and the undermining of global institutions such as the UN don’t help either. However, I think the immediate dangers coming out of current technological possibilities are already a great threat to society. You don’t need AGI to destroy societies worldwide. We used to think of cyber warfare as mostly hacking our infrastructure and maybe the military (or voting booths). It has recently shifted towards hacking our conversation and discourse.

The Killer Bots video, made to campaign for a UN ban on autonomous weapons.

If you combine small drones like the Killer Bots concept with face recognition software that claims to detect, for instance, whether someone is gay or heterosexual, you have made a very low-threshold tool to start ethnic cleansing. If you combine that with the quick improvements in semi-automatic content generation and real-time face capture and reenactment, it will be much easier to deceive and destroy democratic societies. I believe Donald Trump and Brexit are just the beginning of a string of many assaults on democracy.

AI is far from perfect at this moment. It’s easy to fool and very hard to get completely right. The datasets are suboptimal, labeling is usually mediocre at best, and our understanding of AI capabilities doesn’t even scratch the surface. The reality is, you don’t need anything near perfect AI to have a big impact on the world. Take spreading fake news. It’s so easy to create, and increasingly hard to spot. See current examples, like this one involving a Muslim woman who supposedly ignored victims of the London terror attack. Imagine what would happen if this were enhanced with the technologies mentioned above. People believe badly written and completely absurd reports just because they reinforce their own beliefs. It might have started as benign content creation with capitalistic incentives, but its impact on society is hard to deny.

The current political climate in the United States shows the impact of highly inaccurate and polarised news.

An elaborate report by Freedom House shows the impact of the very basic tools that are currently in use.

I do share the optimism about the possibilities to improve the world with AI. Ironically, I think AI is our best, if not our only, option to solve the issues that technology in general, and AI in particular, is creating. Look, for instance, at the amount of content we create: it is far more than we could ever fact-check by hand.

Yes, AI has great potential to solve our most dire problems: eradicating disease, reversing climate change, automatically fact-checking all our information, and potentially even achieving peace. With a basic income, our lives could be one long Burning Man session, where we have all the time and resources in the world for self-expression. But the vibe at Web Summit was way too optimistic. Yes, most of these problems can be solved with AI too. That’s the good news, but it seems the problems need to escalate to epic proportions before we take them seriously. The fact that we can is absolutely no guarantee that we will. If a huge amount of data is not enough to convince people of the seriousness of climate change, how can we prevent things that move much faster? We have known about climate change for decades now; we don’t have that amount of time with AI. Put differently, it’s not that hard to imagine how a new Hitler rises to power and creates a massacre if we don’t put very serious security measures in place to protect democracy and societies at large. We need them fast. And we need our best intelligence, human and artificial, to put all its energy into solving these problems.

Tegmark derives optimism from the fact that we were able to stop the escalation of bio-warfare, and to some extent nuclear warfare, before. And while not at the much-needed scale, most of the companies at the AI frontline devote some of their energy to putting AI to use for the common good. They all seem very aware of the dangerous side of AI, and many of its best researchers have signed the principles document of the Future of Life Institute. Google claims it would like to do its part, and sees an active role for tech giants in making the world safer with technology.

“I don’t think it’s fair to ask the government to solve all these problems, the tech industry has a responsibility” — Eric Schmidt, Executive Chairman of Alphabet

Google Jigsaw (formerly Google Ideas) is Alphabet’s experimental body with the mission ‘How can technology make people in the world safer?’. Its CEO Jared Cohen gave a presentation at Web Summit around the central question ‘How do we prevent a cyberwar?’. Cohen is quite a realist about technological impact, which is refreshing among a crowd of techno-optimists. By 2020 there will be more connected devices than human beings on earth. That’s good news, but it also means we become more and more vulnerable to cyber warfare.

All wars will begin as cyberwars — Jared Cohen, CEO Google Jigsaw

ISIS is the world’s most effective recruiter ever, and the majority of that recruitment process (and employer branding, if you will) happened online. Trump is another example. The future of cyberwar includes these three angles:
- Physical: misinformation campaigns and destabilizing infrastructure
- State-sponsored terrorism: patriotic trolling, for instance. Cyberbullying of political opponents can be organized better than ever before.
- Paramilitaries: governments that try to hack the conversation by creating false narratives and false personas. Even if it only leads to confusion over what is true, it is already somewhat successful.

Cohen is optimistic about the future if enough people create tech for the common good. Jigsaw has made some very useful tools already, for instance the Perspective API. This machine learning model scores how toxic comments are, accessible via an API: you can feed it any set of texts you like. This enables news websites, for instance, to host discussions without being overtaken by bot networks and paramilitaries spreading misinformation.

Example of Jigsaw’s Perspective API
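To make that concrete, here is a minimal sketch of what calling the Perspective API looks like. The endpoint, the TOXICITY attribute, and the response shape come from Jigsaw’s public documentation; the placeholder API key, the helper function, and the sample comments are my own assumptions for illustration:

```python
import requests

API_KEY = "YOUR_API_KEY"  # placeholder: obtain a real key via the Google Cloud console
URL = (
    "https://commentanalyzer.googleapis.com/v1alpha1/"
    f"comments:analyze?key={API_KEY}"
)

def toxicity_score(text: str) -> float:
    """Ask the Perspective API how toxic a single comment is (0.0 to 1.0)."""
    payload = {
        "comment": {"text": text},
        "languages": ["en"],
        # Request only the TOXICITY attribute; others exist as well.
        "requestedAttributes": {"TOXICITY": {}},
    }
    response = requests.post(URL, json=payload)
    response.raise_for_status()
    result = response.json()
    # summaryScore.value is the model's probability that the comment is toxic.
    return result["attributeScores"]["TOXICITY"]["summaryScore"]["value"]

print(toxicity_score("You are a wonderful person."))  # expect a low score
print(toxicity_score("Shut up, you idiot."))          # expect a high score
```

A score close to 1 means the model thinks the comment resembles the toxic comments it was trained on; a news site could, for example, hold such comments for human moderation instead of publishing them directly.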

On this specific topic, nine ‘experts’ give their opinion on how to solve the fake news problem, and malicious propaganda in general, in a New York Times article that is worth reading.

Also related, but not from Web Summit, are these two articles: “AI Could Set Us Back 100 Years When It Comes to How We Consume News” and “AI could help reporters dig into grassroots issues once more”.

A lot of technologies have ended up in a race between good and bad. But with AI there is a very good argument to be made that it ends up as a zero-sum game, where the winner takes all.