The Rise of Artificial Intelligence: Five Things You Should Know
Artificial Intelligence is changing the world as we know it. Here are five key insights from the World Summit AI, which brought together tech companies, academics and start-ups in Amsterdam.
(1) Some of the hype around AI is overblown.
Elon Musk, CEO of SpaceX, warns that AI is a ‘fundamental risk to the existence of human civilization’, but headlines about the rise of the machines and robot takeovers are not yet grounded in reality. Cognitive scientist Gary Marcus explains that deep learning, a method in which AI imitates the human brain to process data, is a bit of a misnomer:
‘Deep learning is a marketing term — it’s not really deep and it’s no substitute for deep understanding. It can only label a scene, not interpret it… Children are smarter than any deep learning or AI.’
The notion that science fiction has been misleading was a recurring sentiment among experts at the summit. As Cassie Kozyrkov, chief decision scientist at Google, put it, ‘Robots are another kind of pet rock — they don’t think at all.’
‘A lot of the AI capability is not there yet,’ says Emily Taylor, associate fellow at Chatham House’s International Security department. ‘The days where you will have a humanoid robot who is indistinguishable from a normal person are still a long way away. It’s quite hard and possibly pointless to try and recreate a human.’
Should you find yourself in a robot attack, Gary Marcus has some advice to throw them off — simply close the door, climb stairs or speak in a loud room with a foreign accent so they can’t understand you.
(2) Gender equality in AI has a long way to go.
Only 22 per cent of all AI professionals globally are women. Considering AI contributed $2 trillion to the global economy in 2018, and could add as much as $15 trillion to global GDP by 2030, making sure women aren’t left behind by this fast-changing economy is crucial.
Ecem Yilmazhaliloglu, diversity advocate and founder of Technoladies, outlined several reasons women can be disadvantaged in pursuing an AI career: there are fewer female role models in the field, a lack of opportunities, and girls often aren’t given an early introduction to tech.
So is the solution for diversity simply creating more opportunities for women in AI?
It’s more complicated than that, argues privacy and data protection expert Ivana Bartoletti:
‘AI is more than just technology and diversity is also needed where decisions about how we use AI are made.’
There are opportunities to ‘right the wrongs of the past with the Fourth Industrial Revolution — and to do so en masse,’ says mathematician Anne-Marie Imafidon in an interview with Gitika Bhardwaj. ‘At the moment, we’re still working out the most ethical way to do this where we are countering the biases in the datasets that we’re building all of our algorithms on.’
(3) Women are leading in the ethics of AI.
As Ivana Bartoletti put it, ‘You can have the most amazing algorithm, and you can demonstrate that you followed due process, but you might still be using it for the wrong reasons. This is where the ethical debate comes in.’
A recent example from Austria highlights this bias problem: an employment agency used an algorithm that discriminated against women. According to the NGO AlgorithmWatch, a female candidate was more likely to be given a lower score than a male candidate, even if she had the same qualifications and experience.
‘If it’s a homogenous group of people building the technology, it’s quite difficult to have that unbiased mindset. Bringing subject matter experts, anthropologists, people who understand society, ethicists and lawyers into that process is important.’
Should people have a right to a ‘human in the loop’ when computers and algorithms make important decisions about their lives? This is the next major debate in AI ethics and there will likely be a court case about this in the near future, says Ivana Bartoletti.
There is a gender imbalance in the field of AI, but women like Ivana Bartoletti, as well as Safiya Noble, author of Algorithms of Oppression and Emily Taylor, editor of the Journal of Cyber Policy, are leading conversations about its ethical challenges.
(4) Autonomous weapons bring grave consequences for human rights.
Stuart Russell, associate fellow at Chatham House and professor of computer science at Berkeley, showed a short film created by campaigners to illustrate the dangers of autonomous weapons that can kill human targets without supervision:
‘I’ve worked in AI for more than 35 years. Its potential to benefit humanity is enormous, even in defence, but allowing machines to choose to kill humans will be devastating to our security and freedom.’
According to a 2018 Chatham House report on AI and international affairs, engineers have not been able to develop the technology needed for military robots to employ reason in high-stakes situations. This is because human reason is still very difficult for computers to replicate.
Autonomous weapons of mass destruction have the potential to be far worse than nuclear weapons, warns Russell: ‘We should have been worrying about this 10 years ago.’
Russell told the audience about a Turkish company manufacturing weaponized drones with facial recognition and tracking to be used against Kurdish forces in northern Syria. ‘This film is more than just speculation. It shows the result of integrating technologies that we already have.’
‘We have an opportunity to prevent the future you just saw — but the window to act is closing fast.’
(5) Ultimately, the future of AI is up to us.
AI can help solve climate change, find a cure for cancer, understand the human brain and explore space. But these breakthroughs are a long way away. As cognitive scientist Gary Marcus explains, current AI systems only understand statistics, not the real world.
AI has been created by humans so humans can decide how AI is used. ‘As we build tools that scale and reach more people, we must be careful,’ warns Google’s Cassie Kozyrkov. ‘The peril and the promise of AI is you don’t need to think as much.’
Stuart Russell asks, ‘How can AI advance the quality of human experience when it doesn’t know what that experience is? When you create super intelligent machinery that pursues an incorrect objective, you lose and they win.’
While commercial organizations develop new and exciting AI technology — think driverless cars and drones that can deliver packages — this rapid development can be a double-edged sword. Research suggests that as governments lose their best and brightest engineers to the commercial sphere, the autonomous systems they build could end up unsafe and compromised.
Gary Marcus is also cautious about relying on businesses to drive the future of AI: ‘The business world’s aims are not coordinated with what we want, rather they are driven by quarterly goals. We need government involvement to emphasize the kinds of AI the corporate world does not.’
‘We can’t put AI back in Pandora’s box — it’s here to stay.’