Artificial Intelligence Is Dangerous — But Not the Way We Think It Is
Recently there has been a lot of discussion about the future of artificial intelligence, especially about how it could lead to the demise of humanity. Science and technology leaders such as Stephen Hawking, Elon Musk, and Bill Gates have weighed in on the issue, warning that we should take the possibility of an existential threat more seriously.
While it’s true that a hostile superintelligence could potentially destroy all of mankind, the current state of “artificial intelligence” (if the term can even be considered accurate) is far from reaching the singularity. Sure, AlphaGo can beat professional players at Go, but that’s the only thing it can do. Other programs can recognize faces or translate languages (badly), but like AlphaGo, those are the only things they can do. In other words, intelligent programs today are extremely task-specific: they can only do what they were created to do, nothing more. Researchers aren’t even close to creating a general intelligence, the kind of intelligence people tend to imagine when they hear the words “artificial intelligence” and start worrying about a supercomputer ending the world. In fact, artificial intelligence research doesn’t even seem to be heading in the direction of developing general intelligence.
This brings me back to my gripe with the term “artificial intelligence.” “Intelligence” implies that computer programs are thinking in ways similar to humans, when they’re actually not. Even neural networks, which are vaguely inspired by the structure of our brains (emphasis on vaguely), more closely resemble heaps of statistics and linear algebra than, you know, neurons. For instance, here’s a common representation of neural networks:
Looks like neurons, right? But when you consider that the connections between the nodes boil down to multiplying and adding numbers from the rows and columns of a matrix multiplication, it really should look more like this:
Doesn’t really look like a bunch of neurons anymore, does it? Once people understand “artificial intelligence” and its limitations from a more technical standpoint, the idea that it could become a superintelligence capable of destroying civilization seems a bit absurd.
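If the picture alone isn’t convincing, the arithmetic might be. Here is a minimal NumPy sketch of what a tiny network’s forward pass actually computes; the layer sizes and random weights are arbitrary, made up purely for illustration.

```python
import numpy as np

# A tiny two-layer "neural network" forward pass. The layer sizes and random
# weights are arbitrary; the point is that the "neurons" and "connections"
# boil down to matrix multiplication, addition, and a simple nonlinearity.

rng = np.random.default_rng(0)

x = rng.standard_normal(4)           # input "layer": a vector of 4 numbers
W1 = rng.standard_normal((5, 4))     # connections into a hidden "layer" of 5 nodes
b1 = rng.standard_normal(5)
W2 = rng.standard_normal((3, 5))     # connections into an output "layer" of 3 nodes
b2 = rng.standard_normal(3)

hidden = np.maximum(0, W1 @ x + b1)  # multiply, add, then a ReLU nonlinearity
output = W2 @ hidden + b2            # another multiply and add

print(output)                        # three numbers, no neurons in sight
```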
That being said, artificial intelligence does carry plenty of possible dangers, even in its current state. Besides its potential for use in lethal autonomous weapons or automated phishing, two particularly intriguing concerns are deepfakes and adversarial attacks.
“Deepfakes,” a combination of “deep learning” and “fake,” are images and videos generated by neural networks. Current neural networks are able to synthesize extremely realistic faces of people who don’t actually exist, like in the examples below.
However, similar networks can be used to superimpose faces and actions onto existing pictures and videos to synthesize realistic fakes. Where you once had to spend tedious hours in Photoshop to create a single edited photograph, neural networks let you make those same edits to entire videos with far less work, and even convincingly synthesize matching audio. This can have disastrous consequences: videos of politicians and celebrities saying and doing things they have never said or done could easily be created to spread false information, and women have already been harassed with fake pornographic videos that graft their faces onto other bodies. Eventually, we might not be able to trust the information presented in videos at all unless there are guaranteed ways to detect deepfakes.
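Face-synthesis systems like the ones behind those examples are generally built on generative adversarial networks (GANs), in which one network generates images while a second network learns to tell real from fake. As a rough illustration only (not the architecture of any actual deepfake tool), here is a heavily simplified PyTorch sketch of that two-network setup, with arbitrary layer sizes and random stand-in data:

```python
import torch
import torch.nn as nn

# Simplified GAN sketch: a generator maps random noise to a fake "image"
# (a flat 64x64 grayscale vector here), and a discriminator scores how real
# an image looks. Real systems are far larger and trained on huge datasets.

LATENT_DIM, IMG_DIM = 100, 64 * 64

generator = nn.Sequential(
    nn.Linear(LATENT_DIM, 256), nn.ReLU(),
    nn.Linear(256, IMG_DIM), nn.Tanh(),        # pixel values in [-1, 1]
)

discriminator = nn.Sequential(
    nn.Linear(IMG_DIM, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),           # probability the input is real
)

# One illustrative training step; random noise stands in for real photos.
real_images = torch.rand(16, IMG_DIM) * 2 - 1
fake_images = generator(torch.randn(16, LATENT_DIM))

loss_fn = nn.BCELoss()
d_loss = (loss_fn(discriminator(real_images), torch.ones(16, 1))
          + loss_fn(discriminator(fake_images.detach()), torch.zeros(16, 1)))
g_loss = loss_fn(discriminator(fake_images), torch.ones(16, 1))  # generator tries to fool D
print(d_loss.item(), g_loss.item())
```

In a full training loop, the two losses are minimized in alternation, which is what gradually pushes the generator’s output toward images that the discriminator, and eventually we, can’t distinguish from real ones.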
Meanwhile, intelligent systems can also be fooled by humans. Adversarial attacks are intentional adjustments to input data that hackers can make to trick a neural network. For example, in the figure below, the image on the left is obviously a panda, and so is the image on the right, which looks exactly the same to us but actually has some of its pixel values imperceptibly shifted. An image classification network should, in theory, label both images “panda.” Instead, those extremely minor adjustments cause the network to label the picture on the right “gibbon.”
Random changes this small typically would not affect the network’s output, but these adjustments are not random: hackers can train neural networks of their own to discover exactly which perturbations will fool a target model. Such attacks are also extremely difficult to catch, since we can’t see any difference ourselves, which means that any network processing image data is vulnerable to this kind of security risk.
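The panda/gibbon example comes from research on the “fast gradient sign method,” discussed in the OpenAI post linked below: compute the gradient of the classifier’s loss with respect to the input image, then nudge every pixel a tiny step in the direction that increases that loss. Here is a sketch of the idea in PyTorch; it assumes a recent torchvision with the pretrained-weights API and uses a random tensor as a stand-in for a real, preprocessed photo.

```python
import torch
import torch.nn.functional as F
from torchvision import models

# Fast gradient sign method (FGSM) sketch: push each pixel slightly in the
# direction that increases the classifier's loss on its own prediction.
# A random tensor stands in for a real, preprocessed photo.

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()

image = torch.rand(1, 3, 224, 224, requires_grad=True)   # stand-in for a real image
with torch.no_grad():
    original = model(image).argmax(dim=1)                 # the model's original answer

loss = F.cross_entropy(model(image), original)
loss.backward()

epsilon = 0.007                                           # small enough to be invisible
adversarial = (image + epsilon * image.grad.sign()).clamp(0, 1).detach()

print("before:", original.item(), "after:", model(adversarial).argmax(dim=1).item())
```

On a real photo, a perturbation this small is invisible to the human eye, yet it is often enough to flip the model’s prediction to an unrelated class.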
So, while artificial intelligence may not kill off all of humanity (at least, not anytime soon), problems like deepfakes and adversarial attacks are just two of the many risks brought about by the advance of intelligent technology.
Sources:
https://qz.com/1213524/ai-experts-list-the-real-dangers-of-artificial-intelligence/
http://www.cnn.com/interactive/2019/01/business/pentagons-race-against-deepfakes/
https://www.washingtonpost.com/technology/2018/12/30/fake-porn-videos-are-being-weaponized-harass-humiliate-women-everybody-is-potential-target/
https://blog.openai.com/adversarial-example-research/