The Ethics of Inevitability
The Artifice of Artificial Intelligence
I wrote an article for Mashable a while back called "You Should Be Afraid of Artificial Intelligence." It got more reads and comments than anything I’d written before. My main point, as I wrote to open the piece, was: “I, for one, do not welcome our robot overlords.” I went on to quote a number of AI experts and tried to be objective about the fact that being afraid of AI doesn’t really help push the conversation forward around the ethics of what might happen if and when machines reach human-level sentience.
But if nobody else has, I’d like to give you permission to be afraid. Not just a skittish paranoia, but a deep and preternatural fear that’s relevant, justified, and doesn’t mean you hate technology. Nuclear weapons and their results in Hiroshima and Nagasaki, for one, provide a great precedent for warranted caution. The people creating the tech behind the warheads that decimated Japan likely didn’t create it intending for it to be used. But it was used. And in the US, we can say it ended the war, and we can have a lively discussion about that; as a fan of history, I don’t deny it’s true, and the bombing likely saved a great many lives.
But I’m an American. My guess is that if you asked someone from Japan how they feel on this subject, they might answer you differently.
You can read the Mashable piece if you want some definitions around “hard AI” or other types of this fascinating, revolutionary science. And, like any technology, AI isn’t inherently good or evil — tech is just tech. But the bias of its creators can shift how it gets used. The simplest example I can provide here is militarized AI: AI and robot soldiers mean that fewer humans have to be killed in the line of duty. To that, I say, amen. However, this noble cause should not lead us to think nobody will be killed in the line of fire from autonomous soldiers. “The enemy” and surrounding collateral-damage civilians will be the ones to perish. So let’s get this straight — odds are, weaponized AI means that ostensibly fewer people will be killed, including civilians. But, as with all things military, the tech will be used on both sides of any conflict. So “God is with us” as a war cry switches to “algorithms will kill based on the collective intelligence of our bias infused with updated algorithms!” It’s not as catchy a phrase.
This piece was inspired by a recent post in The Guardian, "Are the Robots About to Rise?", featuring a detailed interview/description of the work of Ray Kurzweil. Kurzweil is a legend in techie circles, having popularized the term “The Singularity,” referring to the time when machines will achieve human-level sentience, which he thinks is likely to happen by 2029. You can read specifics in their article, but he’s now Google’s Director of Engineering. He’s been given carte blanche to make his dream of AI and The Singularity come to pass as fast as the resources of Google can allow. Here’s a quote from the Guardian piece to show how Google is working to accelerate its focus on AI:
Google has bought almost every machine-learning and robotics company it can find, or at least, rates. It made headlines two months ago, when it bought Boston Dynamics, the firm that produces spectacular, terrifyingly life-like military robots, for an "undisclosed" but undoubtedly massive sum. It spent $3.2bn (£1.9bn) on smart thermostat maker Nest Labs. And this month, it bought the secretive and cutting-edge British artificial intelligence startup DeepMind for £242m.
And those are just the big deals. It also bought Bot & Dolly, Meka Robotics, Holomni, Redwood Robotics and Schaft, and another AI startup, DNNresearch. It hired Geoff Hinton, a British computer scientist who's probably the world's leading expert on neural networks. And it has embarked upon what one DeepMind investor told the technology publication Re/code two weeks ago was "a Manhattan project of AI". If artificial intelligence was really possible, and if anybody could do it, he said, "this will be the team". The future, in ways we can't even begin to imagine, will be Google's.
So… “a Manhattan project of AI.” Interesting that there’s no ambivalence about connecting AI directly to the nuclear project that led to Japan’s decimation, the nuclear age, the justification of the Cold War, and so on.
I interviewed James Barrat, author of Our Final Invention, about his book on AI, which is a really fantastic read. He’s the one who provided me with the comparison of the Manhattan Project to AI, which I think is fair, versus simply hyper-paranoid “I fear the future” sorts of language. We’re allowed to be cautious about a technology that’s replacing our reliance on THINKING. We’re allowed to wonder what happens when Google potentially creates and owns the technology behind achieving human sentience. They’re fairly close already, in terms of providing us with answers to our search queries based on our past history and the combined knowledge of billions of people every day, hour, and second.
So here’s the thing about ethics. What most people, myself included, say regarding AI is the following: 1) The tech behind AI is moving faster than any discussion or implementation of ethics around it (there’s not a lot of money to be made in the ethics biz), and 2) We should try to understand the potential ramifications of AI achieving sentience before it fully comes to fruition.
The potential bad news — AI is inevitable. Period. Same with augmented reality and facial recognition tech becoming fully ubiquitous. I’m happy to argue dates, or who will have it when, or whether it will be owned only by the rich, and so on. Great discussion to have. But the arrival of these technologies, even the ones that provide the infrastructure for AI becoming sentient, isn’t a discussion of “if” but “when.” So I for one would love to accelerate this ethics discussion. I for one would love to see people not be accused of being paranoid or Luddites, or hit with the usual pat responses thrown at anyone who doesn’t understand how AI works or what it could mean to our humanity.
Let me be clear — none of us fully understand what it means for AI to achieve sentience. I got a ton of great comments on my Mashable piece, the most logical of which focused on the fact that humanity will evolve along with the tech of AI. So in the same way people historically feared cars and then adjusted to them, or even thought leeching was cutting-edge medical science back in the day, as a race we’ll be ready for machines to advance past our thinking capacity or essentially become a new form of living being in our midst.
Bullshit. It’s 2014 and the majority of people I speak to about technology don’t understand how their personal data is accessed online. It’s 2014 and most people I know don’t understand what Augmented Reality is, even after Glass has been around for almost a year. It’s 2014 and a good deal of the planet, while having access to smartphones, doesn’t have access to clean water or sanitation. You think they’re avidly discussing the ramifications of AI? How algorithms can better our world without replacing or supplanting any of our core humanity?
Why this discussion scares me and pisses me off so much is that I can’t provide a solution here that would rival Google literally buying up the aggregate, global uber-intelligence surrounding AI. I have no interest in halting the majority of AI work that’s helping to decode cancer DNA strands or capture online sex offenders. But I do feel it’s necessary to call BULLSHIT on the tech pundits or smarmy geekish types (myself included at times) who won’t let discussion around the ramifications of AI happen without making people feel like they’re backward tech-haters simply for asking.
- (Precedent) Most of us drive with GPS these days. Maps are gone. We’ve lost a lot of the angst around driving, but also the serendipity of discovering new places along the road.
- (Precedent) Most of us don’t remember telephone numbers any longer because we don’t need to. They’re stored in our phones. What else has been replaced that we don’t need to think about any longer in this way? People’s names outside of our close family and friends? Passwords?
- (Precedent) Dating is already more infused with algorithms than any other industry (besides the military). Right now the focus is largely on finding potential people for you to date who would be a good match. But soon the microphones in our smartphones will listen to our conversations and let us know when the person we’re dating statistically looks to be a bad match for our future. Here’s a text you’ll get on a first date: “Hey. So according to what she’s saying, based on a billion other speech patterns like hers in dating situations, she’s batshit. There’s a 90% chance she’ll be asking you to meet her parents next week and coming up with baby names on your third date. Take her home at your own risk.”
- (Future) It is dead simple to imagine parenting by algorithm or with AI. “Siri, should I spank my kids? What do other people in my digital circle of friends do?”
- (Future) Forget about surveillance as we know it: tools like Google Glass or other augmented-reality-enabled lenses can look at the pupils of your eyes to gauge your emotional response to your surroundings. Think about the experience in any typical office, where a lewd look by a worker at a colleague will result in an IM or email noting said behavior, followed by a warning or a pink slip.
I could come up with more of these, or you can watch Minority Report. However, here’s what you can also do:
- Learn more about AI. Most of the articles you’ll read end the same way the Guardian piece on this issue does: “Because the future is almost here. And it looks like it's going to be quite a ride.” That’s how you have to end a piece as a journalist writing about AI. You can’t write something like, “HOLY FUCK — THIS GUY IS WORKING WITH FUCKING GOOGLE TO LITERALLY CREATE THE BORG THAT WILL OWN OUR MINDS, OUR CHILDREN’S MINDS, AND OUR FUTURE. HOLY FUCKITY FUCKING FUCK.” (The Guardian won’t print words like “fuckity.”)
- Allow yourself to tell techie friends (me included), “I’m scared shitless by this stuff and I don’t see how you can ethically push the boundaries of this field. You are permitting the loss of what makes us human to happen faster than ever before by justifying it with language about its inevitability. That’s the same type of language used in most wars throughout history.”
- Allow yourself to not just ruminate about a Terminator-like future, but to ask and really think about what happens when machines achieve sentience and resources become scarce. Machines, as a rule, do require power. Do I think they’ll start shooting people who unplug them to run their appliances? No, but certain humans will be (are already) prioritizing which machines are more important than certain humans based on algorithms, individual biases, and fallible human will.
And let’s make ethics pay as well as tech, shall we? Can ethicists be hot? Can we get Pharrell Williams to do a ditty about ethics for a movie soundtrack?
Otherwise, in 2040 or so, a machine will be writing this piece, talking about the incident that mirrored Hiroshima back in the day. But it won’t give a shit what you or I think, as machines will have well surpassed us mentally by that point anyway. And the fact that I wrote this article likely means I’ll either have been killed, or hopefully heralded for working to explore an issue more deeply rather than just making jokes about its arrival, or rather than avoiding the fact that saying “we should manage the ethics around this before moving forward with the tech” while still moving forward with the tech is questionable ethics at best, and mendacious at worst.