Is Artificial Intelligence Really Our Final Invention?

Cognilytica
Published Oct 3, 2017 · 4 min read
Image source: James Barrat

The AI Today podcast recently sat down with James Barrat, author of “Our Final Invention: Artificial Intelligence and the End of the Human Era,” for an interview to discuss his book and his thoughts on AI. Barrat discusses why he wrote the book three years ago, how far away he really thinks we are from human-level artificial intelligence, the warning bells recently being sounded about artificial intelligence, and why he thinks there will not be another AI winter. Below is an excerpt from the interview. You can check out the full podcast, and a full transcript, on the Cognilytica site.

AI Today: I’d like to get started by having you introduce yourself, tell us a little bit about your book, and tell us what else you’re doing in the field of AI.

James Barrat: I’m primarily a documentary filmmaker, and also an author and speaker. I got into artificial intelligence, or the study of artificial intelligence and the critique of AI, because I made a film about artificial intelligence about 17 years ago now. I interviewed Ray Kurzweil, Rodney Brooks, and Arthur C. Clarke, among others. Ray Kurzweil, of course, is now at Google working on engineering and the Google Brain project. He was very optimistic about AI and thought that it would usher in a utopian period in which most of mankind’s problems would be defeated, including mortality. Rodney Brooks was not quite that rosy, but he was still very optimistic; he thought robots and AI would be our partners, never our competitors. But Arthur C. Clarke, who was a scientist before he was a science fiction writer, said something like: we steer the future not because we’re the fastest creature or the strongest creature, but because we’re the most intelligent. And when we share the planet with something more intelligent than we are, it will steer the future. Up until then I’d been pretty besotted with AI, and I still am; I still think it’s a terrific set of technologies with a great deal of potential good. But at that point some skepticism entered my mind, and it just festered, and I started interviewing people who make AI, and ultimately came out with (my book) “Our Final Invention: Artificial Intelligence and the End of the Human Era.”

AI Today: One piece of feedback about your book was that you highlight a lot of challenges and problems, showing us the potential path to this vision of superintelligence, but you don’t really talk much about solutions, and it may be really hard to put the cork back in this bottle. We’ve already released the genie, and now it’s basically a matter of just dealing with it.

James Barrat: I use this example a lot: it’s like fission. In the 1920s and ’30s, the biggest, most respected physicists didn’t think nuclear fission was possible. And then it was, and then it was weaponized, and we incinerated two cities with bombs, and we held a gun to our own heads as a species throughout the whole nuclear arms race. And what do we have today? We have this insane dictator in North Korea threatening to use nuclear weapons. We had no maintenance plan for that technology. Right now we have no maintenance plan for this [AI] technology, and this technology is actually more sensitive than fission. This is the technology that invents technology. So I didn’t have any solutions, and I don’t pretend to have a lot of solutions now. I think one of the keys is probably getting the word out to a lot of people so that the government at some point steps in with regulation, and I’m the last person to recommend that for anything. First of all, how do you educate Congress about these technologies, and then how do you get any meaningful legislation passed? Right now it’s probably too early anyway, but if we just horse around like we did with fission, before we know it we’ll have some cataclysmic disaster.

AI Today: People want to assume that good actors are using AI to do good things, so they’ll feed it good data. But what happens when bad actors feed AI bad, malicious data and it learns from that? What happens, and what should we be doing to keep AI out of the hands of bad actors?

James Barrat: First of all, the good actors aren’t that good. As you mentioned, there are huge biases in data sets: sexist biases, racial biases, biases that keep minorities from getting bank loans. If you feed a neural net the pictures we have at hand, it will believe that all doctors are white men. So there’s a huge amount of potential abuse just in creating giant data sets and using big data. And then the good actors are not that good. Google has 400 lawyers because they get sued all the time; there have been privacy lawsuits, there have been copyright lawsuits. Google is a gigantic corporation that seems to have no real head; there are so many units that act independently. How do you keep that under control? And they have so much money that they shut down dissent. When critique comes from inside, those people get fired. They’ve had press: Forbes published an article that was critical of them, and then Google had them take it down, because they seemed to own a lot of Forbes. So I’m not sure we have a lock on who the good actors are.

We’d love to hear your thoughts on this podcast and this subject in general so join the discussion on our Facebook Group AI Today (https://www.facebook.com/groups/aitoday/).

Real-World, Industry and Adoption focused Market Research and Intelligence on AI. Find out more at: http://www.cognilytica.com/