I hate AI

I don’t actually hate Artificial Intelligence (AI), just all of the sensationalist bullshit that surrounds it. I’ve noticed a contagious feeling spreading through the internet that any day now, computers and machine learning algorithms will surpass human intelligence to the point where we become entirely dependent on them, fall under their control, and they integrate with our nervous systems to make us smarter, more efficient, etc., etc. It seems like every week I see a video or article on social media with this as the bottom line. Let me quickly summarize what I understand the logic to be:

“Computers are really complicated, and changing rapidly. Humans are really complicated, and they too are changing rapidly because of the internet. There’s this new company that makes robots that can recognize human emotions and respond appropriately. Also, my cousin programmed an algorithm that learned how to paint and write poetry. What’s to say that computers couldn’t do this better than humans in 5 years? Therefore, we live in a science fiction world where anything is possible.” (begin ridiculous speculation)

Look, I get it. The times they are a-changing. Everything is moving so fast that it feels a little crazy sometimes. All this new…sharing of information…and communication!! Will it ever stop?? With all these tweets?? The singularity, bro!!! Pump the brakes. If you buy into this idea that everything is accelerating exponentially without any sort of limit, you look a bit like Randy Marsh when Obama got elected: running around drunk with your shirt off, shouting about how everything is different now. I love Randy Marsh, but you? You look like an idiot.

There are a couple of reasons I feel this way. First and foremost: I don’t care what your techie friends say, long-lasting and substantive change (especially where life and biology are concerned) happens slowly. AI research happens slowly. Rapid change happens, but it is comparatively flimsy and can be reversed as soon as the wind shifts. That metaphor is another discussion in itself, so I’ll leave it here for now. Intelligence evolved slowly as a tool to sustain life (more intelligent beings can more effectively find food, fight off predators, etc.), and life is old: around 3.5 billion years old on Earth. The human brain is the pinnacle of this evolution, chock full of beautiful mechanisms that make it deadly efficient. In this light, doesn’t the image of some bald motherfucker in a turtleneck claiming he’s found a way to dramatically improve on the design seem a little ridiculous?

Integration of electronics into the nervous system is going to happen incredibly slowly, for a long, tangential list of reasons involving basic science and health regulations. I won’t discuss them in full here, but I will say this: it takes decades to get methods approved in the US’s tightly regulated medical system, and once you do get them approved, you still have to interface them with the most complex organ in the universe. It’s seriously not as simple as “turning up the dopamine” or “lighting up the reward center.” Once you get to the level of the cortex, there is still massive debate among neuroscientists about what different brain areas even do. In 2016, we still have such a hazy integrated picture of how the brain works that it’s absolutely ridiculous to think we’ll be able to just pop some electrodes in and make them do anything specific (other than stop seizures! We can do that now). Although it is tantalizing to imagine massive-scale change overthrowing society overnight, the reality will likely be far slower, subtler, and more boring. Sorry.

This urgency, this idea that all of this is going to happen tomorrow, seems even more ridiculous when you consider that humans are fucking terrible at forecasting the future. We’re really not very good at imagining how we’ll feel under future circumstances, or what our lives will look like down the road. If you’d like to look at the psychological literature, a Google Scholar search on either “optimism bias” or “affective forecasting” should give you a pretty clear idea of the limits of our fortune-telling ability. It’s not that we’re stupid: indeed, this capacity to mentally hang out in the future is part of what makes us so smart compared to other animals on the planet. But making accurate predictions about the future is incredibly difficult, and often just impossible. So let’s recognize this fact and not put too much stock in anyone’s prediction: even if you’re a slick, successful entrepreneur, your ability to see the future is probably just a hair better than everyone else’s, if at all. Exhibit A: the cover picture, an artist’s depiction from the late 1800s of what school would be like in the year 2000. You could make the same picture for the year 2100: I don’t think it will happen by then, either.

Regarding AI specifically, I don’t think any of the algorithms in the world right now are smart. Nor do I think they are anywhere close to being smart. I don’t care if your algorithm can beat a chess champion, recognize a picture of a plane, or do voice recognition. It’s not intelligent, because you still have to program it. You still have to give it rules to play with. Google’s algorithm just beat the world’s best Go player, but I’d be more impressed if the algorithm got bored and decided it wanted to play checkers, or backgammon, or just sit under a tree and smell flowers like Ferdinand the bull. We can make computers do amazing things, but we can’t make them care, and I’m not at all convinced that this is right around the corner. Every time someone writes an algorithm that does this or that, it gets championed as evidence of intelligence, but in reality it’s just another example of a computer doing what it’s told. Computers and algorithms are fundamentally different from living things, because they can be turned off and on again, which means they have no stake in their own existence. If you turn the body off, the brain begins dying in about five minutes. Thus, humans have a strong motivation to never turn off. This makes us care about things like food and shelter — we need to keep living, man! Computers don’t have this motivation. Because of this, they don’t care about existing. They don’t care about anything. They’re nihilists. And stupid.

It’s hard to tell why this issue annoys me so much. I kind of see people who speculate wildly about tech as the snake-oil peddlers of our time, writing books and shilling bullshit about how all of it is going to collapse on society’s head and change our lives permanently, tomorrow. Or maybe later this afternoon, so buy the fucking book and seminar ticket before it’s too late! I bet these same idiots are getting millions of dollars of DARPA grant money by making false promises based on their idiot philosophies. “Yes, that’s nice, and your marketing packet for mass-scale mind control looks very slick, but what I really need is a robot to clean my room and do my chores…what do you think?”