The Artificial “Intelligence” Scam is Imploding

James Seibel
10 min read · Feb 17, 2020


Countless articles in the mainstream press prattle on about imminent job losses, labor automation, self-driving cars, entire industries losing hundreds of thousands of workers, and on and on and on… it’s total horse shit!

The AI industry is designed to raise dumb money from dumb investors and to enable tech CEOs to masturbate to press coverage calling them visionaries and futurists. The AI delusion is finally coming apart at the seams as its lies and exaggerations are revealed and its investors are forced to recognize losses of billions in invested capital.

Some recent headlines from the rapidly deflating AI bubble:

  • It’s 2020. Where are our self-driving cars? (Vox)
  • SoftBank’s $375 Million Bet on Robotic Pizza Went Really Bad Really Fast (Bloomberg)
  • CEO of Volkswagen’s Autonomous Driving Division Admits Full Self-Driving Cars ‘May Never Happen’ (The Drive)
  • How IBM Watson Overpromised and Underdelivered on AI Health Care: “IBM’s AI seemed poised to revolutionize medicine. Doctors are still waiting” (IEEE Spectrum)

This is only the beginning. While the outright theft of investors’ capital is pretty hilarious if you’re the type of person who takes pleasure in watching stupid rich people get conned by Adam Neumann and Elizabeth Holmes acolytes (and it seems that many do), what’s absolutely criminal is how misplaced AI-fear has caused widespread societal anxiety. People actually believe that their jobs will be taken next month by robots!

This ludicrous belief is so common that Andrew Yang, a presidential candidate, centered his whole campaign platform on “Universal Basic Income” (which he calls the “Freedom Dividend”), which would give every citizen $1,000 a month to compensate for the supposed oncoming tidal wave of automation and AI-caused job losses. As Yang states in his official FAQ:

Andrew Yang wants to implement the Freedom Dividend because we are experiencing the greatest technological shift the world has ever seen. By 2015, automation had already destroyed four million manufacturing jobs, and the smartest people in the world now predict that a third of all working Americans will lose their job to automation in the next 12 years. Our current policies are not equipped to handle this crisis.

You can relax. While Yang’s opinion is (sadly) widespread, it is just part of the massive delusion perpetuated by “the smartest people in the world,” who orgasm when Andrew Yang-types worship them and their nonsense futurist predictions.

If you are exhausted from AI-related anxieties, please read on for the contrarian viewpoint — it may give you some peace of mind!

The AI Investment Thesis

If you buy into the AI-hype bullshit, then the investment thesis is a no-brainer:

  1. AI is a disruptive technology that will eliminate hundreds of millions of jobs and entire sectors of labor by automating tasks that (previously) only humans were smart enough to do.
  2. Owners of new AI technology will reap out-sized rewards as labor is replaced with genius-level software and robots that ceaselessly learn and improve.
  3. Every industry will be affected, from truckers to doctors to lawyers to pizza shops.
  4. Therefore, throw as much capital as possible at AI companies targeting every industry — it’s a sure thing!

This sounds great! Unfortunately…

“Artificial Intelligence” is Bullshit

Artificial Intelligence (AI) is a rebranding of the term “Machine Learning” (ML). “Neural Networks”, the most popular technology used for AI, evokes the impression of a biological brain that constantly learns from the knowledge (data) we feed into it.

Artificial “intelligence”, “neural” networks, machine “learning” — all of these terms are marketing speak designed to fool the naive investor. They imply that this technology is literally creating human brains in software. However, this is 100% bullshit and could not be farther from the truth.

Neural networks are a clever trick. In a nutshell, neural nets are a mechanism for automated decision making. Neural nets are “trained” by feeding in data (e.g. images) and telling the network what the correct answers are (this image is a cat, that one a duck, this one a fish…). The network itself is a series of nodes, each acting like a gate. As inputs (images) flow through this network, each gate determines where they should flow in order to reach the correct output (this image is a duck!). “Training” the network means modifying each gate to more accurately lead to the correct output.
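To make the gates-in-a-flowchart idea concrete, here is a minimal training loop in plain Python with numpy. It is a sketch, not anyone’s production system: the 2-D points stand in for images, the two weight matrices play the role of the “gates,” and the network size and learning rate are arbitrary choices.

```python
import numpy as np

# Toy stand-ins for labeled images: 2-D points answered 0 ("cat") or 1 ("duck").
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)  # the "correct answers" we supply

# Two layers of weights: these numbers are the "gates" that training adjusts.
W1 = rng.normal(scale=0.5, size=(2, 8))
W2 = rng.normal(scale=0.5, size=(8, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(2000):
    # Forward pass: each input flows through the gates to an output probability.
    h = sigmoid(X @ W1)
    p = sigmoid(h @ W2).ravel()

    # "Training": measure how wrong each output was, then nudge every gate
    # (via the gradient) so the same inputs flow closer to the right answers.
    d2 = ((p - y) * p * (1 - p))[:, None] / len(X)
    d1 = (d2 @ W2.T) * h * (1 - h)
    W2 -= 0.5 * (h.T @ d2)
    W1 -= 0.5 * (X.T @ d1)

print("accuracy on the training data:", ((p > 0.5) == y).mean())
```

That nudging of gates, repeated a few thousand times, is the entirety of “training.” No comprehension is involved at any point.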

Visualization of a Neural Network

The more nodes in the network and the more data it’s fed, the more the network knows, and the more likely it is to properly categorize a new image it has never seen before.

To summarize, a neural network operates like a flowchart, with each gate making a decision about where to send incoming data (an image) as it travels toward the final decision (“this image is a duck”). It has some probability of correctly identifying new data, but it is quite possible that a new image of a duck gets incorrectly identified as a cat (or a dog, or a fish).

That’s it! That’s the whole magic of neural networks, and by extension the entire modern incarnation of AI. The more data you feed a neural network, the more accurate its predictions become, provided that future data is similar to previous data.
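That “provided” clause is easy to demonstrate. Below is a toy experiment using scikit-learn’s MLPClassifier on synthetic “two moons” data; the sample counts and network size are arbitrary choices for illustration. As long as the new data comes from the same distribution as the training data, more data reliably buys more accuracy:

```python
from sklearn.datasets import make_moons
from sklearn.neural_network import MLPClassifier

# Held-out test set drawn from the SAME distribution as the training data.
X_test, y_test = make_moons(n_samples=2000, noise=0.2, random_state=1)

for n in (50, 500, 5000):
    X_train, y_train = make_moons(n_samples=n, noise=0.2, random_state=0)
    net = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
    net.fit(X_train, y_train)
    print(f"{n:5d} training examples -> test accuracy {net.score(X_test, y_test):.3f}")
```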

And therein lies the problem! Artificial “intelligence” is not actually intelligent, as anyone would judge intelligence. AI cannot learn — it only recognizes patterns in the data fed into it. AI cannot adapt — if it encounters something new, it doesn’t understand what it’s seeing. AI has no capacity to “understand” — there is nothing intelligent going on under the hood of an AI program, no deeper context of meaning.

A human can see a duck covered in pink paint and know intuitively that it’s still a duck, albeit one covered in pink paint. A neural network trained on 1 billion images of ducks, but never one covered in pink paint, will not identify the new pink duck as a duck. The neural network does not truly understand what a duck is — it is a flowchart that makes decisions based on previously seen data, nothing more and nothing less.
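Here is the pink duck rendered as a toy experiment (the clusters below are invented stand-ins for image features, not real duck data). The network nails inputs that resemble its training data and confidently botches the one that doesn’t:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# "Ducks" cluster around (0, 0) and "cats" around (4, 4) in a made-up
# two-number feature space, standing in for pixels.
ducks = rng.normal(loc=(0.0, 0.0), scale=1.0, size=(500, 2))
cats = rng.normal(loc=(4.0, 4.0), scale=1.0, size=(500, 2))
X = np.vstack([ducks, cats])
y = np.array([0] * 500 + [1] * 500)  # 0 = duck, 1 = cat

net = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
net.fit(X, y)

# An ordinary duck, i.e. features like the training data: classified correctly.
print(net.predict([[0.2, -0.1]]))       # -> [0] (duck)

# A "pink duck": still a duck, but its features land far outside anything the
# network has seen. It has no concept of duckness, only of where old data
# fell, so it happily calls this a cat with near-total confidence.
print(net.predict([[8.0, 8.0]]))        # -> [1] (cat)
print(net.predict_proba([[8.0, 8.0]]))  # probabilities close to [0, 1]
```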

AI — What is it Good For?

If you’re still with me, you now know that AI is not true intelligence. It is a relatively new and powerful tool in the computer science toolbox, but it’s not an artificial brain. So what is AI (neural nets) actually good at?

AI performs extremely well at games. Games are closed systems with strict rules that never change. AI enthusiasts hailed the victory of a program defeating the top human player in the board game “Go” — a game with literally 2 rules — as a stunning achievement of AI.

While this is indeed a victory for computer science, the difference in magnitude between a board game and what AI tech visionaries promised us — self-driving cars, robo-doctors, and widespread job losses by next Monday — cannot be overstated.

The real world is nothing like a game. It doesn’t have simple rules, or even any rules at all (beyond the laws of physics). If an AI program crashes while playing Go, no damage is done. If an AI program driving a car crashes, the car actually crashes — and people die.

It took a team of the world’s best computer scientists years of work, armed with the best computers ever invented — and all they accomplished was the ability to win at a game with 2 rules on a flat board with 361 squares. An impressive achievement, but not the same as driving a car! Not even close!

AI performs best when you remove as many variables as possible. If Go were not just 2 rules on 361 squares, but also carried the chance of a child randomly appearing and rearranging all of the game pieces at any moment, it would certainly be a different game! How would an AI handle that?

Well, the rules of Go don’t allow for pieces to be rearranged by wayward children. Go has extremely strict rules that eliminate any potential for random events. And this is the defining difference between games — where AI is extremely successful — and the real world — where AI is an abject failure.
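You can watch this difference in miniature. Go’s game tree is far too large to brute-force (hence the years of effort and the neural networks), but tic-tac-toe is small enough that a few lines of Python can search every possible future. The sketch below works only because the rules guarantee that nothing outside those rules can ever happen:

```python
# Tic-tac-toe: a closed system even smaller than Go. Fixed board, fixed
# rules, no wayward children, so the entire game tree can be searched.
WIN_LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8), (0, 3, 6),
             (1, 4, 7), (2, 5, 8), (0, 4, 8), (2, 4, 6)]

def winner(board):
    for a, b, c in WIN_LINES:
        if board[a] is not None and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    """Best achievable score for "X" from this position, assuming perfect play."""
    w = winner(board)
    if w is not None:
        return 1 if w == "X" else -1
    moves = [i for i, cell in enumerate(board) if cell is None]
    if not moves:
        return 0  # draw
    scores = []
    for i in moves:
        board[i] = player
        scores.append(minimax(board, "O" if player == "X" else "X"))
        board[i] = None
    return max(scores) if player == "X" else min(scores)

# Exhaustively solving the game from the empty board: perfect play is a draw.
print(minimax([None] * 9, "X"))  # -> 0
```

The moment the environment can do something the rules don’t allow, this kind of exhaustive certainty evaporates. That is the real world in a nutshell.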

Self-Driving Cars — the King of AI Lies

Car driving is one of the most complicated tasks that humans casually do on a daily basis. It’s an extraordinarily dangerous task — humans driving 3,000 pounds of metal at 70 miles per hour while juggling their cell phones and screaming children — and it involves a huge amount of intuition, learned behavior, and the ability to deal with rare events.

Think about your entire history of driving. Have you ever seen a car accident happen in front of you? A fallen tree? A downed power line? A flooded road? Heavy fog? Black ice? A dog running into the street? Or a child?

I’ve seen groups of motorcycles weaving around traffic. Once, while I was driving in Southie (I live in Boston), a drunk driver fell asleep and rolled into the intersection, forcing me to swerve suddenly to avoid being hit. Speaking of Boston — do you have any idea how crazy our roads are?

Driving in Boston is a Wonderful Experience

Boston during construction season is full of detours, road-workers, painted lines that are no longer accurate, and surly Bostonians jaywalking at every opportunity. During Boston’s other season (winter) it’s full of potholes, ice, snow piles, painted lines that are no longer accurate, and surly Bostonians jaywalking at every opportunity.

Driving in Boston is a passion sport. Like Formula 1 racers, Boston drivers see a green light and accelerate to maximum speed immediately. There is zero patience for slow drivers. It’s a peer-reviewed fact that middle fingers and screamed obscenities per mile driven are higher in Boston than anywhere else.

I go on this tangent to illustrate a point — How in the ever-loving fuck is a car going to drive itself in Boston? It’s simply not possible. Properly driving in Boston requires breaking every law on the books. The moment a law-abiding self-driving car arrives here is the moment riots break out in the streets. City Hall will ban them immediately to protect the mayor’s reelection.

“But wait!” you ask me. “Maybe self-driving cars can’t work in Boston, I’m convinced you guys are certifiably insane. But surely they will work in [super boring suburb in Arizona]?”

“No, you silly ignoramus,” I respond. “Haven’t you read my rant thus far? You are still deluded by the lies of self-gratifying transhumanists. Have you been duped by yet another Yuval Harari book?”

Let me remind you that:

  1. AI is not intelligent.
  2. It took millions of dollars and years of effort for the smartest computer scientists in the world to make a program that can win at Go, a game with 2 rules and 361 squares.
  3. Driving a car is one of the most complex tasks humans regularly do, whether in Boston or [super boring suburb in Arizona].

The theory behind self-driving cars is that if you load up a minivan with a million sensors and a supercomputer and drive it like a grandma, you can make it safe for any situation that comes at it.

But this is impossible! No matter how much data you train a neural network with, you will never encounter all possibilities. You can’t restrict the real world like a game. Cars don’t drive on 361 squares of a flat board — cars drive on 361 thousand streets over 361 million miles of different terrains, elevations, and weather conditions. You can’t force humans (or children, or dogs, or tree limbs) to follow strict rules. There is no data set that can supply all of the knowledge required to predict all future events.

And even if a self-driving car could come close — if a self-driving car loaded with $2 million of sensors and computers could manage to drive like a grandma on the safest, driest, most well-paved streets in the most boring Arizona suburb — it would never be close enough. There will always be the toddler who runs into the road.

If self-driving cars have any chance of succeeding, it will be by restricting them to locations that eliminate as many sources of random events as possible: a private track with no human drivers, no pedestrians, and no bikers, where every external guide (lights, stop signs, speed limits, construction workers) is equipped with hardware that speaks directly to the car. This could be a gated retirement community, or perhaps a planned city designed from the ground up to work with these new cars at the cost of free human movement — no walking, no biking, no children playing, and no ability to drive yourself anywhere. Is this a world we want to live in? It’s not for me, but maybe it is for the bar drunk who needs a safe ride home, or the pasty tech millionaire wuss who finds the real world too dangerous and tells us that outlawing freedom in favor of robot taxis is for our own safety.

Self-driving cars will never work in our current streets. Unless we make our world as close to a board game as possible — one with strict rules, defined squares, and perfectly rational actors — it’s just not a solvable problem, no matter how many billions of dollars investors keep throwing at it.

Conclusion

Artificial intelligence is the rebranding of previous machine-learning techniques to trick investors into believing the technology is literally recreating biological human brains. In reality, the current state of the art in AI (neural networks) makes predictions with a certain degree of accuracy, provided the data fed into it is similar to the data it was trained on.

AI cannot learn, AI is not capable of true understanding, and AI’s actual use-cases in 2020 are far less exciting and much more mundane than advertised. The breathless hype written about AI in the popular press and shamelessly repeated by tech founders has enabled the vast fleecing of rubes in the investment community and sown unnecessary anxiety in the general population.

While I don’t deny that neural networks are fascinating and have many potential use-cases, the promised benefits and disruptive innovations have been overstated by at least 1 billion percent, and the speed at which these society-destroying changes are supposed to arrive is the biggest lie ever perpetrated on society by the tech industry. Andrew Yang ran his entire presidential campaign on the idea of a “Universal Basic Income” of $1,000 per month given to every citizen as compensation for the supposed oncoming robot apocalypse. This “robocalypse” is not going to happen, and the fact that so many expect it shows how technologists have needlessly woven such anxieties into the fabric of popular thought.

Next time someone casually mentions how AI and robots are on the cusp of eliminating all of our jobs, gently remind them to brush their teeth, because their breath reeks of horse shit.

James Seibel, February 2020
