A Byte-Sized History of Artificial Intelligence: The Good, The Bot, and The Ugly

Tarrin Skeepers
AI monks.io
4 min read · May 10, 2023

Once upon a time, in a not-so-distant past, some ingenious individuals looked at their hefty calculators and thought, “Eureka! Let’s make these hunks of metal think!” Thus began the saga of Artificial Intelligence (AI), filled with epic highs, catastrophic lows, and a dash of humour that’s almost as dry as your phone’s autocorrect attempts.

Our tale starts with an old chap named Alan Turing, who, in the 1950s, asked a question that would set the stage for AI: “Can machines think?” Spoiler alert, Alan: they can, but not always as we’d like them to. Still, his Turing Test became a benchmark for evaluating whether a machine can exhibit intelligent behaviour indistinguishable from that of a human. And no, your ‘smart’ toaster that burns your bread every morning doesn’t count.

Fast forward to 1956, a year that would go down in history as the birth of AI. A band of merry scientists, including John McCarthy, Marvin Minsky, Nathaniel Rochester, and Claude Shannon, convened at Dartmouth College. They didn’t just have summer fun; they coined the term ‘Artificial Intelligence’ and got the ball (or should we say, the bytes?) rolling.

But as in any epic story, our heroes faced adversity. The first ‘AI Winter’ hit in the 1970s when funding dried up faster than an iPhone battery. Apparently, teaching machines to think was a touch more complicated than they’d anticipated. Who’d have thought, right?

Despite this chilly setback, AI soldiered on. Enter expert systems in the 1980s: computer systems emulating the decision-making abilities of human experts. These guys were the ‘cat’s meow’ of the AI world... until they weren’t. Turned out, their knowledge was about as flexible as a frozen pizza.

Into the fray came machine learning, led by pioneers such as Arthur Samuel and Tom M. Mitchell. Machine learning is the art of getting computers to learn from data without being explicitly programmed for every case. In other words, they gave computers the ability to learn from their mistakes, something we humans are still working on.
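For the curious, here’s what ‘learning from data’ looks like in just a few lines of Python. This is a minimal sketch with invented numbers, assuming scikit-learn is installed; it illustrates the idea, and is certainly not anything Samuel or Mitchell actually wrote:

```python
# A toy "learning from data" example: fit a straight line to noisy points.
# Assumes scikit-learn and NumPy are available; the numbers are made up.
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical data: hours of practice vs. skill score.
hours = np.array([[1], [2], [3], [4], [5]])
score = np.array([2.1, 3.9, 6.2, 8.1, 9.8])

model = LinearRegression()
model.fit(hours, score)  # the actual "learning from data" part

# The model can now generalise to input it has never seen.
print(model.predict(np.array([[6]])))  # roughly 12: it picked up the trend
```

Swap in more data and a fancier model, and you have, in spirit, a good chunk of modern machine learning.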

AI also took a social leap, and earlier than you might think. We met chatbots like ELIZA, built by Joseph Weizenbaum way back in the mid-1960s, and her 1990s successor ALICE, who, for all their quirks, were better conversationalists than some humans. ELIZA, in particular, was so good at mimicking a psychotherapist that some users poured their hearts out to her. Sorry, ELIZA, but that’s more baggage than any bot should bear.
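ELIZA’s secret, by the way, was charmingly low-tech: spot a keyword, flip the pronouns, and lob the statement back as a question. Here’s a minimal sketch of that trick in Python; it’s a hypothetical toy, nowhere near Weizenbaum’s full script of ranked keyword rules:

```python
# A toy ELIZA-style responder: match a keyword pattern, swap the
# pronouns, and bounce the statement back as a question. Illustrative
# only; the real ELIZA used a far richer script of ranked rules.
import re

REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

def reflect(text: str) -> str:
    """Swap first-person words for second-person ones."""
    return " ".join(REFLECTIONS.get(w, w) for w in text.lower().split())

def respond(statement: str) -> str:
    match = re.match(r"i feel (.*)", statement, re.IGNORECASE)
    if match:
        return f"Why do you feel {reflect(match.group(1))}?"
    return "Tell me more."

print(respond("I feel nobody understands my code"))
# -> Why do you feel nobody understands your code?
```

It’s a party trick, but as those heart-pouring users proved, a surprisingly effective one.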

In 2011, we met Siri, Apple’s voice-activated assistant. Siri became famous for weather forecasts, restaurant recommendations, and of course, her sassy comebacks. But Siri wasn’t alone for long; Google’s Assistant, Amazon’s Alexa, and Microsoft’s Cortana soon joined the party, kicking off the voice assistant arms race.

The 2010s also saw AI start to dominate games that were once human strongholds. IBM’s Watson trounced two all-time champions at Jeopardy! in 2011, and Google DeepMind’s AlphaGo did the unthinkable in 2016 by defeating world champion Lee Sedol at Go, a game far more complex than chess. The lesson here? Never challenge an AI to a board game unless you’re prepared to lose.

So, here we are today, surrounded by AI in our phones, cars, homes, and even our fridges. We’ve come a long way since Turing’s day, but we’re still discovering what AI can do. The best part? The journey is far from over. Just hold onto your toasters, because if the past is any indicator, the future of AI promises to be a wild ride.

Yes, AI has its quirks. It can be as stubborn as a mule, it refuses to understand accents (I’m looking at you, Siri), and it sometimes suggests the most outrageous autocorrects. But let’s be honest, it’s those oddball moments that make it all the more endearing, right?

AI has been a part of some pretty revolutionary stuff too. It’s helping scientists crunch enormous amounts of data in search of life-saving medicines. It’s giving us self-driving cars that promise to make traffic jams a thing of the past (Fingers crossed!). And it’s even enabling machines to compose music, though we’re still waiting for the first AI-created Billboard №1 hit.

The years ahead will likely bring even more innovations. Maybe we’ll finally get those AI personal assistants we were promised, the ones that can book our flights, cook our dinners, and laugh at our jokes, no matter how bad they are. Or perhaps AI will tackle the big stuff, like solving climate change or predicting stock markets. Who knows, maybe it will even master the art of making a perfect latte.

One thing’s for sure, as we move forward into the brave, sometimes bizarre world of AI, it’s essential to remember the words of one of its founding fathers, Marvin Minsky: “No computer has ever been designed that is ever aware of what it’s doing; but most of the time, we aren’t either.”

So, let’s raise a toast to AI’s past, present, and future. To Alan Turing, to the Dartmouth Summer Research Project, to the tireless tinkerers who never gave up, and to the algorithms that brighten our days, whether by helping us find that long-lost file or by hilariously misinterpreting our voice commands.

In the grand adventure of AI, we’ve only just left the Shire. The road ahead may be fraught with ‘unexpected bugs’ and ‘AI winters,’ but it’s also filled with promise and potential. So buckle up, folks. If the history of AI tells us anything, it’s that we’re in for an exciting, unpredictable, and utterly fascinating journey. And personally, I wouldn’t miss it for all the bitcoins in the blockchain. To take the next step into the rabbit hole, read about the ABCs of AI here.

*All text and images were generated with the assistance of AI.
