Artificial Stupidity: Behind the Story

If you haven’t already, you should read Artificial Stupidity’s first chapter before continuing.

Artificial intelligence is a subject of much debate at the moment, and one I’m personally interested and involved in. The thing about intelligent machines is that they’re so much more than we give them credit for: at high enough levels of intelligence, they edge towards being human. Not only is that a scary concept, but it’s also one I wanted to write about, because it’s so interesting. Artificial Stupidity is my short story on the topic.

Perhaps the best way to summarise my plan for Artificial Stupidity is to write it a blurb.

56 years ago, intelligent machines destroyed London. Now, in 2156, a small team of experts is tasked with creating a new breed of AI — but can it really be made safe?

Something like that. Essentially, the story focuses on a small team of experts, alienated for the work they do, who are trying to make AI safe for public use. The issues surrounding AI are there in abundance, and I’m trying to explore them to create what I hope is a decent short story.

As to how I’m doing it — well, you’re reading from Universe Factory here. Worldbuilding Stack Exchange has a whole section dedicated to questions about artificial intelligence, which is a pretty good start.

The AI in my story — Logie — is a self-improving AI. There’s plenty of material around the Internet about the dangers of that, and about the dangers of giving an AI supposedly benign goals such as creating paperclips. That’s a major part of the story: how does this team recognise and overcome those dangers? I haven’t yet planned that out, but I’m sure that in doing so I’ll be asking the folks over on Worldbuilding to suggest ideas and answer some questions of mine.

The previous-disaster angle is an interesting one, too. It’s surprising that just 56 years after AI killed perhaps 8 million people in London, the world is clamouring for it again. The expert team know that, and they know it’s their heads on spikes if they get it wrong. That has to affect their behaviour, which is an issue I might introduce at some point.

I think I’ve made another good choice in setting this story in the future. 2156 is a long way off, and who knows what might be around by then? Current predictions range from “the human race will be extinct by then” to Back to the Future-esque leaps in technology and society. Well, it’s clear enough that in my world the first of those predictions isn’t true, but I don’t think the second will be, either.

The setting itself is Earth. That’s fairly boring, when you consider I’ve got the expertise of a few thousand worldbuilders at my fingertips (literally: all I have to do is type up a question and worldbuilding ideas come rolling in). However, Earth in the future could be quite different from how it is now, and over the coming chapters I intend to introduce settings beyond the grey steel development warehouse to make that difference more pronounced.