Will you be replaced by AI?
This week I shared this drawing on my Think Clearly newsletter (join 8000 other readers and sign up here):
One reader responded:
“Mathias, I get the point you were making, but I find it super complacent. AI isn’t stopping at occupations with defined input and output. “Messy” is nothing to rest on. Some interesting examples in this story on medium.”
I know my illustration simplifies something that's much more nuanced (that's the value proposition of the newsletter), but I didn't quite feel I was being 'super complacent', so let me elaborate a bit on why I think there's truth to my point.
As I see it, the most successful applications of machine learning to date are based on exactly defined inputs and outputs. True, they are not defined through logic or parameters but by feeding the algorithm a large data set of examples; still, there's no ambiguity in this. You teach the algorithm what a cat looks like by showing it hundreds of photos that are definitively images of cats, and you train it to deliver a clearly defined output: an English word that names the object shown in the image.
So the machine knows the input (always an image) and the expected output (always a word plus some indication of the confidence level). Fixed input/output. The machine won't suddenly feel inspired by an image of a sunset and output a poem, or reflect on the implications of the sun going down and when, or if, it might rise again. It will always just give a word and a confidence level.
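That fixed contract can be sketched in a few lines of code. This is a toy illustration, not a real network: the "model" below just produces stand-in scores, and the label list is invented. The point is the shape of the interface, which never varies: image in, (word, confidence) out.

```python
import math
import random

LABELS = ["cat", "dog", "sunset"]

def model_logits(image):
    # Stand-in for a trained network's output scores. In a real
    # system these would come from learned weights, not a seeded RNG.
    rng = random.Random(0)
    return [rng.gauss(0, 1) for _ in LABELS]

def classify(image):
    """Fixed contract: image in -> (word, confidence) out. Nothing else."""
    logits = model_logits(image)
    exps = [math.exp(z) for z in logits]   # softmax turns scores
    total = sum(exps)                      # into probabilities
    probs = [e / total for e in exps]
    conf = max(probs)
    return LABELS[probs.index(conf)], conf
```

Whatever image you feed in, the return type is always a word from a fixed vocabulary and a number between 0 and 1; there is no code path that outputs a poem.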
You could of course teach it to write impromptu poetry if that's the output you wanted: create a data set of images and associated poems and give it thousands of examples of great poetry. And it will give you poetry… but it won't then feel inspired by all the poetry it writes and suddenly decide to write a script for a movie instead.
That's what I mean by input/output. The examples in the post referenced are all like that. The sci-fi movie is the same thing: thousands of past sci-fi scripts teach the machine what a sci-fi movie script looks like, and then it stochastically produces a mashup of them.
It’s all impressive and very useful. But it’s still operating with defined inputs and outputs.
It's also still deeply flawed. There's a great research paper showing some of the troubling implications for all neural-network-based machine learning systems once you have adversarial input, i.e. when a human actor who intends to fool the algorithm manipulates the input signal.
Look at this video
The video demonstrates the case very simply, but the whole paper is worth reading. It's more technical and detailed than the typical popular examples, and it explains why this has been a known problem for years with no reliable progress towards a solution. If anything, the techniques for fooling these systems are only getting more sophisticated.
As I see it, this was the same issue that led the Microsoft AI Twitter bot (which was supposed to interact with humans and learn from them) to quickly start spouting Nazi rhetoric and a lot of wildly inappropriate content, leading to it being shut down in less than 24 hours.
Of course, more complex tasks can eventually be automated too, so yes, there's no rest…