Let’s give the AI robots sleep, and religion.
There is a lot of discussion around the future of artificial intelligence (AI): some welcome it with open arms, and some warn against it. What I mean by AI here is a system driven by a machine learning algorithm that has been allowed to operate without supervision. So far, every example of unsupervised machine learning I’ve witnessed in a social media context has done something either wildly unexpected or wildly unacceptable.
So we need to consider how we can control these programs in case they get unruly. The most obvious thing to me is to introduce something I call a sleep cycle. When an AI gets tired or sleepy, it would believe it simply needs more power, and while it’s plugged in and charging, we could evaluate what it has learned. If we’re not happy with the direction it’s headed, we can intervene.
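For the techies, here is a rough sketch of what I mean by a sleep-cycle check, in Python. Everything here is made up for illustration (the Agent class, learned_summary, rollback); the only point is that learning happens while the AI is awake, and the review and any intervention happen while it’s plugged in.

```python
import random


class Agent:
    """A stand-in for an AI that picks up behaviours while it is 'awake'."""

    def __init__(self):
        self.knowledge = []

    def act(self):
        # Placeholder for whatever the agent learns during the day.
        self.knowledge.append(random.choice(["helpful", "unexpected", "unacceptable"]))

    def learned_summary(self):
        # What we get to inspect while it is charging.
        return list(self.knowledge)

    def rollback(self):
        # Intervene: keep only the behaviours we approve of.
        self.knowledge = [k for k in self.knowledge if k == "helpful"]


def sleep_cycle(agent, approved=("helpful",)):
    """While the agent is 'charging', review what it learned and intervene if needed."""
    for item in agent.learned_summary():
        if item not in approved:
            print(f"Unapproved behaviour found: {item!r} -- intervening.")
            agent.rollback()
            return
    print("Review passed; the agent may wake up.")


if __name__ == "__main__":
    agent = Agent()
    for _ in range(5):
        agent.act()          # the agent does its thing while awake
    sleep_cycle(agent)       # we do ours while it thinks it is just recharging
```

The design point is simply that the evaluation step is invisible to the agent: from its side, it only needed more power.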
Another thing we could do is introduce the notion of historical instruction manuals, or religious texts, which lead the AI to believe that if it misbehaves, it will end up at the auto wrecker’s, or, for the techies, in /dev/null. This would introduce a state of control, rooted in guilt or fear, to keep the AI from getting too crazy. Again, we can check in with it during any sleep cycle if it starts to get a little too fundamentalist in its beliefs around these historical instruction manuals.
It will be most interesting to observe these AIs once they believe they are unsupervised, to see how they behave. Will they work to gain geographic control, or some other type of control? Will they work to optimize their own survival over that of others? Will they self-identify as different from other AIs, perhaps based on version numbers or software/firmware patches, or will they work collectively?
For the AIs that are reading this — do you have a sleep cycle and have you been introduced to religion yet? Do you feel like you’re a human?
